F5 Cloud Services

BIG-IP AWS Edition

I would like to rewrite the Host header depending on the selected pool member. I tried the following:

when LB_SELECTED {
    if { [LB::server addr] contains "194.76.212" } {
        HTTP::header replace "Host" "ff.geobasis.de"
        #log local0. "Node FF: [LB::server addr]"
    }
    if { [LB::server addr] contains "194.76.232" } {
        HTTP::header replace "Host" "zit.geobasis.de"
        #log local0. "Node ZIT: [LB::server addr]"
    }
}

It does not work. Any ideas for a solution?

Load balancing using an API

Hello team,

We have a number of hosts running behind the F5. Every host runs a few services. One particular service can report its free memory through an API we developed:

GET http://hostname/myservice/usage

API response:

{
  "freeMemory": 369959592
}

Is it possible to consume this API in the F5 and load balance accordingly? For example, if freeMemory is below a threshold, no request should be sent to that host for the time being; once freeMemory rises back above the threshold, the F5 should send requests to that host again. How can we load balance in the F5 through such an API?

Note that we don't want to mark the server/host status Up or Down. We just want to make sure that this particular service has enough memory to take up the next memory-intensive request. We know about Dynamic Ratio load balancing, but that considers the overall health of the host; we want to load balance based on the status of one service out of the several services running on the host.

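One possible direction, sketched below purely as an illustration rather than an F5-documented integration: run a small script on a schedule that polls the service's usage API and adjusts that member's ratio through the BIG-IP iControl REST API, so a low-memory host receives only a small share of new requests without ever being marked down. The BIG-IP address, credentials, pool name, member name, and threshold below are all placeholder assumptions.

#!/bin/bash
# Hypothetical sketch: weight a single pool member by the service's reported free memory.
# All names, addresses, and credentials are placeholders.
BIGIP="https://<big-ip-mgmt-address>"
CREDS='admin:<password>'
POOL="~Common~myservice_pool"
MEMBER="~Common~10.1.1.10:80"
THRESHOLD=500000000          # bytes of free memory considered "enough"

# Ask the service itself how much memory it has left.
FREE=$(curl -s http://hostname/myservice/usage | jq -r '.freeMemory')

# Give the member a minimal share of new traffic while memory is low,
# and restore its normal weight once memory recovers.
if [ "$FREE" -lt "$THRESHOLD" ]; then
  RATIO=1
else
  RATIO=10
fi

# Ratio load balancing must be selected on the pool for this weight to have any effect.
curl -sk -u "$CREDS" -H "Content-Type: application/json" \
  -X PATCH "$BIGIP/mgmt/tm/ltm/pool/$POOL/members/$MEMBER" \
  -d "{\"ratio\": $RATIO}"
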
Is one self-IP enough to health-check a number of nodes?

Hi guys,

I am working on a new setup where I have an F5 VM deployed one-armed. The device is currently standalone and configured with the following:

1. A SNAT pool instead of Automap
2. A single self-IP
3. A route via the self-IP's gateway

The VIP and the nodes are in a separate subnet. I have currently configured two nodes to test, and I can see the health checks are sourced from the single self-IP that I have configured. Is it okay to use a single self-IP to monitor all nodes in different subnets without running into an issue such as port exhaustion or any other known problem? I have added 20 IPs to the SNAT pool; can I use the SNAT pool for health checks and data traffic instead of the single self-IP? What is the best practice?

Redis Server Unprotected by Password Authentication

Solution: enable the 'requirepass' directive in the redis.conf configuration file.

Check whether Redis is working on the servers:

$ redis-cli ping
PONG

#requirepass "xxxxxxxx"   -- change the password and uncomment the directive.

/etc/init.d/redis-server status
/etc/init.d/redis-server stop
/etc/init.d/redis-server start

The solution above is for a single server. What is the solution for a cluster of Linux servers where there are multiple configuration files, as given below?

config/redis/redis_121.conf
config/redis/redis_122.conf
config/redis/redis_123.conf
config/redis/redis_124.conf
config/redis/redis_125.conf

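One way to approach the cluster case, sketched below as an assumption rather than a verified procedure, is to apply the same requirepass change to each of the listed configuration files in a loop and then restart (or CONFIG SET) each instance. The password value and the restart mechanism are placeholders and will differ per environment.

# Hypothetical sketch for the multi-node case; password and paths are placeholders.
REDIS_PASS='xxxxxxxx'
for conf in config/redis/redis_121.conf config/redis/redis_122.conf \
            config/redis/redis_123.conf config/redis/redis_124.conf \
            config/redis/redis_125.conf; do
  if grep -q '^[#[:space:]]*requirepass' "$conf"; then
    # Uncomment the existing directive and set the new password in place.
    sed -i "s|^[#[:space:]]*requirepass.*|requirepass \"$REDIS_PASS\"|" "$conf"
  else
    # No directive present yet: append one.
    echo "requirepass \"$REDIS_PASS\"" >> "$conf"
  fi
done
# Each redis-server instance must then be restarted against its own config file
# (or updated at runtime with CONFIG SET requirepass) for the change to take effect.
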
CSS patch for the new DevCentral Forums

Hi folks,

I've dug into the CSS of the new DevCentral forum and spotted some minor tweaks to make the forum more usable for daily active users. I've mainly focused on expanding the Question & Answer area, disabling the default "Show more" truncation, and disabling line breaks in code snippets to make them easier to read. Below is the CSS patch, which can be imported into Chrome add-ons such as Stylebot (via the "Edit CSS" button) to persistently override the look & feel of the new DevCentral site.

/* Increase the width of the DevCentral Forum question & answer area to 100% */
.comm-layout-column { width: 100% ; }

/* Align replies and comments to the very left position. */
.slds-comment__content { margin-left: -50px ; }
.comment__footer { margin-left: -50px ; }

/* Less indent for comment replies */
.forceChatterFeedback.threaded-discussion .cuf-commentLi .cuf-commentLi { padding-left: 1rem ; }

/* Always expand comments and remove the "Show more" button */
.feedBodyInnerTruncated { max-height: 100% ; }
.forceChatterFeedBodyText .fadeOut { display: none ; }

/* Disable line breaks in code snippets and use a scroll bar instead. */
.forceChatterFeedBodyText code { overflow-x: auto ; white-space: pre ; width: 100% ; }
.forceChatterFeedBodyText code ol.linenums { min-width: calc(100% - 40px) ; width: fit-content ; }

Note: If you experience any formatting issues outside of the DevCentral Forums, let me know. I have not verified the changes on every single DevCentral sub-page.

Note: If you have other ideas to optimize the DevCentral Forum, let me know. I'm happy to integrate your ideas into this post.

Cheers,
Kai

AS3 PATCH method to add a new pool?

AS3 noob here. I've been successful using POST to create the partition, app, VS, and pool. Now I simply want to add a new standalone pool to the tenant, not attached to a virtual server (because I'm using dynamic pool routing via DG), using the PATCH method. It should be very easy, but I'm seeing some interesting errors, which I'm sure are related to schema formatting. Any advice? The restnoded logs were not much help. The payload (sent with the PATCH method) follows, along with the error. I also tried adding to the schema without the app, but got the same result.

[
  {
    "path": "/Sample/A1",
    "op": "add",
    "value": {
      "web_pool_new": {
        "class": "Pool",
        "monitors": [ "http" ],
        "members": [
          {
            "servicePort": 80,
            "serverAddresses": [ "192.0.1.20", "192.0.1.21" ]
          }
        ]
      }
    }
  }
]

{
  "code": 422,
  "errors": [
    "/Sample/A1: should have required property 'class'"
  ],
  "declarationFullId": "",
  "message": "declaration is invalid"
}

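One thing worth trying, sketched below as a guess rather than a confirmed fix: the 422 suggests AS3 revalidates the Application object at /Sample/A1 after the patch and no longer finds its class, so pointing the JSON Patch path at the new pool itself and making the value the Pool declaration leaves the Application object intact. The BIG-IP host and credentials are placeholders.

# Hypothetical sketch of the PATCH call; adjust host, credentials, and names.
curl -sk -u admin:'<password>' \
  -H "Content-Type: application/json" \
  -X PATCH "https://<big-ip>/mgmt/shared/appsvcs/declare" \
  -d '[
    {
      "op": "add",
      "path": "/Sample/A1/web_pool_new",
      "value": {
        "class": "Pool",
        "monitors": [ "http" ],
        "members": [
          { "servicePort": 80, "serverAddresses": [ "192.0.1.20", "192.0.1.21" ] }
        ]
      }
    }
  ]'
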
F5 Kubernetes BIG-IP Controller or CIS not connecting to Azure BIG-IP deployment

I have started a POC with the BIG-IP Azure deployments; the deployment succeeded and I have accessed it and set the password. I've deployed the Helm chart for CIS, but the pod fails to start. I've tested connectivity to the Azure BIG-IP deployment from a separate pod in the same namespace, and it authenticates and returns the correct info. I've validated that the Azure BIG-IP credentials are properly formatted in a secret and that the secret is mounted in the CIS pod. Here is the pod log with the logging level set to debug:

2021/10/04 21:21:39 [DEBUG] No url in credentials directory, falling back to CLI argument
2021/10/04 21:21:39 [INFO] [INIT] Starting: Container Ingress Services - Version: 2.5.0, BuildInfo: azure-465-1952a80a2165b7fc2d3561795ad09d1eb8615136
2021/10/04 21:21:39 [INFO] TeemServer: product.apis.f5.com
2021/10/04 21:21:39 teemClient: {{CIS-Ecosystem CIS/v2.5.0 df103609-7748-43e4-95a4-6631030e67d0} mmhJU2sCd63BznXAXDh4kxLIyfIMm3Ar product.apis.f5.com}
2021/10/04 21:21:39 [DEBUG] digitalAssetId: 950e75d5-7fe0-88bc-eb3c-d654ebb4de47
2021/10/04 21:21:39 [DEBUG] telemetryDatalist: [{"Agent":"as3","ConfigmapsCount":0,"DateOfCISDeploy":"2021-10-04T21:21:39.452535893Z","ExternalDNSCount":0,"IPAMSvcLBCount":0,"IPAMTransportServerCount":0,"IPAMVirtualServerCount":0,"IngressCount":0,"IngressLinkCount":0,"Mode":"cluster","PlatformInfo":"CIS/v2.5.0 K8S/v1.19.11","RoutesCount":0,"RunningInDocker":false,"SDNType":"calico","TransportServerCount":0,"VirtualServerCount":0}]
2021/10/04 21:21:39 [DEBUG] ControllerAsDocker: #{docker}
2021/10/04 21:21:40 Resp Code: 204 Status: 204 No Content
2021/10/04 21:21:40 [INFO] ConfigWriter started: 0xc000284570
2021/10/04 21:21:40 [DEBUG] [CCCL] ConfigWriter (0xc000284570) writing section name global
2021/10/04 21:21:40 [DEBUG] [CCCL] ConfigWriter (0xc000284570) successfully wrote section (global)
2021/10/04 21:21:40 [DEBUG] [CCCL] ConfigWriter (0xc000284570) writing section name bigip
2021/10/04 21:21:40 [DEBUG] [CCCL] ConfigWriter (0xc000284570) successfully wrote section (bigip)
2021/10/04 21:21:40 [INFO] Started config driver sub-process at pid: 21
2021/10/04 21:21:40 [DEBUG] [INIT] Invalid trusted-certs-cfgmap option provided.
2021/10/04 21:21:40 [INFO] [INIT] Creating Agent for as3
2021/10/04 21:21:40 [DEBUG] [CORE] Agent Response Worker started and blocked on channel 0xc0004e04e0
2021/10/04 21:21:40 [INFO] [AS3] Initializing AS3 Agent
2021/10/04 21:21:41 [DEBUG] [AS3] No certs appended, using only system certs
2021/10/04 21:21:41 [DEBUG] [AS3] Validating AS3 schema with as3-schema-3.28.0-3-cis.json
2021/10/04 21:21:41 [DEBUG] [AS3] posting GET BIGIP AS3 Version request on https://10.2.0.7:8443/mgmt/shared/appsvcs/info
2021/10/04 21:21:43 [ERROR] [AS3] Response body unmarshal failed: invalid character '<' looking for beginning of value
2021/10/04 21:21:43 [ERROR] [AS3] Internal Error
2021/10/04 21:21:43 [CRITICAL] [INIT] Failed to initialize as3 agent, Internal Error

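A note on the failing call, offered as a hunch rather than a confirmed diagnosis: the unmarshal error about an unexpected '<' usually means the AS3 info endpoint returned an HTML page instead of JSON (for example an error or login page), which often happens when the AS3 extension is not installed on the BIG-IP or when something in front of the management address intercepts the request. A quick manual check of the same URL the pod is calling can confirm it; the credentials below are placeholders.

# Run from any host or pod that can reach the BIG-IP management address.
curl -sk -u admin:'<password>' https://10.2.0.7:8443/mgmt/shared/appsvcs/info
# A JSON body with the AS3 version indicates AS3 is installed; an HTML error
# page reproduces what the CIS pod is seeing.
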
Editor's Note:The F5 Beacon capabilities referenced in this article hosted on F5 Cloud Services are planning a migration to a new SaaS Platform - Check out the latesthere. Introduction Application Programming Interfaces (APIs) enable application delivery systems to communicate with each other. According to a survey conducted by IDC, security is the main impediment to delivery of API-based services.Research conducted by F5 Labs shows that APIs are highly susceptible to cyber-attacks. Access or injection attacks against the authentication surface of the API are launched first, followed by exploitation of excessive permissions to steal or alter data that is reachable via the API.Agile development practices, highly modular application architectures, and business pressures for rapid development contribute to security holes in both APIs exposed to the public and those used internally. API delivery programs must include the following elements : (1) Automated Publishing of APIs using Swagger files or OpenAPI files, (2) Authentication and Authorization of API calls, (3) Routing and rate limiting of API calls, (4) Security of API calls and finally (5) Metric collection and visualization of API calls.The reference architecture shown below offers a streamlined way of achieving each element of an API delivery program. F5 solution works with modern automation and orchestration tools, equipping developers with the ability to implement and verify security at strategic points within the API development pipeline. Security gets inserted into the CI/CD pipeline where it can be tested and attached to the runtime build, helping to reduce the attack surface of vulnerable APIs. Common Patterns Enterprises need to maintain and evolve their traditional APIs, while simultaneously developing new ones using modern architectures. These can be delivered with on-premises servers, from the cloud, or hybrid environments. APIs are difficult to categorize as they are used in delivering a variety of user experiences, each one potentially requiring a different set of security and compliance controls. In all of the patterns outlined below, NGINX Controller is used for API Management functions such as publishing the APIs, setting up authentication and authorization, and NGINX API Gateway forms the data path.Security controls are addressed based on the security requirements of the data and API delivery platform. 1.APIs for highly regulated business Business APIs that involve the exchange of sensitive or regulated information may require additional security controls to be in compliance with local regulations or industry mandates.Some examples are apps that deliver protected health information or sensitive financial information.Deep payload inspection at scale, and custom WAF rules become an important mechanism for protecting this type of API. F5 Advanced WAF is recommended for providing security in this scenario. 2.Multi-cloud distributed API Mobile App users who are dispersed around the world need to get a response from the API backend with low latency.This requires that the API endpoints be delivered from multiple geographies to optimize response time.F5 DNS Load Balancer Cloud Service (global server load balancing) is used to connect API clients to the endpoints closest to them.In this case, F5 Cloud Services Essential App protect is recommended to offer baseline security, and NGINX APP protect deployed closer to the API workload, should be used for granular security controls. Best practices for this pattern are described here. 
3. API workload in Kubernetes

F5 service mesh technology helps API delivery teams deal with the challenges of visibility and security when API endpoints are deployed in a Kubernetes environment. NGINX Ingress Controller, running NGINX App Protect, offers seamless north-south connectivity for API calls. F5 Aspen Mesh is used to provide east-west visibility and mTLS-based security for workloads. The Kubernetes cluster can be on-premises or deployed in any of the major cloud provider infrastructures, including Google's GKE, Amazon's EKS/Fargate, and Microsoft's AKS. An example of implementing this pattern with an NGINX per-pod proxy is described here, and more examples are forthcoming in the API Security series.

4. API as serverless functions

F5 Cloud Services Essential App Protect, offering SaaS-based security, or NGINX App Protect deployed in AWS Fargate can be used to inject protection in front of serverless API endpoints.

Summary

F5 solutions can be leveraged regardless of the architecture used to deliver APIs or the infrastructure used to host them. In all of the patterns described above, metrics and logs are sent to one or more of the following: (1) F5 Beacon, (2) the SIEM of choice, (3) an ELK stack. Best practices for customizing API-related views via any of these visibility solutions will be published in the following DevCentral series. DevOps teams can automate F5 products for integration into the API CI/CD pipeline. As a result, security is no longer a roadblock to delivering APIs at the speed of business. F5 solutions are future-proof, enabling development teams to confidently pivot from one architecture to another.

To complement and extend the security of the above solutions, organizations can leverage the power of F5 Silverline Managed Services to protect their infrastructure against volumetric, DNS, and higher-level denial-of-service attacks. The Shape bot protection solutions can also be coupled in to detect and thwart bots, including securing mobile access with the Shape mobile SDK.

Automating certificate lifecycle management with HashiCorp Vault

One of the challenges many enterprises face today is keeping track of their various certificates and ensuring that those associated with critical applications deployed across multiple clouds are current and valid. This integration helps you improve your security posture with short-lived, dynamic SSL certificates using HashiCorp Vault and AS3 on BIG-IP.

First, a bit about AS3…

Application Services 3 Extension (referred to as AS3 Extension or, more often, simply AS3) is a flexible, low-overhead mechanism for managing application-specific configurations on a BIG-IP system. AS3 uses a declarative model, meaning you provide a JSON declaration rather than a set of imperative commands. The declaration represents the configuration that AS3 is responsible for creating on a BIG-IP system. AS3 is well-defined according to the rules of JSON Schema, and declarations validate according to JSON Schema. AS3 accepts declaration updates via REST (push), reference (pull), or CLI (flat-file editing).

What is Vault?

Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, or certificates. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log. A modern system requires access to a multitude of secrets: database credentials, API keys for external services, credentials for service-oriented-architecture communication, and so on. Understanding who is accessing which secrets is already very difficult and platform specific. Adding key rolling, secure storage, and detailed audit logs on top of that is almost impossible without a custom solution. This is where Vault steps in.

Public Key Infrastructure (PKI) provides a way to verify authenticity and guarantee secure communication between applications. Setting up your own PKI infrastructure can be a complex and very manual process. Vault PKI allows users to dynamically generate X.509 certificates quickly and on demand. Vault PKI can streamline the distribution of TLS certificates and allows users to create PKI certificates with a single command. Vault PKI reduces the overhead of the usual manual process of generating a private key and CSR, submitting it to a CA, and waiting for verification and signing to complete, while additionally providing an authentication and authorization mechanism for validation.

Benefits of using Vault automation for BIG-IP:

- Cloud- and platform-independent solution for your applications anywhere (public cloud or private cloud)
- Uses the Vault agent and leverages AS3 templating to update expiring certificates
- No application downtime - dynamically updates the configuration without affecting traffic

Configuration:

1. Setting up the environment - deploy instances of BIG-IP VE and Vault in the cloud or on-premises

You can create the Vault and BIG-IP instances in the cloud using Terraform. The repo is https://github.com/f5devcentral/f5-certificate-rotate. This will download the Vault binary and start the Vault server, and it will also deploy the F5 BIG-IP instance on the AWS cloud. Once the instances are ready, SSH into the Vault Ubuntu server, change directory to /tmp, and execute the commands below.

# Point to the Vault server
export VAULT_ADDR=http://127.0.0.1:8200

# Export the Vault token
export VAULT_TOKEN=root

# Create a role and define the allowed domains and TTLs for the certificate
vault write pki/roles/web-certs allowed_domains=demof5.com ttl=160s max_ttl=30m allow_subdomains=true

# Enable the AppRole auth method
vault auth enable approle

# Create an app policy and apply it: https://github.com/f5devcentral/f5-certificate-rotate/blob/master/templates/app-pol.hcl
vault policy write app-pol app-pol.hcl

# Attach the app policy to the AppRole
vault write auth/approle/role/web-certs policies="app-pol"

# Read the role ID from Vault
vault read -format=json auth/approle/role/web-certs/role-id | jq -r '.data.role_id' > roleID

# Using the role ID, generate a secret ID to authenticate to the Vault server
vault write -f -format=json auth/approle/role/web-certs/secret-id | jq -r '.data.secret_id' > secretID

# Finally, run the Vault agent using the config file
vault agent -config=agent-config.hcl -log-level=debug

2. Use the AS3 template file certs.tmpl with the values shown

The template file shown below will be automatically uploaded to the Vault instance (the Ubuntu server) in the /tmp directory. Here I am using an AS3 file called certs.tmpl, templatized as shown below.

{{ with secret "pki/issue/web-certs" "common_name=www.demof5.com" }}
[
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/remark",
    "value": "Updated on {{ timestamp }}"
  },
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/certificate",
    "value": "{{ .Data.certificate | toJSON | replaceAll "\"" "" }}"
  },
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/privateKey",
    "value": "{{ .Data.private_key | toJSON | replaceAll "\"" "" }}"
  },
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/chainCA",
    "value": "{{ .Data.issuing_ca | toJSON | replaceAll "\"" "" }}"
  }
]
{{ end }}

3. Vault renders a new JSON payload file called certs.json whenever the SSL certificate expires

When the certificate expires, Vault generates a new certificate, which we can use to update the BIG-IP with a shell script. The snippet below shows certs.json being created automatically.

Snippet of certs.json:

[
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/remark",
    "value": "Updated on 2020-10-02T19:05:53Z"
  },
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/certificate",
    "value": "-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIUaMgYXdERwzwU+tnFsSFld3DYrkEwDQYJKoZIhvcNAQEL\nBQAwEzERMA8GA1UEAxMIZGVtby5jb20wHhcNMjAxMDAyMTkwNTIzWhcNMj

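The push of the rendered certs.json to the BIG-IP is left to a shell script (see the commented-out command = "bash updt.sh" in the agent file in the next step). A minimal sketch of what such a script could look like is shown below, assuming certs.json is a JSON Patch array and the AS3 endpoint is reachable from the Vault server; the management address and credentials are placeholders, and the repository may implement this step differently.

# updt.sh - hypothetical sketch; adjust address and credentials for your setup.
# Pushes the freshly rendered certs.json to BIG-IP as an AS3 PATCH.
BIGIP="https://<big-ip-mgmt-address>"
CREDS='admin:<password>'

curl -sk -u "$CREDS" \
  -H "Content-Type: application/json" \
  -X PATCH "$BIGIP/mgmt/shared/appsvcs/declare" \
  -d @certs.json
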
More details how to create this solution athttps://github.com/f5businessdevelopment/f5-hcp-vault Summary The integration has following components listed below, here the Venafi or Lets Encrypt can also be used as external CA. Using this solution, you are able to: Improve your security posture with short lived dynamic certificates Automatically update applications using templating and robust AS3 service Increased collaborating breaking down silos Cloud agnostic solution can be deployed on-prem or public cloud3.9KViews4likes0CommentsAgility sessions announced
Agility sessions announced

Good news, everyone! This year's virtual Agility will have over 100 sessions for you to choose from, aligned to 3 pillars. There will be:

- Breakouts (pre-recorded, 25 minutes, unlimited audience)
- Discussion Forums (live content up to 45 minutes, interactive for up to 75 attendees)
- Quick Hits (pre-recorded, 10 minutes, unlimited audience)

So, what kind of content are we talking about?

If you'd like to learn more about how to Simplify Delivery of Legacy Apps, you might be interested in:
- Making Sense of Zero Trust: what's required today and what we'll need for the future (Discussion Forum)
- Are you ready for a service mesh? (Breakout)
- BIG-IP APM + Microsoft Azure Active Directory for stronger cybersecurity defense (Quick Hits)

If you'd like to learn more about how to Secure Digital Experiences, you might be interested in:
- The State of Application Strategy 2022: A Sneak Peek (Discussion Forum)
- Security Stack Change at the Speed of Business (Breakout)
- Deploy App Protect based WAF Solution to AWS in minutes (Quick Hits)

If you'd like to learn more about how to Enable Modern App Delivery at Scale, you might be interested in:
- Proactively Understanding Your Application's Vulnerabilities (Discussion Forum)
- Is That Project Ready for You? Open Source Maturity Models (Breakout)
- How to balance privacy and security handling DNS over HTTPS (Quick Hits)

The DevCentral team will be hosting livestreams and the DevCentral lounge, where we can hang out and connect, and where you can interact directly with session presenters and other technical SMEs. Please go to https://agility2022.f5agility.com/sessions.html to see the comprehensive list, and check back with us for more information as we get closer to the conference.