F5 powered API security and management
Editor's Note: The F5 Beacon capabilities referenced in this article, hosted on F5 Cloud Services, are planned for migration to a new SaaS platform - check out the latest here.

Introduction

Application Programming Interfaces (APIs) enable application delivery systems to communicate with each other. According to a survey conducted by IDC, security is the main impediment to delivery of API-based services. Research conducted by F5 Labs shows that APIs are highly susceptible to cyber-attacks. Access or injection attacks against the authentication surface of the API are launched first, followed by exploitation of excessive permissions to steal or alter data that is reachable via the API. Agile development practices, highly modular application architectures, and business pressure for rapid development all contribute to security holes in both APIs exposed to the public and those used internally.

API delivery programs must include the following elements: (1) automated publishing of APIs using Swagger or OpenAPI files, (2) authentication and authorization of API calls, (3) routing and rate limiting of API calls, (4) security of API calls and, finally, (5) metric collection and visualization of API calls. The reference architecture shown below offers a streamlined way of achieving each element of an API delivery program. The F5 solution works with modern automation and orchestration tools, equipping developers with the ability to implement and verify security at strategic points within the API development pipeline. Security gets inserted into the CI/CD pipeline where it can be tested and attached to the runtime build, helping to reduce the attack surface of vulnerable APIs.

Common Patterns

Enterprises need to maintain and evolve their traditional APIs, while simultaneously developing new ones using modern architectures. These can be delivered with on-premises servers, from the cloud, or in hybrid environments. APIs are difficult to categorize, as they are used in delivering a variety of user experiences, each one potentially requiring a different set of security and compliance controls. In all of the patterns outlined below, NGINX Controller is used for API management functions such as publishing the APIs and setting up authentication and authorization, while NGINX API Gateway forms the data path. Security controls are addressed based on the security requirements of the data and the API delivery platform.

1. APIs for highly regulated business

Business APIs that involve the exchange of sensitive or regulated information may require additional security controls to be in compliance with local regulations or industry mandates. Some examples are apps that deliver protected health information or sensitive financial information. Deep payload inspection at scale and custom WAF rules become an important mechanism for protecting this type of API. F5 Advanced WAF is recommended for providing security in this scenario.

2. Multi-cloud distributed API

Mobile app users who are dispersed around the world need to get a response from the API backend with low latency. This requires that the API endpoints be delivered from multiple geographies to optimize response time. F5 DNS Load Balancer Cloud Service (global server load balancing) is used to connect API clients to the endpoints closest to them. In this case, F5 Cloud Services Essential App Protect is recommended for baseline security, and NGINX App Protect, deployed closer to the API workload, should be used for granular security controls. Best practices for this pattern are described here.
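To illustrate the proximity-based routing in pattern 2, the sketch below queries a hypothetical FQDN (api.example.com is a placeholder) that has been delegated to F5 DNS Load Balancer Cloud Service; the answer returned depends on where the query originates:

# Query the GSLB-managed name; the answer depends on the querying client's location
dig +short api.example.com
# Running the same query from a client (or via a resolver) in another geography
# should return the address of a different, closer API endpoint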
3. API workload in Kubernetes

F5 service mesh technology helps API delivery teams deal with the challenges of visibility and security when API endpoints are deployed in Kubernetes environments. NGINX Ingress Controller, running NGINX App Protect, offers seamless North-South connectivity for API calls. F5 Aspen Mesh is used to provide East-West visibility and mTLS-based security for workloads. The Kubernetes cluster can be on-premises or deployed in any of the major cloud provider infrastructures, including Google's GKE, Amazon's EKS/Fargate, and Microsoft's AKS. An example of implementing this pattern with an NGINX per-pod proxy is described here, and more examples are forthcoming in the API Security series.
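A minimal sketch of the North-South entry point for this pattern is shown below, assuming NGINX Ingress Controller built with the NGINX App Protect module; the annotation names follow NGINX Ingress Controller conventions, while the host, service, and policy names are placeholders:

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    # enable NGINX App Protect and reference a WAF policy resource
    appprotect.f5.com/app-protect-enable: "True"
    appprotect.f5.com/app-protect-policy: "default/api-waf-policy"
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-backend
            port:
              number: 8080
EOF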
4. API as Serverless Functions

F5 Cloud Services Essential App Protect, offering SaaS-based security, or NGINX App Protect deployed in AWS Fargate can be used to inject protection in front of serverless API endpoints.

Summary

F5 solutions can be leveraged regardless of the architecture used to deliver APIs or the infrastructure used to host them. In all patterns described above, metrics and logs are sent to one or more of the following: (1) F5 Beacon, (2) a SIEM of choice, (3) an ELK stack. Best practices for customizing API-related views via any of these visibility solutions will be published in the following DevCentral series. DevOps can automate F5 products for integration into the API CI/CD pipeline. As a result, security is no longer a roadblock to delivering APIs at the speed of business. F5 solutions are future-proof, enabling development teams to confidently pivot from one architecture to another.

To complement and extend the security of the above solutions, organizations can leverage the power of F5 Silverline Managed Services to protect their infrastructure against volumetric, DNS, and higher-level denial-of-service attacks. The Shape bot protection solutions can also be coupled in to detect and thwart bots, including securing mobile access with its mobile SDK.

Automating certificate lifecycle management with HashiCorp Vault

One of the challenges many enterprises face today is keeping track of various certificates and ensuring that those associated with critical applications deployed across multiple clouds are current and valid. This integration helps you improve your security posture with short-lived dynamic SSL certificates using HashiCorp Vault and AS3 on BIG-IP.

First, a bit about AS3...

Application Services 3 Extension (referred to as AS3 Extension or, more often, simply AS3) is a flexible, low-overhead mechanism for managing application-specific configurations on a BIG-IP system. AS3 uses a declarative model, meaning you provide a JSON declaration rather than a set of imperative commands. The declaration represents the configuration which AS3 is responsible for creating on a BIG-IP system. AS3 is well-defined according to the rules of JSON Schema, and declarations validate according to JSON Schema. AS3 accepts declaration updates via REST (push), reference (pull), or CLI (flat file editing).

What is Vault?

Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, or certificates. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log. A modern system requires access to a multitude of secrets: database credentials, API keys for external services, credentials for service-oriented architecture communication, and so on. Understanding who is accessing what secrets is already very difficult and platform-specific. Adding key rolling, secure storage, and detailed audit logs on top of that is almost impossible without a custom solution. This is where Vault steps in.

Public Key Infrastructure (PKI) provides a way to verify authenticity and guarantee secure communication between applications. Setting up your own PKI infrastructure can be a complex and very manual process. Vault PKI allows users to dynamically generate X.509 certificates quickly and on demand. Vault PKI can streamline distributing TLS certificates and allows users to create PKI certificates with a single command. Vault PKI reduces the overhead of the usual manual process of generating a private key and CSR, submitting it to a CA, and waiting for verification and signing to complete, while additionally providing an authentication and authorization mechanism for validation.

Benefits of using Vault automation for BIG-IP

Cloud- and platform-independent solution for your application anywhere (public cloud or private cloud)

Uses the Vault agent and leverages AS3 templating to update expiring certificates

No application downtime - dynamically update configuration without affecting traffic

Configuration:

1. Setting up the environment - deploy instances of BIG-IP VE and Vault in the cloud or on-premises

You can create instances in the cloud for Vault and BIG-IP using Terraform. The repo is https://github.com/f5devcentral/f5-certificate-rotate This will download the Vault binary and start the Vault server, and it will also deploy the F5 BIG-IP instance on the AWS cloud. Once we have the instances ready, you can SSH into the Vault Ubuntu server, change directory to /tmp, and execute the commands below.
# Point to the Vault server
export VAULT_ADDR=http://127.0.0.1:8200

# Export the Vault token
export VAULT_TOKEN=root

# Create roles and define allowed domains with TTL for certificates
vault write pki/roles/web-certs allowed_domains=demof5.com ttl=160s max_ttl=30m allow_subdomains=true

# Enable the AppRole auth method
vault auth enable approle

# Create an app policy and apply it
# https://github.com/f5devcentral/f5-certificate-rotate/blob/master/templates/app-pol.hcl
vault policy write app-pol app-pol.hcl

# Apply the app policy using AppRole
vault write auth/approle/role/web-certs policies="app-pol"

# Read the role ID from Vault
vault read -format=json auth/approle/role/web-certs/role-id | jq -r '.data.role_id' > roleID

# Using the role ID, fetch the secret ID used to authenticate to the Vault server
vault write -f -format=json auth/approle/role/web-certs/secret-id | jq -r '.data.secret_id' > secretID

# Finally, run the Vault agent using the config file
vault agent -config=agent-config.hcl -log-level=debug

2. Use the AS3 template file certs.tmpl with the values as shown

The template file shown below will be automatically uploaded to the Vault instance (the Ubuntu server) in the /tmp directory. Here I am using an AS3 file called certs.tmpl which is templatized as shown below.

{{ with secret "pki/issue/web-certs" "common_name=www.demof5.com" }}
[
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/remark",
    "value": "Updated on {{ timestamp }}"
  },
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/certificate",
    "value": "{{ .Data.certificate | toJSON | replaceAll "\"" "" }}"
  },
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/privateKey",
    "value": "{{ .Data.private_key | toJSON | replaceAll "\"" "" }}"
  },
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/chainCA",
    "value": "{{ .Data.issuing_ca | toJSON | replaceAll "\"" "" }}"
  }
]
{{ end }}

3. Vault will render a new JSON payload file called certs.json whenever the SSL certificates expire

When the certificate expires, Vault generates a new one, which we can use to update the BIG-IP via a shell script. Below is a snippet of the certs.json file created automatically.

[
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/remark",
    "value": "Updated on 2020-10-02T19:05:53Z"
  },
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/certificate",
    "value": "-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIUaMgYXdERwzwU+tnFsSFld3DYrkEwDQYJKoZIhvcNAQEL\nBQAwEzERMA8GA1UEAxMIZGVtby5jb20wHhcNMjAxMDAyMTkwNTIzWhcNMj

4. Use the Vault agent file to run the integration continuously without affecting application traffic

Example Vault agent file:

pid_file = "./pidfile"

vault {
  address = "http://127.0.0.1:8200"
}

auto_auth {
  method "approle" {
    mount_path = "auth/approle"
    config = {
      role_id_file_path = "roleID"
      secret_id_file_path = "secretID"
      remove_secret_id_file_after_reading = false
    }
  }
  sink "file" {
    config = {
      path = "approleToken"
    }
  }
}

template {
  source      = "./certs.tmpl"
  destination = "./certs.json"
  #command    = "bash updt.sh"
}

template {
  source      = "./https.tmpl"
  destination = "./https.json"
}
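For reference, here is a minimal sketch of what an update script like the updt.sh referenced (commented out) in the agent template above might do, assuming AS3's support for JSON Patch updates; the BIG-IP address and credentials are placeholders:

#!/bin/bash
# Push the freshly rendered certs.json to BIG-IP.
# certs.json is a JSON Patch array, which AS3 accepts via HTTP PATCH.
BIGIP=https://bigip.example.com
curl -sk -u "admin:${BIGIP_PASSWORD}" \
     -H "Content-Type: application/json" \
     -X PATCH "${BIGIP}/mgmt/shared/appsvcs/declare" \
     -d @certs.json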
5. For integration with HCP Vault

If you are using the HashiCorp-hosted Vault solution instead of standalone Vault, you can still use this solution by making a few changes in the Vault agent file. Detailed documentation for using HCP Vault is in the README. You can map tenant application objects on BIG-IP to namespaces on HCP Vault, which provides isolation. More details on how to create this solution are at https://github.com/f5businessdevelopment/f5-hcp-vault

Summary

The integration consists of the components listed below; Venafi or Let's Encrypt can also be used as the external CA. Using this solution, you are able to:

Improve your security posture with short-lived dynamic certificates

Automatically update applications using templating and the robust AS3 service

Increase collaboration by breaking down silos

Deploy a cloud-agnostic solution on-premises or in the public cloud

Agility sessions announced
Good news, everyone! This year's virtual Agility will have over 100 sessions for you to choose from, aligned to 3 pillars. There will be:

Breakouts (pre-recorded, 25 minutes, unlimited audience)

Discussion Forums (live content up to 45 minutes, interactive for up to 75 attendees)

Quick Hits (pre-recorded, 10 minutes, unlimited audience)

So, what kind of content are we talking about?

If you'd like to learn more about how to Simplify Delivery of Legacy Apps, you might be interested in:

Making Sense of Zero Trust: what's required today and what we'll need for the future (Discussion Forum)

Are you ready for a service mesh? (Breakout)

BIG-IP APM + Microsoft Azure Active Directory for stronger cybersecurity defense (Quick Hits)

If you'd like to learn more about how to Secure Digital Experiences, you might be interested in:

The State of Application Strategy 2022: A Sneak Peak (Discussion Forum)

Security Stack Change at the Speed of Business (Breakout)

Deploy App Protect based WAF Solution to AWS in minutes (Quick Hits)

If you'd like to learn more about how to Enable Modern App Delivery at Scale, you might be interested in:

Proactively Understanding Your Application's Vulnerabilities (Discussion Forum)

Is That Project Ready for you? Open Source Maturity Models (Breakout)

How to balance privacy and security handling DNS over HTTPS (Quick Hits)

The DevCentral team will be hosting livestreams and the DevCentral lounge, where we can hang out, connect, and you can interact directly with session presenters and other technical SMEs. Please go to https://agility2022.f5agility.com/sessions.html to see the comprehensive list, and check back with us for more information as we get closer to the conference.

Leaked Credential Check with Advanced WAF
Description

In this article you will learn how to configure and use Leaked Credential Check (LCC). LCC provides access to a database of compromised credentials, which can be used to detect and prevent a credential stuffing attack. LCC is a subscription-based service which can be added to BIG-IP Advanced WAF.

Summary

Leaked Credential Check stops leaked or stolen credentials from being used to access personal or business applications. It automatically detects and mitigates compromised-credential use. If compromised credentials are detected during an attempted login, Leaked Credential Check enables several mitigation options for SecOps teams to enact, individually or collectively, including:

Requiring the user to employ multi-factor authentication (MFA) before granting access.

Redirecting the user to another application page; for example, a customer support web page.

Responding to the suspicious login with a preset page requesting further action by the user, such as contacting customer support.

Blocking the user and their login from accessing the application.

Sending an alert to the SecOps team to take additional action.

This article assumes you have Advanced WAF configured and deployed for one or more virtual servers and that you have purchased the add-on subscription for LCC.

Typical Steps Involved in a Credential Stuffing Attack

High Level Network Topology

Configuration Steps

From the BIG-IP Configuration Utility select Security > Application Security > Security Policies > Policies List. Notice the policy name in this example is Leaked-Credential-Check. There are two virtual servers attached to this policy, vs_arcadia.emea.f5se.com_II and vs_Hackazon_IV.

LCC is configured from Security > Cloud Services > Cloud Security Services Applications. Click the name of the cloud security application, f5-credential-stuffing-cloud-app in this example. Note: if the application has not been created yet, click the Create button on the right.

Give it a name if creating a new app. Set the Service Type to Blackfish Credential Stuffing Service. Enter your API Key ID and Secret. Specify the Endpoint, f5-credential-stuffing-blackfish in this example. Click Save when done.

LCC is enabled in Security > Application Security > Brute Force Attack Prevention. Check the box to Enable Detection.

Under Action you can choose different mitigation actions:

Alarm: report the Leaked Credentials Detection violation in the Event Log

Alarm and Blocking Page: report the Leaked Credentials Detection violation in the Event Log and send the Blocking Response Page

Alarm and Honeypot Page: report the Leaked Credentials Detection violation in the Event Log and send the Honeypot Response Page

Alarm and Leaked Credentials Page: report the Leaked Credentials Detection violation in the Event Log and send the Leaked Credentials Page

Select Learning and Blocking Settings to configure them. For Sessions and Logins, set Leaked Credentials Detection to Alarm and Block.

The Honeypot Page and Leaked Credentials Page can be configured from Security > Application Security > Security Policies > Policies List. Select the Leaked-Credential-Check policy, then select Response and Blocking Pages on the left. Scroll down; the Failed Login Honeypot response and Leaked Credentials response can be configured here.

Test Leaked Credentials Detection

Attempt to login to your web application using known leaked credentials. In this example we'll use "HACKAZON". Click the Sign In link near the top on the right.
Attempt to login using the following:

Username: demo33@fidnet.com
Password: mountainman01

The login should fail. Try to login with the following credentials:

Username: admin
Password: 12345678

Check the BIG-IP Event Log

From the Configuration Utility go to Security > Event Logs > Application > Requests. There are two requests at the top that look important. Select the first one. Here we can see details about the request. As suspected, the violation was due to the Leaked Credentials Detection policy. Scroll down under Request and you can see the username and password that triggered the violation.

Now select the second one. As you can see, this violation was triggered by the login attempt with the username "demo33@fidnet.com".
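To repeat this test without a browser, the login attempt can be scripted; a minimal sketch, where the form action and field names are assumptions about the Hackazon demo application and may differ in your deployment:

# Submit a known leaked credential pair and print the HTTP status code
curl -s -o /dev/null -w "%{http_code}\n" \
     -d "username=demo33@fidnet.com&password=mountainman01" \
     "http://hackazon.example.com/user/login"
# With Leaked Credentials Detection set to Alarm and Block, the response is the
# blocking page, and the violation appears in Security > Event Logs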
Conclusion

Congratulations! You have successfully configured and tested Leaked Credential Check.

Multi-cluster Kubernetes/Openshift with GSLB-TOOL

Overview

This is article 1 of 2. GSLB-TOOL is an OSS project built around the BIG-IP DNS (GTM) and F5 Cloud Services DNS LB GSLB products to provide GSLB functionality to OpenShift/Kubernetes. GSLB-TOOL is a multi-cluster enabler. Doing multi-cluster with GSLB has the following advantages:

Cross-cloud. Services are published in a coordinated manner while being hosted in any public or private cloud.

High degree of control. Publishing is done based on service name instead of IP address. Traffic is directed to a specific data center based on operational decisions such as service load, also allowing canary, blue/green, and A/B deployments across data centers.

Stickiness. Regardless of topology changes in the network, clients will be consistently directed to the same data center.

IP intelligence. Clients can be redirected to the desired data center based on the client's location, and stats can be gathered for analytics.

The use cases covered by GSLB-TOOL are:

Multi-cluster deployments

Data center load distribution

Enhanced customer experience

Advanced blue/green, A/B and canary deployment options

Disaster recovery

Cluster migrations

Kubernetes <-> Openshift migrations

Container platform version migrations, for example OCP 3.x to 4.x, or OCP 4.x to 4.y

GSLB-TOOL is implemented as a set of Ansible scripts and roles that can be used from the CLI or from a continuous delivery tool such as Spinnaker or Argo CD. The tool operates as glue between the Kubernetes/OpenShift API and the GSLB API. GSLB-TOOL uses Git as its source of truth to store its state, hence the GSLB state is not tied to any specific cluster. The next figure shows a schema of it.

It is important to emphasize that GSLB-TOOL is cross-vendor as well, since it can use any Ingress Controller or Router implementation. In other words, it is not necessary to use BIG-IP or NGINX for this. Moreover, a given cluster can have several Router/Ingress Controller instances from different vendors. This is thanks to only using the OpenShift/Kubernetes APIs when inquiring about the deployed container routes.

Usage

To better understand how GSLB-TOOL operates, it is important to note the following characteristics:

GSLB-TOOL operates with project/namespace granularity, on a per-cluster basis. When operating on a cluster's project/namespace, it operates on all the L7 routes of that project/namespace at once. For example, the following command:

$ project-retrieve shop onprem

will retrieve all the L7 routes of the namespace shop from the onprem cluster. Operating per cluster/namespace simplifies management and mimics the behavior of Red Hat's Cluster Application Migration tool.

In the next figure we can see the overall operation of GSLB-TOOL. At the top we can see in bold the names of the clusters (onprem and aws). In the figure these are Openshift (aka OCP) clusters, but they could be any other Kubernetes clusters as well. We can also see two sample projects/namespaces (Project A and Project B). Different clusters can have different namespaces as well.

There are two types of commands/actions:

The project-* commands operate on the Kubernetes/OpenShift API and on the source of truth/Git repository. These commands operate with project/namespace granularity. GSLB-TOOL doesn't modify your OpenShift/K8s cluster; it only performs read-only operations.

The gslb-* commands operate on the source of truth/Git repository and on the GSLB API of choice, either BIG-IP or F5 Cloud Services.
These commands operate on all the projects/namespaces of all clusters at once, either submitting or rolling back the changes in the GSLB backends. When GSLB-TOOL pushes the GSLB configuration, it either performs all changes or doesn't perform any. Thanks to the use of Git, the gslb-rollback command easily reverts the configuration if desired. Actually, creating the backup of the previous step is only useful when using GSLB-TOOL without Git, which is possible too.

GSLB-TOOL flexibility

GSLB-TOOL has been designed with flexibility in mind. This is reflected in many of its features:

It is agnostic of the Router/Ingress Controller implementation.

In the same GSLB domain, it supports vanilla Kubernetes and OpenShift clusters concurrently.

It is possible to have multiple Routers/Ingress Controllers in the same Kubernetes/OpenShift cluster.

It is possible to define multiple availability zones for a given Router/Ingress Controller.

It can be easily modified, given that it is written in Ansible. Furthermore, the Ansible playbooks make use of template files that can be modified if desired.

Multiple GSLB backends. At present, GSLB-TOOL can use either F5 Cloud Services' DNS LB (a SaaS offering) or F5 BIG-IP DNS (aka GTM) by simply changing the value of the backend configuration option to either f5aas or bigip. All operations, configuration files, etc. remain the same. At present, F5 BIG-IP DNS is recommended because it currently offers better monitoring options.

Easy to PoC. F5 Cloud Services' DNS LB can be used to test the tool, later switching to F5 BIG-IP DNS by simply changing the backend configuration option.

GSLB-TOOL L7 route inquire/config flexibility

It is especially important to have flexibility when configuring the L7 routes in our source of truth. We might be interested in the following scenarios for a given namespace:

Homogeneous L7 routes across clusters - Sometimes we expect all clusters to have the same L7 routes for a given namespace. This happens, for example, when all applications are the same in all clusters.

Heterogeneous L7 routes across clusters - Sometimes we expect each cluster to have different L7 routes for a given namespace, when there are different versions of the applications (or different applications). This happens, for example, when we are testing new versions of the applications in one cluster while other clusters use the previous version.

To handle these scenarios, we have two strategies when populating the routes:

project-retrieve – We use the information from the cluster's Route/Ingress API to populate GSLB.

project-populate – We use the information from another cluster's Route/Ingress API to populate GSLB. The cluster from which we take the L7 routes is referred to as the reference cluster.

We exemplify these strategies in the following figure, using a configuration of two clusters (onprem and aws) and a single project/namespace. The L7 routes (either Ingress or Route resources) in these differ: the onprem cluster has two additional L7 routes (/shop and /checkout). We are going to populate our GSLB backend in three different ways:

In Example 1, we perform the following actions in sequence:

With project-retrieve web onprem we retrieve from the onprem cluster the L7 routes of the project web, and these are stored in the Git repository or source of truth.

Analogously, with project-retrieve web aws we retrieve from the aws cluster the L7 routes (only one in this case), and these are likewise stored in the Git repository or source of truth.
We submit this configuration to the GSLB backend with gslb-commit. The GSLB backend expects the onprem cluster to have 3 routes and the aws backend 1 route. If the services are available, the health check results for both clusters will be green, and therefore the FQDN will return the IP addresses of both clusters' Routers/Ingress Controllers.

In Example 2, we use the project-populate strategy:

We perform the same first action as in Example 1.

With project-populate web onprem aws we indicate that we expect the L7 routes defined in onprem to also be available in the aws cluster, which is not the case. In other words, the onprem cluster is used as the reference cluster for aws.

After we submit the configuration to GSLB with gslb-commit, the health checks in the onprem cluster will succeed and those in aws will fail, because /shop and /checkout don't exist there (an HTTP 404 is returned). Therefore the FQDN www.f5bddemos.io will return only the IP address of onprem. This will turn green automatically once we update the L7 routes and applications in aws.

In Example 3, we use the project-populate strategy again, but with aws as the reference cluster:

Unlike in the previous examples, with project-retrieve web aws we retrieve the routes from the aws cluster.

With project-populate web aws onprem we do the reverse of step b in Example 2: we use aws as the reference for onprem instead.

After submission of the config with gslb-commit, given that onprem has the L7 route that aws has, the health checking will succeed.

For the sake of simplicity, the examples above show projects/namespaces with only a single FQDN for all their L7 routes, but for a given namespace it is possible to have an arbitrary number of L7 routes and FQDNs. There is no limitation on this either.
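Pulling Example 1 together as a command sequence, following the tool's syntax as documented above:

# Retrieve the L7 routes of project "web" from each cluster into the source of truth
./project-retrieve web onprem
./project-retrieve web aws
# Push the resulting configuration to the GSLB backend
./gslb-commit "publish web routes from onprem and aws clusters"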
Additional information

If you want to see GSLB-TOOL in practice, please check this video. For more information on this tool, please visit the GSLB-TOOL Home Page and its Wiki Page for additional documentation.

Adopting SRE practices with F5: Multi-cluster Blue-green deployment

In the last article, we covered blue-green deployment as the most straightforward SRE deployment model at a high level. Here we are diving deeper into the details to see how F5 technologies enable this use case. Let's start off by looking at some of the key components.

F5 DNS Load Balancer Cloud Service (GSLB)

The first component of the solution is F5 Cloud Services. The DNS Load Balancer provides GSLB as a cloud-hosted SaaS service with built-in DDoS protection and an API-first approach. A blue-green deployment aims to minimize downtime due to app deployment, and there are some basic routing mechanisms out of the box with OpenShift that assist in this area. However, if we are looking for a swift routing switch with more flexibility and reliability across different OpenShift clusters, different clouds, or geo locations, this is when F5 DNS Load Balancer Cloud Service comes into the picture.

Setting up DNS for F5 Cloud Services

This solution requires that your corporate DNS server delegates a DNS zone (aka subdomain) to the F5 DNS Load Balancer Cloud Service. An OpenShift cluster typically has its own domain created for the applications, for example: *.apps.<cluster name>.example.com. The end user, however, doesn't really use such a long name and instead queries for www.example.com. A CNAME record is often used to map one domain name (an alias) to another (the true domain name). All set up, this is the DNS scenario:

In case the customer has more than one cluster, one CNAME record per cluster is required, with requests load balanced among clusters. The drawbacks of this type of solution include:

No comprehensive health checking and monitoring

Inability to switch workloads across clusters at speed

Lack of automation and integration with the OpenShift cluster

F5 Cloud Services provides these features in a multi-cluster and multi-cloud infrastructure around the globe, with the ease of a SaaS solution and without the need for infrastructure modifications. You will set up your corporate DNS to use F5 DNS Load Balancer Cloud Service as follows:

Here is a sample configuration for a cloud/corporate DNS:
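A quick way to verify such a setup from a shell, assuming illustrative names (www.example.com aliased into a gslb.example.com zone delegated to the DNS Load Balancer):

# The delegated zone should list the F5 DNS Load Balancer name servers
dig +short NS gslb.example.com.
# The public name should be a CNAME into the delegated zone
dig +short CNAME www.example.com.
# The full resolution should return the Router/Ingress address GSLB selects
dig +short www.example.com.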
You can register for an F5 Cloud Services account and then subscribe to the DNS Load Balancer service here: F5 Cloud Services

F5 GSLB tool for Ansible Automation

The blue-green deployment represents a sequence of steps to roll out your new application. The GSLB tool was developed to provide a common automation plane for both OpenShift and F5 Cloud Services. Leveraging the declarative APIs of F5 DNS Load Balancer Cloud Service and OpenShift, we used Ansible to automate the process. It enables you to standardize and automate release and deployment by orchestrating the build, test, provisioning, configuration management, and deployment tools in your continuous delivery pipeline.

More specifically, the GSLB tool automates your interaction with:

The OpenShift/K8s deployments:

Retrieve Layer 7 routes from a given project/namespace and OpenShift cluster

Copy Layer 7 routes of a given project/namespace from one OpenShift cluster to another

F5 DNS Load Balancer Service:

Create GSLB Load Balanced Records (LBRs) along with the needed pieces (monitors, IP endpoints, pools, etc.)

Set the GSLB ratio for each deployment for a given project/namespace

The benefits of using the GSLB tool to automate the entire process:

Improve speed and scale, especially with hundreds of OpenShift routes

Eliminate room for human error

Achieve deterministic and repeatable outcomes

I want to give credit to my colleague, Ulises Alonso Camaro, who developed the GSLB tool. Please refer to the GitHub for details of the GSLB tool, and the wiki on how to set up and operate it.

Build and Run the Blue-green Deployment

Now we can look at how to use F5 DNS Load Balancer Service and the GSLB tool to canary test the new version and manipulate the traffic routing for blue-green deployment. In blue-green deployment, we deploy two versions of the application running simultaneously in two identical production environments called Blue (OpenShift Cluster 1) and Green (new OpenShift Cluster 2).

Step 1. Retrieve routes from the Blue cluster and push them to F5 DNS Load Balancer Cloud Service

Once you have installed the GSLB tool and configured the deployment settings for your infrastructure, the first set of commands to run is:

./project-retrieve default aws1 && ./gslb-commit "publish routes from Blue cluster to F5 DNS load balancer"

These commands retrieve the OpenShift route(s) from your Blue cluster aws1, and then publish the retrieved routes to F5 DNS Load Balancer Cloud Service.

Step 2. Retrieve routes from the Green cluster and push them to F5 DNS Load Balancer Cloud Service

./project-retrieve default aws2 && ./gslb-commit "publish routes from Green cluster to F5 DNS load balancer"

These commands retrieve the OpenShift route(s) from your Green cluster aws2 and push them to the F5 DNS Load Balancer Cloud Service configuration.

Step 3. Canary test the Green deployment

./project-ratios default '{"aws1": "90", "aws2": "10"}' && ./gslb-commit "canary testing blue and green clusters"

These commands set the traffic ratio for the Blue (90%) and Green (10%) deployments and publish the configuration. As you can see, F5 DNS Load Balancer Cloud Service sets the traffic ratio for each endpoint accordingly.

Step 4. Switch traffic to Green

After the testing succeeds, it is time to switch production traffic to the Green cluster:

./project-evacuate default aws1 && ./gslb-commit "switch all traffic to green cluster"

These commands switch the traffic completely from the Blue to the Green deployment.

More Architectural Patterns

There are many related patterns for blue-green deployment, each of which offers a different focus for an automated production deployment. Some example variants include:

Infrastructure as Code (IaC). In this variant of the pattern, the release deployment target environment does not exist until it is created by the DevOps pipeline. Post-deployment, the original 'blue' environment is scheduled for destruction once the 'green' environment is considered stable in production.

Container-based Deployment. In this variant of the pattern, the release deployment target is represented as a collection of one or more containers. Post-release, once the 'green' environment is considered stable in production, the containers represented by the 'blue' container group are scheduled for destruction.

Our solution can address all blue-green deployment variants, with resources used in the blue and green environments created or destroyed as needed, or geographically distributed. While Continuous Deployment (CD) is a natural fit for blue-green deployment, F5 DNS Load Balancer Cloud Service combined with the GSLB tool can enable many possibilities and support a collection of architectural patterns, including:

Migrate an application from a source cluster (OCP 3.x) to a destination cluster (OCP 4.x); refer here for details

Migrate workloads from a Kubernetes cluster to an OpenShift cluster

Modernize your application deployment with lift and shift
Repackage your application running as a set of VMs into containers, and deploy them into an OpenShift or Kubernetes cluster

Build it into a CI/CD pipeline so that any future changes to the application are built and deployed automatically

We are continuously working on more usage patterns and will explore them in more detail in future blog posts.

What's next?

So, go ahead to the DevCentral GitHub repo, download the source code behind our technologies, and follow the guide to test it out in your own environment.

F5 Essential App Protect and AWS CloudFront - Better Together.
What is the function of any application? In the simplest of terms, an application must provide value to the user. While the definition of "value" may differ, it is universally true that the user experience is critical to the perception of value. The user experience, loyalty, and your brand are predicated on the fact that the application be useable. So, what does it mean to be useable? It means that the application must be available, it must be secure, and it must be fast.

Recently we announced that F5 Essential App Protect (EAP) would be integrating with AWS CloudFront to provide both security and Content Delivery Network (CDN) services for customers. This is an evolutionary step for F5 in our transition to a software and services company. Now, from the same portal where customers deploy applications and employ F5 Essential App Protect for security, in just a couple of clicks or a single API call, they can also access a global CDN leveraging AWS CloudFront. Let's review why EAP is cool, why AWS CloudFront is interesting, and take a look at the steps to enable the feature.

F5 EAP is a high-value, low-time-investment WAF solution allowing customers to leverage F5's leading WAF engine as a service using an intuitive interface. Customers can leverage EAP to meet security needs while incurring minimal operational overhead, paying based on traffic patterns. F5 EAP offers customers access to the sensitive data masking, security signatures, new threat campaigns, and IP reputation services that we have developed over nearly 20 years and continue to evolve. By subscribing to EAP, customers can increase the efficacy of their SOC without increasing headcount or adding to operational burdens, while gaining access to security protections that go far beyond what other services can provide.

Below, the overview dashboard of an application protected by F5 Essential App Protect:

AWS CloudFront is a global CDN that allows users to deploy cacheable content across the globe in over 200 edge points of presence (PoPs). AWS CloudFront is based on AWS best-in-class technology and expansive know-how in building cloud services, and it leverages the AWS global network between the PoP closest to your customers and your Essential App Protect deployment (built on AWS). AWS CloudFront becomes the entry point for all traffic into the application, caching the content that is cacheable and passing the dynamic or non-cacheable portions to the origin.

So why would you want to enable a CDN for your application? The simple answer is user experience. The better an application performs, the more likely it is to be used. By leveraging a CDN, your content can be placed closer to your users and cached. This decreases page load time, something that always makes users happy. A more complete answer includes attributes such as reducing your WAF costs by not having to process cacheable content for each request and removing that traffic load from your origin application systems. The use of a CDN provides benefits for both the content provider and the content consumer. In the graph below you can clearly see the impact that enabling caching had on the origin servers in one of our test applications.

So what does the high-level architecture look like? Below you can see that DNS traffic is processed by AWS Route 53, directing the traffic to one of the AWS CloudFront edge locations.
Traffic that has passed basic validations, such as the host header matching the certificate name, is either served from the edge cache or directed to F5 EAP for further processing. Traffic that passes WAF validation by F5 EAP is then passed to your origin application, whether it is in AWS or located in your data center. For applications that reside in AWS, all of this network traffic uses the AWS global network between the AWS CloudFront edge, F5 EAP, and your app. For applications that reside in a non-AWS location, you can still use F5 EAP with the AWS CloudFront integration (no AWS account required on your part), noting the slight difference in the data path as shown in the architecture below.

Enabling your CDN

Assuming that you already have an application deployed and protected by F5 EAP, let's take a look at updating it and adding AWS CloudFront functionality. From our F5 EAP portal we can see that our application has an FQDN of na2-auction.cloudservicesdemo.net. This application is already deployed, and the DNS flow is as follows: a user's computer performs a DNS lookup for na2-auction.cloudservicesdemo.net, which is a CNAME for your enabled F5 EAP dataplane locations.

First, let's enable caching on our application, which will enable AWS CloudFront.

Next, we need to configure the EdgeTiers (which geographies) we would like to leverage to place the content closer to the users.

Now we should filter any cookies or headers that we would like forwarded, and enable compression.

Since our application was already deployed on F5 EAP, we will use the same CNAME and direct our traffic into AWS CloudFront. After a brief period our F5 Essential App Protect deployment will have an integrated AWS CloudFront distribution, and you will start to see caching metrics as traffic traverses the data path as described in the architecture.

What if you need to invalidate some content? That is just as easy as enabling the feature! First you create the invalidation (purge); when you click save, the process runs and deletes the matching items from the cache, based on your EdgeTier selections.

What about support for AWS CloudFront? That is on F5. We will deploy the AWS CloudFront distribution for you, and it is part of the service that you subscribed to. If you experience issues, you contact us at F5. We will diagnose and, if necessary, work with AWS to resolve the issue. No more having to coordinate multiple vendors for your application, and no more getting stuck between two vendors that cannot agree on a resolution.

That's it! You have enabled an integrated security and CDN solution based on F5 EAP and AWS CloudFront in a few clicks from a single user interface! F5 Essential App Protect with our integrated AWS CloudFront offering can handle the full lifecycle of your application security and edge performance optimization needs. Take it for a test drive from the F5 Cloud Services portal.
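As a final sanity check after the cutover, you can confirm from a shell that responses are being served through the CDN; x-cache and via are standard CloudFront response headers:

# Inspect the response headers returned for the application's FQDN
curl -sI https://na2-auction.cloudservicesdemo.net/ | egrep -i '^(x-cache|via)'
# "X-Cache: Hit from cloudfront" indicates the object was served from an edge cache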
Adopting Site Reliability Engineering with F5

Foreword

The role of Site Reliability Engineering (SRE) is common in cloud-first enterprises and becoming more widespread in traditional IT teams. Here, we would like to kick off this article series to look at the concepts that give SRE shape, outline the primary tools and best practices that make it possible, and explore some common use cases around Continuous Deployment (CD) strategy, visibility, and security.

While SRE and DevOps share many areas of commonality, there are significant differences between them. DevOps is a loose set of practices, guidelines, and culture designed to break down silos between Development, IT operations, Network, and Security teams. DevOps does not tell you how to run operations at a detailed level. On the other hand, SRE, a term pioneered by Google, brings an opinionated framework to the problem of how to run operations effectively. If you think of DevOps as a philosophy, you can argue that SRE implements some of the philosophy that DevOps describes. In a way, SRE implements DevOps practices. After all, SRE only works at all if we have tools and technologies to enable it.

Balancing Release Velocity and Reliability

SRE aims to find the balance between feature velocity and reliability, which are often treated as opposing goals. Despite the risk of making changes to software, these changes are necessary for the business to succeed. Instead of advocating against change, SRE uses the concept of Service Level Objectives (SLOs) and error budgets to measure the impact of releases on reliability. The goal is to ship software as quickly as possible while meeting the reliability targets the users expect.

While there are a wide range of ways an SRE-focused IT team might optimize the balance between agility and stability, two deployment models stand out for their widespread applicability and general ease of execution:

Blue-green deployment

For SRE, availability is currently the most common SLO. If delivering new software to your users with uninterrupted access is truly required, there needs to be engineering work to implement load balancing or fractional release measures like blue-green or canary deployments to minimize any downtime. Recovery is a factor too.

The idea behind blue-green deployment is that your blue environment is your existing production environment carrying live traffic. In parallel, you provision a green environment, which is identical to the blue environment other than the new version of your code. As you prepare a new version of your software, deployment and the final stage of testing take place in the environment that is not live: in this example, green (the new OpenShift cluster). When it's time to deploy, you route production traffic from the blue environment to the green environment. This technique can eliminate downtime due to app deployment. In addition, blue-green deployment reduces risk: if something unexpected happens with your new version in green, you can immediately roll back by reverting traffic to the original blue environment.

When you are looking to manipulate traffic with more flexibility and reliability, across different clusters, different clouds, or geo locations, this is where F5 DNS Load Balancer Cloud Service comes into the picture. F5 Cloud Services GSLB is a SaaS offering. It can provide automatic failover, load balancing across multiple locations, increased reliability by avoiding a single point of failure, and increased performance by directing traffic to the optimal site.
This allows SRE to move fast while still maintaining an enterprise-grade service.

Targeted canary deployment

Another approach to promoting availability for an SRE SLO is canary deployment. In some cases, swapping out the entire deployment via a blue-green environment may not be desired. In a canary deployment, you upgrade an application on a subset of the infrastructure and allow a limited set of users to access the new version. This approach allows you to test the new software under a production-like load, evaluate how well it meets users' needs, and assess whether new features are profitable.

One approach, often used by Azure DevOps, is the ring deployment model. Users fall into three general buckets based on their respective risk profiles:

Ring 1 - Canaries who voluntarily test bleeding-edge features as soon as they are available.

Ring 2 - Early adopters who voluntarily preview releases, considered more refined than the canary bits.

Ring 3 - Users who consume the products, after they have passed through canaries and early adopters.

Developers can promote and target new versions of the same application (versions 1.2, 1.1, 1.0) to targeted users (rings 1, 2 and 3) respectively, without involving or waiting on the infrastructure operations team (NoOps). To identify the user for the right version, you may choose to simply use the IP address, authenticate directly at the backend, or add an authentication layer in front of the backend. F5 technologies can help enable this targeted canary use case:

BIG-IP APM in the North-South path authenticates and identifies users as ring 1, 2 or 3, and injects the user identification into an HTTP header.

This identification is passed on to the NGINX Plus micro-gateway, which directs users to the correct microservice versions.

Combining BIG-IP and NGINX, this architecture uniquely gives SRE the flexibility to adapt, with the ability to define baseline service control and security (for NetOps or SecOps) while extending more granular and enhanced security controls to the developer team (for DevOps).

The need for observability

For SRE, the heart of implementing SLOs practically is monitoring. You can't understand what you can't see. A classic and common approach to monitoring is to watch for a specific value or condition, and then to trigger an alert when that value is exceeded or that condition occurs. One of the valid monitoring outputs is logging, which is recorded for diagnostic or forensic purposes. The ELK stack, a collection of three open-source projects, namely Elasticsearch, Logstash and Kibana, provides IT project stakeholders the capability of multi-system and multi-application log aggregation and analysis. ELK can be utilized for the analysis and visualization of application metrics through a centralized dashboard.

With general visibility in place, tracking can be enabled in order to add a level of specificity to what is being observed. Taking advantage of an iRule on BIG-IP, NetOps can generate a UUID and insert it into the HTTP header of every HTTP request packet arriving at BIG-IP. All traffic access logs containing UUIDs, from BIG-IP and NGINX, are sent to the ELK server, enabling validation of information such as user location and response time by user location. Through the dashboard, end users can easily correlate North-South traffic (processed by BIG-IP) with East-West traffic (processed by NGINX+ inside the cluster), for end-to-end performance visibility. In turn, tracking performance metrics opens up the possibility of defining service level objectives (SLOs).
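Once the UUID is indexed in Elasticsearch, correlating the North-South and East-West legs of a single request is a query away; a minimal sketch, where the host, index pattern, field name, and UUID value are all assumptions about the ELK setup:

# Find every access-log entry (BIG-IP and NGINX) that carries a given request UUID
curl -s 'http://elk.example.com:9200/access-logs-*/_search' \
     -H 'Content-Type: application/json' \
     -d '{"query": {"match": {"x_request_id": "3f2a9c0e-placeholder-uuid"}}}'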
With observability, security is possible

Security incidents will always occur, and hence it's essential to integrate security into observability. What's most important is giving reliability engineers the tools so that they can identify a security problem, work around it, and fix it as quickly as possible. Using the right set of tools, you can build custom autogenerated dashboards and tooling that expose the collected information to engineers in a way that makes it much easier to sort through everything and determine the root cause of a security problem. These include tools like the Kibana dashboard, which allows engineers to investigate incidents, apply filters, and quickly pinpoint suspicious traffic and its source.

In concert with F5 Advanced WAF and NGINX App Protect, SRE can protect applications against software vulnerabilities and common attacks from both inside and outside microservice clusters. When BIG-IP Advanced WAF or NGINX App Protect detects suspicious traffic, it sends an alert with details to the ELK stack, which will index and process the data and then execute a pre-defined Ansible playbook to enforce security policy in Kubernetes or NGINX App Protect for immediate remediation. SRE can thereby not only identify but also rectify anomalies by enacting security policy enforcement along the data path. Detect once, protect everywhere.

What's next?

This serves as an introduction to, and the first article of, this SRE article series. In the coming articles, we will dive deeper into each of the use cases, to showcase the technical details of how we are leveraging F5 technologies and capabilities to help SRE bring together DevOps, NetOps, and SecOps to develop the safeguards and implement the best practices. To learn more about developing a business case for SRE in your organization, please reach out to F5 Business Development. For technical details and additional information, see this DevCentral GitHub repo.

Blocking Zero-day WordPress Attacks with F5 Essential App Protect
Overview

A recent ZDNet article reports that millions of WordPress sites are being probed and compromised due to a vulnerability in the popular "WP File Manager" plugin. Defending against attacks of this type is one of the fundamental use cases for F5 Essential App Protect, which can block the malicious request and prevent the site from being compromised. This proof-of-concept article demonstrates what happens to a vulnerable WordPress site with and without F5 Essential App Protect.

Attack without Essential App Protect

Our test WordPress site at http://blog.haxrip.net was previously used in a blog post where we set up protection in less than 5 minutes. For this test we removed the instance that protected our site and activated an older, vulnerable version of the WP File Manager plugin:

We're now ready to send a payload via a Python script that exploits connector.minimal.php on our test site. This results in the upload of an arbitrary file and thus exposes the possibility of remote code execution, as per this vulnerability report. The end result is the upload of a file, hacked.php, as you can see in this "before" and "after" view of the directory targeted by the exploit:

This allows us to execute remote code by running hacked.php from a browser, which is a bad thing! Besides completely opening this WordPress site to modifications, this exposes potentially sensitive information like passwords, credentials, file structure, certificates & keys, and other info on the remote system, which is likely to enable an attacker to grow their attack footprint.

So how can you avoid this?

Enter Essential App Protect

Now let's "roll back" our attacked WordPress deployment by removing the hacked.php file from our system, and try the same attack after we deploy an instance of Essential App Protect. This takes just a few minutes, with the screenshots below highlighting the main 3 steps of our deployment (a full deployment walk-through is available in the documentation):

1. Set up protection using the FQDN of our WordPress site: blog.haxrip.net

2. Confirm the deployment region, use an http / port 80 listener (for simplicity), and accept all of the default configuration options:

3. Use the generated CNAME value to configure the DNS entry for our blog to point traffic to our new Essential App Protect instance:

That's it! We are using Route 53 for our DNS management, so the change propagates in just a few minutes. While that happens, we will switch our service from "Monitoring" to "Blocking" mode, to give our vulnerable site the protection it so badly needs!

Now we'll re-run the same Python script that was previously used to exploit the vulnerability. This time it fails, as the payload gets detected and blocked by our instance of F5 Essential App Protect. In the main dashboard's Events View, we can see that our attempt has been flagged: 4 signatures were picked up to correctly assess this request as an attack and block it.

At this point, our site is protected from this and many other attacks, based on thousands of signatures that are continuously updated from F5 Threat Labs. The beauty of this platform is that it also uses predictive intelligence to detect abnormalities in requests even if the exact attack signature hasn't been captured yet. This means that we have a much higher chance of successfully detecting and preventing zero-day attacks on web applications.
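The blocked attempt can also be reproduced from a shell; a minimal sketch, where the connector path and form parameters follow the general shape of the public proof-of-concept for this plugin vulnerability and may vary by plugin version:

# Re-send the file-upload payload to the vulnerable elFinder connector
curl -s -o /dev/null -w "%{http_code}\n" \
     -F "cmd=upload" -F "target=l1_" -F "upload[]=@hacked.php" \
     "http://blog.haxrip.net/wp-content/plugins/wp-file-manager/lib/php/connector.minimal.php"
# With Essential App Protect in blocking mode, the request is intercepted and the
# blocking response page is returned instead of a successful upload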
Conclusion

F5 Essential App Protect is an effective platform to quickly protect potentially vulnerable instances of WordPress against the particular exploit that's making the news rounds this week. Note that in our tests an already-exploited site is likely to remain vulnerable to this and other attacks, so please set up protection and make sure your deployments and all of your plug-ins are up to date! Essential App Protect takes just a few minutes to deploy and sits between a hacker and a targeted website, scrubbing requests across a number of pre-configured attack vectors, using signatures and predictive intelligence that provide holistic protection for externally facing web apps. Find out more and get a trial set up today!

Visibility and Orchestration
Introduction

Applications living in public clouds such as AWS, Azure, GCP, Alibaba Cloud, delivered via CI/CD pipelines and deployed using Kubernetes or Docker, introduce significant challenges from a security visibility and orchestration perspective. This article is the first of a series, and aims at defining the tenets of effective security visibility and exploring a journey to achieving end-to-end visibility into the application construct. Considering visibility from the inception of a service or application is crucial for a successful implementation of security. Comprehensive visibility enables security- and performance-oriented insights, generation of alerts, and orchestration of action based on pertinent events. The end goal of integrating visibility and orchestration in application management is to provide a complete framework for the operation of adaptive applications with the appropriate level of security.

Initial Premise and Challenge

For the purposes of this article, an application is defined as a collection of loosely coupled microservices (ref. https://en.wikipedia.org/wiki/Microservices) that communicate over the back- and/or front-end network to support the frontend. Microservices are standalone services that provide one or more capabilities of the application. They are deployed across multiple clouds and application management infrastructures. The following shows a sample application built on microservices (ref. https://github.com/GoogleCloudPlatform/microservices-demo).

The challenge posed by the infrastructure above lies in providing comprehensive visibility for this application. From a confidentiality, integrity and availability standpoint, it is essential to have this visibility to ensure that the application is running optimally and that users are guaranteed a reasonable level of reliability.

Visibility

In this article we consider two distinct aspects of security: posture assessment and monitoring. The first is based on the review and assessment of the configuration of the security measures protecting the application. For example, the analysis aims to answer questions such as:

Is the application protected by an advanced web application firewall?

What threats are the application and/or the microservices protected from? DoS? Bots? Application vulnerabilities?

Is the communication between microservices confidential?

The second aspect of security visibility is dynamic in nature. It relies on the collection of logs and intelligence from the infrastructure and control points. The information collected is then processed and presented in an intelligible fashion to the administrator. For example, log gathering aims to answer questions such as:

How much of the traffic reaching my applications and their components is "good traffic", and how much of it is coming from bots and scans?

Does the application traffic hitting my servers conform to what I am expecting? Is there any rogue user traffic attempting to exploit possible vulnerabilities to disrupt availability of my application or to gain access to the data it references?

Are the protection policies in place appropriate and effective?

The figure below gives a graphical representation of possible feeds into the visibility infrastructure.

Once the data is ingested and available from the assessment and monitoring, we can use it to produce reports, trigger alerts and extract insights. Strategies for reporting, alerting and insights will vary depending on requirements, infrastructure and organizational/operational imperatives.
In the following articles, we will explore different ways to generate and distribute reports, along with strategies for alerting and for providing insights (dashboards, ChatOps, etc.). From an F5-centric perspective, the figure below shows the components that can contribute to the assessment and monitoring of the infrastructure:

Orchestration
To consume security visibility with an adaptive application, it must be implemented as part of the CI/CD pipeline. The aim is to deploy it alongside the application. Any dev modification triggers the inclusion and testing of the security visibility. Likewise, any change in the security posture (new rules, a modified policy, etc.) triggers the pipeline, and hence testing of the deployment and validation of security efficacy. From an application deployment point of view, inserting F5 solutions into the application through the pipeline would look something like the picture below. In the picture above, BIG-IP, NGINX Plus / NGINX App Protect, F5 Cloud Services, and Silverline/Shape are inserted at key points in the infrastructure and provide control points for the traffic traversing the application. Each element in the chain also generates telemetry and other information about the component it protects. In conjunction with the control points, security can also be provided through F5's Aspen Mesh service mesh for Kubernetes environments. The implementation of security is integrated into the CI/CD pipeline so as to tightly couple the testing and deployment of security with the application. An illustration of such a process is provided in the figure below. In the figure above, changes to the application's security are tightly coupled with the CI/CD pipeline. This might not always be practical for various reasons; however, it is a desirable goal that considerably simplifies security and overall operations. The figure shows that any change in the service or workload code, its security, or the API endpoints it serves triggers the pipeline for validation and deployment across the different environments (production, pre-production/staging, testing). The figure does not show the different integration points, testing tools (unit and integration testing), or static code analysis tools, as those are beyond the scope of this discussion. Eventually, the integration of AI/ML and other tooling should enable security orchestration, automation and response (SOAR). With greater integration and automation with other network and infrastructure security tools (DoS prevention, firewall, CDN, DNS protection, intrusion prevention) and the automated framework, it is conceivable to offer important SOAR capabilities. Inserting and managing visibility, inference, and automation in the infrastructure enables coordinated, automated action on any element of the cyber kill chain, providing enhanced security and end-to-end remediation.
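As one illustration of the "validation of security efficacy" step above, a pipeline stage can probe the freshly deployed environment to confirm the control point is actually enforcing policy. The sketch below assumes a hypothetical staging URL and assumes the WAF in front of it answers an attack-shaped request with a non-200 blocking response; adjust both to your deployment.

```python
#!/usr/bin/env python3
"""Sketch of a CI/CD security-efficacy smoke test (assumptions noted below).

Assumptions (hypothetical): STAGING_URL is the pipeline's staging endpoint,
and the WAF in the data path blocks an obvious SQL injection probe.
The script exits non-zero so the pipeline stage fails if protection is absent.
"""
import sys

import requests  # third-party HTTP client, commonly available in CI images

STAGING_URL = "https://staging.example.com/api/items"  # placeholder


def check(url: str) -> bool:
    # A benign request should pass through the WAF to the application.
    benign = requests.get(url, params={"q": "socks"}, timeout=10)
    if benign.status_code != 200:
        print(f"FAIL: benign request returned {benign.status_code}")
        return False
    # A request carrying a classic SQL injection probe should be blocked.
    hostile = requests.get(url, params={"q": "' OR '1'='1"}, timeout=10)
    if hostile.status_code == 200:
        print("FAIL: attack-shaped request was not blocked")
        return False
    print(f"OK: benign request passed, attack blocked ({hostile.status_code})")
    return True


if __name__ == "__main__":
    sys.exit(0 if check(STAGING_URL) else 1)
```

Run as a post-deployment pipeline stage, a failing exit code stops the promotion to production, which is exactly the tight coupling between security changes and the pipeline described above.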
Journey
It is understood that security frameworks cannot be implemented overnight. This is true for both existing and net-new infrastructure. The journey to integration will meet resistance from both the security (SecDevOps) group and the application (DevOps) organization. The former needs to adapt to tools that seem foreign but are incredibly accessible and powerful. The latter feels that security is a hindrance, delays delivery of the application, and is anything but agile. Neither sentiment is based on more than anecdotal evidence. F5 offers a set of possible integrations for automated pipelines, including:
- A comprehensive automation toolchain for BIG-IP (link).
- F5-supported integrations with multiple infrastructure-as-code and automation tools such as Terraform (link) or Ansible (link).
- Integrations with all major cloud providers, including Amazon Web Services (link), Google Cloud Platform (link), and Microsoft Azure (link).
- Full RESTful APIs to manage all F5 products, including Cloud Services, Shape, and NGINX Plus instances.

These integrations are supported and maintained by F5 and available across all products. The following are possible steps in the journey toward complete visibility with F5 products; these steps, along with recommendations, will be discussed in greater detail in follow-up articles:
- Architect the insertion of F5 control points (Cloud Services, BIG-IP, NGINX, Shape, etc.) wherever possible. This differs somewhat based on the type of infrastructure (e.g., deployment in an on-premises datacenter versus a cloud deployment).
- Automate the insertion and configuration of the F5 control points, leveraging the centralized management platforms and other F5-provided tools to make the process easy.
- Automate the configuration of logging and telemetry for the different control points so that they feed into the centralized management platforms and other visibility tools such as Beacon, ELK, or other 3rd-party utilities (e.g., a SIEM).
- Based on the available tools, define a logging/telemetry/signaling strategy to collect and build a global view of the infrastructure. This might include layering technologies to distill the information as it is interpreted and aggregated; for example, leverage BIG-IQ as the first line of logging and visibility for BIG-IP.
- Leveraging F5 Beacon or other 3rd-party tools, build custom dashboards offering the appropriate insights to each party involved in the delivery and security of the applications.
- Enhance the visibility with a reporting strategy that allows information to be shared among different groups as well as different platforms (e.g., a SIEM).
- Build a centralized solution that collects metrics, logs, and telemetry from the different platforms to build meaningful insights.
- (Optional) Integrate with flexible ops tools such as ChatOps (Slack, MS Teams, etc.) and other visibility tools via webhooks; a minimal sketch follows below.
- (Optional) Integrate with ML/AI tools to speed the discovery and creation of insights.
- (Optional) Progressively build SOAR and other capabilities leveraging F5 and/or 3rd-party tools.

Keep in mind that applications are fluid and adaptive. This raises important challenges with regard to maintaining and adjusting the visibility so that it remains relevant and efficient, and it justifies integrating security into the CI/CD pipeline in order to remain nimble and adapt at the speed of business.
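For the optional ChatOps step above, here is a minimal sketch of forwarding a security alert to a Slack channel through an incoming webhook. The webhook URL and the alert dictionary are placeholders: the URL would be provisioned in Slack for your channel, and the alert fields mirror whatever your own visibility tooling emits, not an F5-specific schema.

```python
#!/usr/bin/env python3
"""Minimal ChatOps sketch: forward a security alert to Slack.

Assumptions (hypothetical): SLACK_WEBHOOK_URL is a Slack incoming-webhook
URL for the target channel, and the alert dict shape is defined by your
own tooling. Slack incoming webhooks accept a JSON body with a 'text' key.
"""
import json
import urllib.request

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder


def post_alert(alert: dict) -> int:
    # Compose a one-line, human-readable message for the channel.
    text = (f":rotating_light: {alert['severity'].upper()} on {alert['app']}: "
            f"{alert['summary']} (source: {alert['source']})")
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status  # Slack returns HTTP 200 on success


if __name__ == "__main__":
    post_alert({
        "severity": "high",
        "app": "storefront",
        "summary": "Spike in blocked SQLi attempts (312 in 5 min)",
        "source": "control-point security logs",
    })
```

The same pattern, a small adapter between the visibility platform and a team's communication channel, applies to MS Teams or any other tool that exposes an inbound webhook.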
Conclusion
This article aims to help you define a journey toward building a comprehensive, resilient, and scalable application security visibility framework. The context for this journey is that of adaptive applications that evolve constantly. Note that the same principles apply to common off-the-shelf applications, as they too are extended and communicate with third-party applications. The journey may begin by defining the goals for implementing security visibility and then a strategy for reaching them. To be successful, the security strategy should include a "shift-left" plan to integrate into the CI/CD pipeline, removing friction and complexity from an implementation standpoint. Operational efficiency and usability should remain the guiding principles at the forefront of the strategy, to ensure continued use, evolution, and rapid change. The next articles in this series will help you along this journey, leveraging F5 technologies such as BIG-IP, BIG-IQ, NGINX Plus, NGINX App Protect, Essential App Protect, and other F5 Cloud Services to gather telemetry and configuration information. Other articles will discuss avenues for visibility and alerting services leveraging F5 services such as Beacon, as well as 3rd-party utilities and infrastructure. At the heart of this article is the implementation of proxies and control points throughout the networked infrastructure to gather data and assess the application's security posture. As a continuation of this discussion, the reader is encouraged to consider other infrastructure aspects such as static code analysis, infrastructure isolation (which process/container/hypervisor connects to which peer, etc.), and monitoring.