F5 Cloud Services
F5 Essential App Protect: Go!
App deployments nowadays tend to target API-driven distributed services and microservices-based topologies. How do you move fast when it comes to “securing an app” when you have so many things to worry about: which services are part of the overall topology, where and how these services are deployed (VMs, containers, PaaS), and which technology stacks and frameworks these services are built on?

Secure Your Apps at “Ludicrous Speed”

As evidenced by the OWASP Top 10, we know that one of the most critical attack surfaces is web-facing app front-ends. From cross-site scripting (XSS) attacks, to injection, to exploits through third-party scripts, there is a lot to be concerned about, especially when you take into account the common practice of using external libraries and open source components in JavaScript-based apps. And while security absolutely needs to be part of the development process, both tech and attacks are evolving so rapidly that most dev teams can’t be expected to keep up!

With that in mind our team built F5 Essential App Protect, to enable “LUDICROUS SPEED” for securing your web apps. We architected it to deliver Web Application Firewall functionality with more capabilities, delivered as-a-Service with F5 Cloud Services. Sparing you what may sound like an obvious marketing spiel, like the simplicity of the UI, applying F5’s 20+ years of security expertise, or the speed of deployment and integration (it’s a SaaS, duh)... let me focus on a few reasons why I’m personally excited about our implementation of this solution:

Built for global app architectures

It’s deployed on a global data plane, which means you can co-locate the service close to the application or service endpoint that’s being protected.
For example, an HTTP request that would typically be routed to a US-EAST based app doesn’t need to “bounce” around the world to get processed; Essential App Protect automatically detects and recommends US-EAST as a region and deploys a protection instance in the region closest to your web service, resulting in minimal latency. This supports the “any app on any cloud” mantra, without sacrificing performance.

Forward-looking protection

Besides using over 5,000 signatures right out of the gate to check for malicious traffic, Essential App Protect continuously ingests new signatures from the F5 Threat Labs and stays current to help defend against developing threats. On top of that, it also uses an advanced probability-based rating system that anticipates malicious requests and improves as the platform evolves. Simply put, we stay on top of the rapidly evolving threat landscape, so that you don’t have to!

Simple on-ramp, easy APIs

The north star of Essential App Protect is to make app security simple yet flexible, not only from the UI but also for DevOps scenarios, with an API-first approach. This means you can onramp protection for your app with a couple of declarative API calls, from zero to ready in just a minute. Everything is defined through one simple JSON template, which makes it very easy to integrate into your CI/CD pipeline. All of the config, from tuning protection options to accessing security event logs, is done through APIs. This makes automation a no-brainer, be it the initial deployment or managing a consistent security policy across your dev/test/prod environments for all of your app deployments.
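To make the “one simple JSON template” idea concrete, here is a minimal sketch of what a declarative on-ramp looks like. The field names below are illustrative stand-ins (loosely modeled on the subscription payload shown later in this collection), not the authoritative Essential App Protect schema, and no live API call is made.

```python
import json

# Hypothetical declarative template: the whole protection config as one
# JSON document, suitable for version control and a CI/CD pipeline step.
declaration = {
    "service_type": "waf",
    "configuration": {
        "application": {
            "domain": "blog.example.com",               # illustrative value
            "endpoint": {"ips": ["203.0.113.10"], "port": 80},
        },
        "policy": {"enforcement_mode": "blocking"},
    },
}

# A pipeline would POST this payload in a single API call; here we only
# show that it serializes and round-trips cleanly as one document.
payload = json.dumps(declaration, indent=2)
restored = json.loads(payload)
print(restored["configuration"]["application"]["domain"])  # blog.example.com
```

Because the entire configuration lives in one document, promoting the same policy across dev/test/prod is just a matter of re-submitting the template with different application values.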
“Go ahead, take it for a spin!”

F5 Essential App Protect provides the enterprise-grade security you need to keep your web-facing apps safe. It is delivered as-a-Service, with no hardware to manage or software to download. And you don’t need to be a security expert, because the service is pre-configured using the best practices we’ve compiled while working with top enterprises for the last 20 years. We architected it for the cloud and global delivery, while focused on future-proofing your app protection and making it DevOps ready out of the gate. Check out Essential App Protect. Go ahead, sign up for the free trial, and check out the new Essential App Protect Lab on GitHub... Go!

Exploring Kubernetes API using Wireshark part 1: Creating, Listing and Deleting Pods
Related Articles: Exploring Kubernetes API using Wireshark part 2: Namespaces; Exploring Kubernetes API using Wireshark part 3: Python Client API

Quick Intro

This article answers the following question: what happens under the hood when we create, list and delete pods? More specifically, on the wire. I used these 3 commands. I'll show you on Wireshark the communication between the kubectl client and the master node (API) for each of the above commands. I used a proxy so we don't have to worry about the TLS layer and can focus on HTTP only.

Creating NGINX pod

pcap: creating_pod.pcap (use http filter on Wireshark)

Here's our YAML file: Here's how we create this pod: Here's what we see on Wireshark: Behind the scenes, the kubectl command sent an HTTP POST with our YAML file converted to JSON, but notice the same things were sent (kind, apiVersion, metadata, spec). You can even expand it if you want to, but I didn't, to keep it short. Then, the Kubernetes master (API) responds with HTTP 201 Created to confirm our pod has been created: Notice that the master node replies with similar data, with an additional status field, because after the pod is created it's supposed to have a status too.

Listing Pods

pcap: listing_pods.pcap (use http filter on Wireshark)

When we list pods, kubectl just sends an HTTP GET request instead of POST, because we don't need to submit any data apart from headers: This is the full GET request: And here's the HTTP 200 OK with a JSON body that contains all information about all pods from the default namespace: I just wanted to emphasise that when you list pods the resource type that comes back is PodList, and when we created our pod it was just Pod. Remember? The other thing I'd like to point out is that all of your pods' information is listed under items. All kubectl does is display some of the API's info in a human-readable way.
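The three commands this article examines reduce to a simple verb-to-request mapping against the core v1 API. The sketch below summarizes that mapping (using the URIs seen in the captures, default namespace) and shows what listing boils down to: reading pod names out of the PodList's items array. No cluster is needed to run it.

```python
# Sketch of the HTTP requests kubectl issues for create/list/delete,
# matching the URIs seen in the captures (core v1 API).

def pod_request(verb, namespace="default", name=None):
    """Map a kubectl-style verb to its (HTTP method, URI path) pair."""
    collection = "/api/v1/namespaces/%s/pods" % namespace
    if verb == "create":
        return ("POST", collection)            # body: the Pod manifest as JSON
    if verb == "list":
        return ("GET", collection)             # response kind: PodList
    if verb == "delete":
        return ("DELETE", "%s/%s" % (collection, name))
    raise ValueError("unsupported verb: %s" % verb)


def pod_names(pod_list):
    """What 'kubectl get pods' boils down to: reading names out of 'items'."""
    assert pod_list["kind"] == "PodList"
    return [item["metadata"]["name"] for item in pod_list["items"]]


print(pod_request("delete", name="nginx"))
# ('DELETE', '/api/v1/namespaces/default/pods/nginx')
```

Note how the create and list verbs target the pods collection, while delete targets the individual pod resource by appending its name, exactly as seen in the pcaps.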
Deleting NGINX pod

pcap: deleting_pod.pcap (use http filter on Wireshark)

Behind the scenes, we're just sending an HTTP DELETE to the Kubernetes master: Also notice that the pod's name is included in the URI: /api/v1/namespaces/default/pods/nginx ← this is the pod's name. HTTP DELETE, just like HTTP GET, is pretty straightforward: Our master node replies with HTTP 200 OK, as well as a JSON body with all the info about the pod, including its termination: It's also good to emphasise here that when our pod is deleted, the master node returns a JSON file with all information available about the pod. I highlighted some interesting info. For example, the resource type is now just Pod (not PodList, as when we're listing our pods).

Automating certificate lifecycle management with HashiCorp Vault
One of the challenges many enterprises face today is keeping track of various certificates and ensuring that those associated with critical applications deployed across multi-cloud are current and valid. This integration helps you improve your security posture with short-lived dynamic SSL certificates using HashiCorp Vault and AS3 on BIG-IP.

First, a bit about AS3…

Application Services 3 Extension (referred to as AS3 Extension or, more often, simply AS3) is a flexible, low-overhead mechanism for managing application-specific configurations on a BIG-IP system. AS3 uses a declarative model, meaning you provide a JSON declaration rather than a set of imperative commands. The declaration represents the configuration which AS3 is responsible for creating on a BIG-IP system. AS3 is well-defined according to the rules of JSON Schema, and declarations validate according to JSON Schema. AS3 accepts declaration updates via REST (push), reference (pull), or CLI (flat file editing).

What is Vault?

Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, or certificates. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log. A modern system requires access to a multitude of secrets: database credentials, API keys for external services, credentials for service-oriented architecture communication, and so on. Understanding who is accessing what secrets is already very difficult and platform specific. Adding on key rolling, secure storage, and detailed audit logs is almost impossible without a custom solution. This is where Vault steps in. Public Key Infrastructure (PKI) provides a way to verify authenticity and guarantee secure communication between applications. Setting up your own PKI infrastructure can be a complex and very manual process. Vault PKI allows users to dynamically generate X.509 certificates quickly and on demand.
Vault PKI can streamline distributing TLS certificates and allows users to create PKI certificates with a single command. Vault PKI reduces the overhead of the usual manual process of generating a private key and CSR, submitting to a CA, and waiting for the verification and signing process to complete, while additionally providing an authentication and authorization mechanism to validate requests.

Benefits of using Vault automation for BIG-IP:

Cloud and platform independent solution for your application anywhere (public cloud or private cloud)
Uses the Vault agent and leverages AS3 templating to update expiring certificates
No application downtime - dynamically update configuration without affecting traffic

Configuration:

1. Setting up the environment - deploy instances of BIG-IP VE and Vault in cloud or on-premises

You can create instances in the cloud for Vault and BIG-IP using Terraform; see the repo https://github.com/f5devcentral/f5-certificate-rotate. This will pretty much download the Vault binary and start the Vault server, and it will also deploy the F5 BIG-IP instance on the AWS Cloud. Once we have the instances ready, you can SSH into the Vault Ubuntu server, change directory to /tmp, and execute the commands below.
# Point to the Vault server
export VAULT_ADDR=http://127.0.0.1:8200

# Export the Vault token
export VAULT_TOKEN=root

# Create a role and define allowed domains, with TTLs for certificates
vault write pki/roles/web-certs allowed_domains=demof5.com ttl=160s max_ttl=30m allow_subdomains=true

# Enable the AppRole auth method
vault auth enable approle

# Create an app policy and apply it (https://github.com/f5devcentral/f5-certificate-rotate/blob/master/templates/app-pol.hcl)
vault policy write app-pol app-pol.hcl

# Apply the app policy using the app role
vault write auth/approle/role/web-certs policies="app-pol"

# Read the role id from Vault
vault read -format=json auth/approle/role/web-certs/role-id | jq -r '.data.role_id' > roleID

# Using the role id, fetch the secret id used to authenticate to the Vault server
vault write -f -format=json auth/approle/role/web-certs/secret-id | jq -r '.data.secret_id' > secretID

# Finally, run the Vault agent using the config file
vault agent -config=agent-config.hcl -log-level=debug

2. Use the AS3 template file certs.tmpl with the values as shown

The template file shown below will be automatically uploaded to the Vault instance (the Ubuntu server) in the /tmp directory. Here I am using an AS3 file called certs.tmpl, which is templatized as shown below.
{{ with secret "pki/issue/web-certs" "common_name=www.demof5.com" }}
[
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/remark",
    "value": "Updated on {{ timestamp }}"
  },
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/certificate",
    "value": "{{ .Data.certificate | toJSON | replaceAll "\"" "" }}"
  },
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/privateKey",
    "value": "{{ .Data.private_key | toJSON | replaceAll "\"" "" }}"
  },
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/chainCA",
    "value": "{{ .Data.issuing_ca | toJSON | replaceAll "\"" "" }}"
  }
]
{{ end }}

3. Vault will render a new JSON payload file called certs.json whenever the SSL certificate expires

When the certificate expires, Vault generates a new certificate which we can use to update the BIG-IP using an SSH script. The snippet below shows the certs.json created automatically:

[
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/remark",
    "value": "Updated on 2020-10-02T19:05:53Z"
  },
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/certificate",
    "value": "-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIUaMgYXdERwzwU+tnFsSFld3DYrkEwDQYJKoZIhvcNAQEL\nBQAwEzERMA8GA1UEAxMIZGVtby5jb20wHhcNMjAxMDAyMTkwNTIzWhcNMj

4. Use the Vault agent file to run the integration continuously without application traffic being affected

Example Vault agent file:

pid_file = "./pidfile"

vault {
  address = "http://127.0.0.1:8200"
}

auto_auth {
  method "approle" {
    mount_path = "auth/approle"
    config = {
      role_id_file_path = "roleID"
      secret_id_file_path = "secretID"
      remove_secret_id_file_after_reading = false
    }
  }
  sink "file" {
    config = {
      path = "approleToken"
    }
  }
}

template {
  source = "./certs.tmpl"
  destination = "./certs.json"
  #command = "bash updt.sh"
}

template {
  source = "./https.tmpl"
  destination = "./https.json"
}

5. For integration with HCP Vault

If you are using the HashiCorp-hosted Vault solution instead of standalone Vault, you can still use this solution by making a few changes in the Vault agent file.
Detailed documentation for using HCP Vault is in the README. You can map tenant application objects on BIG-IP to a Namespace on HCP Vault, which provides isolation. More details on how to create this solution at https://github.com/f5businessdevelopment/f5-hcp-vault

Summary

The integration has the components listed below; Venafi or Let's Encrypt can also be used as the external CA. Using this solution, you are able to:

Improve your security posture with short-lived dynamic certificates
Automatically update applications using templating and the robust AS3 service
Increase collaboration by breaking down silos
Deploy a cloud-agnostic solution on-prem or in the public cloud

Protect your web app in under 5 minutes.
Background

Adding protection for your web-facing app shouldn’t require you to be an expert in security or networking. Having no deep expertise in either area made me an ideal candidate to try out F5’s new SaaS offer: F5 Essential App Protect Service. In my last article I used Amazon Lightsail to set up a full WordPress stack with a new domain. While the full app stack approach is incredibly convenient, it’s unlikely to deploy the latest patched version of that app: stacks are typically locked to major app versions, it takes time and effort to test each stack configuration, and frankly it’s up to end users to ensure their app stays up to date. WordPress, just like any other popular app, has vulnerabilities: both in core and in its many plug-ins. So, adding an extra level of protection in the form of Essential App Protect is really a no-brainer, as it shields against common attacks like XSS and SQL injection. Below are the “minimal steps” to create and set up a web application firewall (WAF) with my app:

Pre-requisites

F5 Cloud Services subscription: I started with an active subscription; if you don’t have one already, get your own here.
Access to update your DNS with a CNAME value: I’m using Amazon Route 53, but we can do the same on any other registrar like GoDaddy. You may need to talk to your IT/NetOps team if you are in a company that manages this kind of stuff for you.
Your app IP or domain: I’m using the AWS-provided IP address for my app, but we could also use a domain name value instead.
API interaction: I’m using cURL on my Mac, but it’s also native on Windows 10 1803 onwards. Of course, you can use your favorite way to interact with an API, such as Postman, Fiddler, or code.

Essential App Protect Setup

Part A: Login and Ready

We’ll use the F5 Cloud Services API to log in and retrieve a few values needed for creating our Essential App Protect subscription.

1.
Log in – use the username/password to authenticate and retrieve the authorization token that we’ll need in the header of the subsequent API calls to the F5 portal.

API Request: curl --location --request POST 'https://api.cloudservices.f5.com/v1/svc-auth/login' --data-raw '{"username": "", "password": "" }'

API Response (tokens cropped): We’ll save the token into a file ‘headers.txt’, which we can reference in the header of subsequent calls using cURL’s @filename feature (available as of 7.55.0). Our token needs to be stored in ‘headers.txt’ on a single line (no carriage returns) in this format: Authorization: Bearer <your token>. On a separate line we’ll add another header, such as: Content-Type: application/json. The resulting ‘headers.txt’ file would therefore look like this:

2. Get User Account ID – next, let’s retrieve the “ID” value for our account, which is one of the two values needed to subscribe to the Essential App Protect catalog and also to create a service subscription instance for our app.

API Request: curl --location --request GET 'https://api.cloudservices.f5.com/v1/svc-account/user' --header @headers.txt

API Response:

3. Get Catalogs – here we will retrieve the list of all available catalogs and get the catalog ID for the Essential App Protect service, which is designated with “service_type”:”waf”.

API Request: curl --location --request GET 'https://api.cloudservices.f5.com/v1/svc-catalog/catalogs' --header @headers.txt

API Response:

4. Subscribe to Catalog – this step can be skipped if you have already subscribed to the Essential App Protect catalog (in the portal or through the API). We will use the account “id” and the “catalog_id” values retrieved earlier.
API Request: curl --location --request POST 'https://api.cloudservices.f5.com/v1/svc-account/accounts/<your-account-id>/catalogs' --data-raw '{"account_id": "<your-account-id>","catalog_id": "c-aa9N0jgHI4"}' --header @headers.txt API Response: At this point we are logged in and subscribed to the Catalog. Next let’s create our service instance. Part B: Create and Activate Subscription 5. Create Subscription – Here we will use the account “id” and the “catalog_id” values retrieved earlier, plus a few other values for our app hosted on AWS. In the response we will need to capture the "subscription_id. API Request: curl --location --request POST 'https://api.cloudservices.f5.com/v1/svc-subscription/subscriptions' --data-raw '{ "account_id": "<your-account-id>", "catalog_id": "c-aa9N0jgHI4", "service_instance_name": "<descriptive name>", "service_type": "waf", "configuration": { "waf_service": { "application": { "domain": "<cool domain>", "remark": "<cool remark>", "waf_regions": { "aws": { "us-west-2": { "endpoint": { "ips": [ "<your ip here>" ], "port": 80, "use_TLS": false } } } } }, "event_logging": { "enabled": true }, "industry": "finance", "policy": { "compliance_enforcement": { "data_guard": { "cc": true, "enabled": true, "ssn": true }, "sensitive_parameters": { "enabled": true } }, "encoding": "utf-8", "high_risk_attack_mitigation": { "allowed_methods": { "enabled": true, "methods": [ { "contains_http_data": true, "name": "POST" }, { "contains_http_data": false, "name": "HEAD" }, { "contains_http_data": false, "name": "GET" } ] }, "api_compliance_enforcement": { "enabled": true }, "disallowed_file_types": { "enabled": true, "file_types": [ "exe", "com", "bat", "dll", "back", "cfg", "dat", "cmd", "bck", "eml", "bin", "config", "ini", "old", "sav", "save", "idq", "idc", "ida", "htw", "exe1", "exe_renamed", "hta", "htr" ] }, "enabled": true, "enforcement_mode": "blocking", "geolocation_enforcement": { "disallowed_country_codes": [], "enabled": true }, 
"http_compliance_enforcement": { "enabled": true }, "ip_enforcement": { "enabled": true, "ips": [ { "action": "block", "address": "178.18.62.195", "description": "This is anonymous proxy", "log": true }, { "action": "allow", "address": "1.2.3.5", "description": "some description", "log": false } ] }, "signature_enforcement": { "enabled": true }, "websocket_compliance_enforcement": { "enabled": true } }, "malicious_ip_enforcement": { "enabled": true, "enforcement_mode": "blocking", "ip_categories": [ { "block": true, "log": true, "name": "tor_proxies" }, { "block": false, "log": true, "name": "cloud_services" } ] }, "threat_campaigns": { "enabled": true, "enforcement_mode": "blocking" } } } } }' --header @headers.txt API Response (truncated): 6. Activate Subscription – Now we are ready to activate the instance using the “subscription_id” captured in the previous step. Note that if the returned “service_state” is “UNDEPLOYED” it just means it’s being activated, re-running the same API call should eventually return “service_state”: “DEPLOYED”. API Request: curl --location --request POST 'https://api.cloudservices.f5.com/v1/svc-subscription/subscriptions/<your-subscription-id>/activate' --data-raw '{ "subscription_id": "<your-subscription-id>", "omit_config": true }' --header @headers.txt API Response: With this our Essential App Protect service should be live, ready to accept requests, and should look like this in the portal: CNAME & Domain Update The only remaining thing is to retrieve the CNAME value of our live Essential App Protect service. This is what we can browse to test our site, and where we will need to send traffic from our domain. Let’s do it: 7. Get CNAME Value – using the same “subscription_id” from the previous step, let’s get the info for our service and retrieve “CNAMEValue”. API Request: curl --location --request GET 'https://api.cloudservices.f5.com/v1/svc-subscription/subscriptions/<your-subscription-id>' --header @headers.txt API Response: 8. 
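Before moving on, it helps to see the whole flow at a glance. The calls in Parts A and B reduce to a short ordered sequence against the endpoints documented above; the sketch below only builds the (method, URL) pairs and performs no live requests, with placeholder IDs standing in for your real account and subscription values.

```python
BASE = "https://api.cloudservices.f5.com/v1"

def onboarding_steps(account_id, subscription_id):
    """The API call sequence from this walkthrough, in order (no live calls)."""
    return [
        ("POST", BASE + "/svc-auth/login"),                 # 1. log in, save token to headers.txt
        ("GET",  BASE + "/svc-account/user"),               # 2. get the account id
        ("GET",  BASE + "/svc-catalog/catalogs"),           # 3. find catalog_id for service_type "waf"
        ("POST", BASE + "/svc-account/accounts/%s/catalogs" % account_id),  # 4. subscribe to catalog
        ("POST", BASE + "/svc-subscription/subscriptions"), # 5. create the subscription
        ("POST", BASE + "/svc-subscription/subscriptions/%s/activate" % subscription_id),  # 6. activate
        ("GET",  BASE + "/svc-subscription/subscriptions/%s" % subscription_id),  # 7. read CNAMEValue
    ]

# Print the sequence with placeholder IDs:
for method, url in onboarding_steps("<account-id>", "<subscription-id>"):
    print(method, url)
```

Each POST carries the bodies shown in the steps above; every call after the first sends the headers stored in ‘headers.txt’.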
Browser Check – let’s validate what it looks like in our browser (copy + paste the value of CNAMEValue). We can see our blog, and can even try to do something like adding a disallowed file type at the end of the URL.

9. Domain records update – finally, let’s update the Amazon Route 53 configuration for our site with the new CNAME. We will need to add a record of type CNAME and provide CNAMEValue. This will essentially route blog.haxrip.net traffic to Essential App Protect, which in turn will route it to the IP that we specified earlier.

Conclusion

Adding protection to a website with F5’s Essential App Protect is pretty straightforward and requires just a few API calls. If you’re running a web-facing app and don’t have the time or resources to keep it constantly updated to protect against known vulnerabilities, it’s a good idea to have extra protection in place against possible (and likely) attacks.

Leaked Credential Check with Advanced WAF
Description

In this article you will learn how to configure and use Leaked Credential Check (LCC). LCC provides access to a database of compromised credentials, which can be used to detect and prevent a credential stuffing attack. LCC is a subscription-based service which can be added to BIG-IP Advanced WAF.

Summary

Leaked Credential Check stops leaked or stolen credentials from being used to access personal or business applications. It automatically detects and mitigates compromised credential use. If compromised credentials are detected during an attempted login, Leaked Credential Check enables several mitigation options for SecOps teams to enact, individually or collectively, including:

Requiring the user to employ multi-factor authentication (MFA) before granting access.
Redirecting the user to another application page; for example, a customer support web page.
Responding to the suspicious login with a preset page requesting further action by the user, such as contacting customer support.
Blocking the user and their login from accessing the application.
Sending an alert to the SecOps team to take additional action.

This article assumes you have Advanced WAF configured and deployed for one or more virtual servers and that you have purchased the add-on subscription for LCC.

Typical Steps Involved in a Credential Stuffing Attack

High Level Network Topology

Configuration Steps

From the BIG-IP Configuration Utility select Security > Application Security > Security Policies > Policies List. Notice the policy name in this example is Leaked-Credential-Check. There are 2 virtual servers attached to this policy, vs_arcadia.emea.f5se.com_II and vs_Hackazon_IV. LCC is configured from Security > Cloud Services > Cloud Security Services Applications. Click the name of the Cloud Security Application, f5-credential-stuffing-cloud-app in this example. Note: if the application has not been created yet, click the Create button on the right.
Give it a name if creating a new app. Set the Service Type to Blackfish Credential Stuffing Service. Enter your API Key ID and Secret. Specify the Endpoint, f5-credential-stuffing-blackfish in this example. Click Save when done.

LCC is enabled in Security > Application Security > Brute Force Attack Prevention. Check the box to Enable Detection. Under Action you can choose different mitigation actions:

Alarm: report the Leaked Credentials Detection violation in the Event Log.
Alarm and Blocking Page: report the Leaked Credentials Detection violation in the Event Log and send the Blocking Response Page.
Alarm and Honeypot Page: report the Leaked Credentials Detection violation in the Event Log and send the Honeypot Response Page.
Alarm and Leaked Credentials Page: report the Leaked Credentials Detection violation in the Event Log and send the Leaked Credentials Page.

Select Learning and Blocking Settings to configure them. For Sessions and Logins, set Leaked Credentials Detection to Alarm and Block. The Honeypot Page and Leaked Credentials Page can be configured from Security > Application Security > Security Policies > Policies List. Select the Leaked-Credential-Check policy, then select Response and Blocking Pages on the left. Scroll down; the Failed Login Honeypot response and Leaked Credentials response can be configured here.

Test Leaked Credentials Detection

Attempt to log in to your web application using known leaked credentials. In this example we’ll use “HACKAZON”. Click the Sign In link near the top on the right. Attempt to log in using the following:

Username: demo33@fidnet.com
Password: mountainman01

The login should fail. Try to log in with the following credentials:

Username: admin
Password: 12345678

Check the BIG-IP Event Log

From the Configuration Utility go to Security > Event Logs > Application > Requests. There are two requests at the top that look important.
Select the first one. Here we can see details about the request. As suspected, the violation was due to the Leaked Credentials Detection policy. Scroll down under Request and you can see the username and password that triggered the violation. Now select the second one. As you can see, this violation was triggered by the login attempt with the username “demo33@fidnet.com”.

Conclusion

Congratulations! You have successfully configured and tested Leaked Credential Check.

Welcome to F5 Cloud Services
Rapid deployment, simplification, ease of use and dynamic security are the foundational pillars of the F5 Cloud Services "SaaS" portfolio. These services are optimized for cloud-native applications and microservices. They have a sleek modern interface, including an intuitive experience with low-touch configuration, or can be fully automated via comprehensive declarative APIs. Built in a pay-as-you-go model, Cloud Services offer predictable pricing, flexibility and the ability to auto-scale to meet changing demand. All Cloud Services are architected on an anycast network for global accessibility and scalability. They can be provisioned and configured in minutes through the Cloud Services Portal or the AWS Marketplace. F5 is removing the barriers to rapid technology adoption, so enterprises can deploy faster and with confidence. Features and capabilities include:

Easily provision and configure services within a few clicks.
Consumption-based pricing - only pay for what you use.
Robust security - applications are automatically protected from multiple attack vectors.
Global availability and diversity for high availability and responsiveness.
Automate everything and integrate directly into your application.
Track performance, usage and billing with detailed reports and visualization tools through a centralized dashboard.

F5 Cloud Services dashboard

The Cloud Services dashboard is the main landing page and enables users to:

Add, remove, enable, disable, and pause a service.
View essential status and performance data.
Manage users and roles.
View usage statistics for service subscriptions.
View billing information.

Expanded account levels

During registration you can create accounts based on three levels of business or organizational hierarchy:

Organization level account - Your organization has one account. Each user is associated with one organization account.
Division level account - Each organization account can have one or more division accounts.
Each user can be associated with one or more division accounts.
User level account - Each user has a user account.

Expanded user roles

Administrators, and privileged users to a lesser extent, can do the following:

Secure multi-factor authentication.
Add and remove service subscriptions.
Create and manage users according to the type of account.
Review usage-based billing and payments.
Review service metrics and other data on the dashboard.

All users can manage the Cloud Services they have been granted access to, and can see pertinent data for associated divisions.

F5 DNS Cloud Service

The DNS Cloud Service serves as a secondary DNS to your primary DNS. You must have a primary DNS, such as F5 BIG-IP, from which the DNS Cloud Service can transfer your zone configurations. This provides flexibility and redundancy, as you can use the DNS Cloud Service as needed; or you can hide your primary DNS and send all the traffic to the DNS Cloud Service. The DNS Cloud Service does not intercept or inspect network traffic. The service is delivered with anycast IP. Anycast ensures that clients making DNS requests are always querying the DNS server that is optimally situated for the best performance. Initially, each DNS instance subscribes to your DNS data ("zone file") from your private zone-master server, which maps domain names to IP addresses.

F5 DNS Load Balancer Cloud Service

The DNS Load Balancer Cloud Service ensures that end users are able to access your application using the highest performing instances. This service monitors the state of your application instances, and then directs client traffic to the best instance by manipulating the DNS responses. The best instance can be defined by rules that include the following factors:

Business policies
Datacenter locations
User location
Performance conditions

Similar to the DNS service, the DNS Load Balancer has built-in security features and leverages the global anycast network to ensure maximum security and responsiveness.
You no longer have to worry about the complexities of managing an on-premises load balancing infrastructure, as the DNS Load Balancer automatically scales to accommodate dynamically changing situations.

F5 Essential App Protect Service

The Essential App Protect Service provides security tools and services in a unified security portal that takes the complexity out of securing your applications with a simple, cloud-native SaaS platform. Essential App Protect offers a web application firewall (WAF) solution that is self-service, cloud-hosted, and signature-based. Essential App Protect is built and optimized for two types of users:

DevOps and application developers without a deep background in application security, requiring fast adoption and deployment.
Cloud architects defining enterprise standards in mid-to-large organizations.

The Essential App Protect Service allows users to create a fast, efficient, and optimized security policy addressing the known OWASP Top 10 vulnerabilities. It is accessible through both a modern UI and open APIs connecting to existing CI/CD pipelines.

Cloud Services Support

The F5 Cloud Services team is dedicated to providing customers an enjoyable and pain-free experience. Learn more about F5 Cloud Services. If you run into issues, ask a question on DevCentral or visit our Cloud Services Support Page.

Identity-Aware Proxy in the Public Cloud
Or: VPN Security without the VPN

Backstory

Many organizations have relied on Virtual Private Network (VPN) software to extend their enterprise networks out to remote workers. This is a time-honored strategy, but it comes with some significant caveats:

- Your current VPN solution may not be sized to support your full workforce connecting concurrently;
- Per-user VPN license costs can quickly add up;
- VPN software must be managed on all your end user devices;
- Bandwidth requirements at the head end of the VPN termination point;
- And, perhaps most importantly, you are extending your enterprise perimeter to devices that are also attached to untrusted networks.

To be clear, F5 is not saying to throw away the traditional VPN, but what we have realized for ourselves is that, with most applications being web-enabled and using encryption for transport, some of the downfalls of a full VPN can be avoided with a web-based portal system. Yes, there will be cases where you want a full VPN tunnel, but if we could minimize those, could we improve the user experience as well as security, while also reducing some of the overhead of VPN technologies? A VPN tunnel is indiscriminate in the traffic it allows to pass; without granular access control lists (and the maintenance they require) or split tunneling enabled, a compromised client could send any traffic it wants into your network, not just traffic for the applications you intend to expose. A hastily configured VPN creates a hole in your perimeter with a sledgehammer, when what you really want is a scalpel. Fortunately, there is another option. The Identity-Aware Proxy, which is a function of F5 Access Policy Manager (APM), allows secure, contextual, per-application access into the enterprise without the overhead of managing a full VPN tunnel.
No client-side software other than a web browser is needed, and because it's built on the full-proxy architecture of F5 BIG-IP, it's not dependent on a particular network topology and can be rapidly deployed in the public cloud for immediate scale. This article will take you through the combined F5 and public cloud solution, and how you can use the flexibility and scalability of the cloud to quickly get up and running.

Traditional VPN vs Identity-Aware Proxy

Traditional VPN

A traditional VPN relies on tunneling traffic from a remote client through your network perimeter into your network:

Figure 1: A traditional Layer 3 VPN

Modern VPNs use public-key cryptography over protocols like TLS or DTLS to provide confidentiality, and integrate with your corporate directory—and, ideally, a multi-factor authentication system—to authenticate the user attempting to connect. Once authentication is completed, however, a simple layer 3 connection is established, with little to no inspection or visibility into what is traversing that connection.

Identity-Aware Proxy

The Identity-Aware Proxy, by comparison, works at layer 7, sitting in the middle of the HTTPS conversation between your users and your applications, validating each request against the access policy you define:

Figure 2: BIG-IP Access Policy Manager as an Identity-Aware Proxy

Much like a traditional VPN, the Identity-Aware Proxy uses strong, modern TLS encryption and supports a broad array of multi-factor authentication solutions. Unlike a traditional VPN, there is no layer 3 tunnel created, and only valid HTTPS requests are forwarded. No client-side software is required. The biggest difference compared to a traditional VPN is that—instead of making an allow or deny decision at the beginning of a session—each request is validated against the access policy you define.
Throughout the lifetime of an access session, the proxy continually examines and assesses the requests being made by the client for authorization and context, including:

- IP Reputation: is this request coming from a known malicious source?
- Geo-blocking: is the request originating from a country or region where you don't normally provide services?
- Device security posture: does the client have the correct security and anti-malware services enabled?*
- Is the user a member of a group that is authorized to access this resource?

* Requires the F5 APM Helper App, included with APM

This is a core tenet of zero trust: never trust, always verify. Just because we let a user connect to our environment once doesn't mean that any subsequent requests they make should be allowed. Per-request policy enforcement enables this continuous and ongoing validation.

Our Example Environment

Figure 3: BIG-IP APM deployed in the public cloud

For this example, we're using the public cloud to quickly spin up a proxy that will allow users to connect to our applications inside the network without needing to deploy any additional hardware or software within our on-premises environment. We're using an IPsec VPN from the cloud to our on-prem firewall for backend communication, as it's quick to configure and widely supported by existing firewalls and routers. You could just as easily use other hybrid connectivity options like ExpressRoute or Direct Connect, or interconnect services through your ISP or a connectivity provider like Equinix. We're also using ADFS to provide federated login between your on-prem directory and BIG-IP in the cloud, but this isn't the only option. Active Directory, Azure AD, LDAP, Okta, SAML and OAuth/OIDC, amongst others, are all supported as user identity sources, as well as most multi-factor authentication systems like Azure MFA, Duo, SafeNet, and more.
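As a conceptual illustration of per-request enforcement, each request can be thought of as passing through a chain of contextual checks. This is a sketch of the idea only, not APM's actual policy engine or configuration; the check names, fields, and data are hypothetical:

```python
# Conceptual sketch of per-request policy evaluation (hypothetical;
# not how BIG-IP APM is actually implemented or configured).

BLOCKED_COUNTRIES = {"XX"}       # regions where you never provide service
KNOWN_BAD_IPS = {"192.0.2.66"}   # stand-in for an IP reputation feed

def evaluate_request(req, user):
    """Return (allowed, reason). Every request is re-checked, not just
    the first one in the session -- 'never trust, always verify'."""
    if req["src_ip"] in KNOWN_BAD_IPS:
        return False, "ip-reputation"
    if req["country"] in BLOCKED_COUNTRIES:
        return False, "geo-blocked"
    if not req["device_posture_ok"]:
        return False, "device-posture"
    if req["app_group"] not in user["groups"]:
        return False, "not-authorized"
    return True, "ok"
```

The key point the sketch captures is that the decision is a function of the current request's context, so a session that was clean at login is still denied the moment its source reputation, posture, or group membership changes.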
F5 in Public Cloud

The F5 BIG-IP software that powers the Identity-Aware Proxy is available natively in all major public clouds. Multiple deployment and licensing options are available, but for the purposes of this example, we'll be using the cloud providers' own marketplaces to rapidly deploy BIG-IP with hourly billing and no up-front costs. If you're looking to get started with BIG-IP quickly, hourly billing is a great option; however, if you're planning on running this environment for an extended period of time, or if you want more flexibility and more control over costs than the utility model provides, F5 offers several other consumption models.

Deploying BIG-IP

While we provide tools to generate your own BIG-IP images for deployment in public cloud, the cloud marketplaces are the simplest and quickest way to get BIG-IP into your cloud environment. With just a few clicks and answers to a few questions about your deployment, you can go from zero to deployed within minutes. This is the method we used in our example environment to get the proxy up and running as quickly as possible. Of course, that's not the only way to do it. If you want a more programmatic approach, we offer supported templates for the major cloud providers, as well as the F5 Automation Toolchain for declarative, repeatable deployments using tools like Terraform or Ansible. For more details on running BIG-IP in the cloud, refer to F5 CloudDocs for guides, examples, templates, and more.

Access Guided Configuration

There are many underlying configuration objects, policies, and profiles that need to be created to enable a granular, contextual access policy, but BIG-IP's Guided Configuration makes the configuration easy and intuitive.
Guided Configuration provides a simple, wizard-like interface that walks you through the steps to:

- Create the Identity-Aware Proxy;
- Integrate with an identity provider like Active Directory Federation Services, Okta, and OpenID Connect, amongst others;
- Enable multi-factor authentication for remote users (Azure MFA, Duo, RSA, SafeNet, etc.);
- Enable seamless Single Sign-On to apps behind the proxy;
- Map access requirements to applications inside your environment; and,
- Create contextual access rules (geo-blocking, IP reputation, device security posture, group membership, etc.).

With Guided Configuration, the underlying complexity of building a granular, per-request access policy is abstracted away. You don't need to be an F5 guru to securely deploy applications through the proxy.

Global Availability

Now that you've deployed your proxy, you want to make sure that it's scalable and resilient. Guided Configuration, templates, and the Automation Toolchain make deploying multiple instances across cloud environments easy, but directing traffic to those instances and getting users to the best proxy is another matter. If you've been working with F5 for a while, you're probably familiar with F5 DNS Services, which provide intelligent, dynamic DNS resolution for globally available applications. Amongst other features, F5 DNS provides Global Server Load Balancing (GSLB), which allows you to direct users to any one of several globally distributed endpoints for a particular service. In our environment we use GSLB to make our proxy redundant and globally available. Users are, by default, routed to the closest proxy to serve their request. If one instance fails, reaches capacity, or if that cloud region otherwise has a bad day, that failure is automatically detected and users are moved to an alternate site.
Figure 4: Global Availability with F5 Cloud Services DNS Load Balancing

If you don't already have F5 DNS in your environment, you don't need to run out and buy a bunch of BIG-IPs just to enable GSLB. The F5 Cloud Services DNS Load Balancer provides GSLB from an on-demand, usage-based SaaS platform. There's even a free tier that offers a load balancing configuration with up to 3 million DNS queries per month, making it easy to get started. In our environment, we use CNAME records to direct multiple applications through the same DNS Load Balancer, so depending on your usage requirements, you can add global availability to your proxy.

Summary

Hopefully this article has shown you how quick and easy it is to get going with F5 in the cloud. Between Infrastructure-as-a-Service from your cloud vendor, hourly billing through the cloud marketplace for BIG-IP, and F5 Cloud Services for global availability, you can deploy this entire solution on utility billing without needing to go through cumbersome procurement processes. Please let me know in the comments below how you're using cloud and infrastructure-on-demand to respond quickly to the changing needs of a workforce in these trying times. Stay safe, stay healthy, and wash your hands!

Multi-cluster Kubernetes/Openshift with GSLB-TOOL
Overview

This is article 1 of 2. GSLB-TOOL is an OSS project around BIG-IP DNS (GTM) and F5 Cloud Services' DNS LB GSLB products to provide GSLB functionality to Openshift/Kubernetes. GSLB-TOOL is a multi-cluster enabler. Doing multi-cluster with GSLB has the following advantages:

- Cross-cloud. Services are published in a coordinated manner while being hosted in any public cloud or private cloud.
- High degree of control. Publishing is done based on service name instead of IP address. Traffic is directed to a specific data center based on operational decisions such as service load, also allowing canary, blue/green, and A/B deployments across data centers.
- Stickiness. Regardless of topology changes in the network, clients will be consistently directed to the same data center.
- IP Intelligence. Clients can be redirected to the desired data center based on the client's location, and stats can be gathered for analytics.

The use cases covered by GSLB-TOOL are:

- Multi-cluster deployments
- Data center load distribution
- Enhanced customer experience
- Advanced blue/green, A/B and canary deployment options
- Disaster recovery
- Cluster migrations
- Kubernetes <-> Openshift migrations
- Container platform version migrations. For example, OCP 3.x to 4.x or OCP 4.x to 4.y.

GSLB-TOOL is implemented as a set of Ansible scripts and roles that can be used from the CLI or from a Continuous Delivery tool such as Spinnaker or Argo CD. The tool operates as glue between the Kubernetes/Openshift API and the GSLB API. GSLB-TOOL uses GIT as its source of truth to store its state, hence the GSLB state is not held in any specific cluster. The next figure shows a schema of it. It is important to emphasize that GSLB-TOOL is cross-vendor as well, since it can use any Ingress Controller or Router implementation. In other words, it is not necessary to use BIG-IP or NGINX for this. Moreover, a given cluster can have several Router/Ingress Controller instances from different vendors.
This is thanks to only using the Openshift/Kubernetes APIs when inquiring about the container routes deployed.

Usage

To better understand how GSLB-TOOL operates, it is important to remark on the following characteristics: GSLB-TOOL operates with project/namespace granularity, on a per-cluster basis. When operating with a cluster's project/namespace, it operates with all the L7 routes of that project/namespace at once. For example, the following command:

$ project-retrieve shop onprem

will retrieve all the L7 routes of the namespace shop from the onprem cluster. Operating per cluster/namespace simplifies management and mimics the behavior of Red Hat's Cluster Application Migration tool. In the next figure we can see the overall operations of GSLB-TOOL. At the top we can see in bold the names of the clusters (onprem and aws). In the figure these are only Openshift (aka OCP) clusters, but they could be any other Kubernetes as well. We can also see two sample projects/namespaces (Project A and Project B). Different clusters can have different namespaces as well. There are two types of commands/actions:

- The project-* commands operate on the Kubernetes/Openshift API and on the source of truth/GIT repository. These commands operate with project/namespace granularity. GSLB-TOOL doesn't modify your Openshift/K8s cluster; it only performs read-only operations.
- The gslb-* commands operate on the source of truth/GIT repository and on the GSLB API of choice, either BIG-IP or F5 Cloud Services. These commands operate with all the projects/namespaces of all clusters at once, either submitting or rolling back the changes in the GSLB backends. When GSLB-TOOL pushes the GSLB configuration, it either performs all changes or doesn't perform any. Thanks to the use of GIT, the gslb-rollback command easily reverts the configuration if desired. Actually, creating the backup of the previous step is only useful when using GSLB-TOOL without GIT, which is possible too.
GSLB-TOOL flexibility

GSLB-TOOL has been designed with flexibility in mind. This is reflected in many of its features:

- It is agnostic of the Router/Ingress Controller implementation.
- In the same GSLB domain, it supports vanilla Kubernetes and Openshift clusters concurrently.
- It is possible to have multiple Routers/Ingress Controllers in the same Kubernetes/Openshift cluster.
- It is possible to define multiple Availability Zones for a given Router/Ingress Controller.
- It can be easily modified, given that it is written in Ansible. Furthermore, the Ansible playbooks make use of template files that can be modified if desired.
- Multiple GSLB backends. At present GSLB-TOOL can use either F5 Cloud Services' DNS LB (a SaaS offering) or F5 BIG-IP DNS (aka GTM) by simply changing the value of the backend configuration option to either f5aas or bigip. All operations, configuration files, etc. remain the same. At present F5 BIG-IP DNS is recommended because it currently offers better monitoring options.
- Easiness to PoC. F5 Cloud Services' DNS LB can be used to test the tool, and later on you can switch to F5 BIG-IP DNS by simply changing the backend configuration option.

GSLB-TOOL L7 route inquire/config flexibility

It is especially important to have flexibility when configuring the L7 routes in our source of truth. We might be interested in the following scenarios for a given namespace:

- Homogeneous L7 routes across clusters - At times we expect that all clusters have the same L7 routes for a given namespace. This happens, for example, when all applications are the same in all clusters.
- Heterogeneous L7 routes across clusters - At times we expect that each cluster might have different L7 routes for a given namespace, when there are different versions of the applications (or different applications). This happens, for example, when we are testing new versions of the applications in one cluster and other clusters use the previous version.
To handle these scenarios, we have two strategies when populating the routes:

- project-retrieve – We use the information from the cluster's Route/Ingress API to populate GSLB.
- project-populate – We use the information from another cluster's Route/Ingress API to populate GSLB. The cluster from which we take the L7 routes is referred to as the reference cluster.

We exemplify these strategies in the following figure, where we use a configuration of two clusters (onprem and aws) and a single project/namespace. The L7 routes (either Ingress or Route resources) in these differ: the onprem cluster has two additional L7 routes (/shop and /checkout). We are going to populate our GSLB backend in three different ways:

In Example 1, we perform the following actions in sequence:

1. With project-retrieve web onprem we retrieve from the onprem cluster the L7 routes of the project web, and these are stored in the Git repository or source of truth.
2. Analogously, with project-retrieve web aws we retrieve from the aws cluster its L7 routes (only one in this case), and these are stored in the Git repository or source of truth.
3. We submit this configuration into the GSLB backend with gslb-commit. The GSLB backend expects that the onprem cluster has 3 routes and the aws backend 1 route. If the services are available, the health check results for both clusters will be green. Therefore the FQDN will return the IP addresses of both clusters' Routers/Ingress Controllers.

In Example 2, we use the project-populate strategy:

1. We perform the same first action as in Example 1.
2. With project-populate web onprem aws we indicate that we expect that the L7 routes defined in onprem are also available in the aws cluster, which is not the case. In other words, the onprem cluster is used as the reference cluster for aws.
3. After we submit the configuration to GSLB with gslb-commit, the health checks in the onprem cluster will succeed and will fail on aws because /shop and /checkout don't exist (an HTTP 404 is returned). Therefore the FQDN www.f5bddemos.io will return only the IP address of onprem. This will turn green automatically once we update the L7 routes and applications in aws.

In Example 3, we use the project-populate strategy again, but we use aws as the reference cluster:

1. Unlike in the previous examples, with project-retrieve web aws we retrieve the routes from the aws cluster.
2. With project-populate web aws onprem we do the reverse of step 2 in Example 2: we use aws as the reference for onprem instead.
3. After submission of the config with gslb-commit, given that onprem has the L7 route that aws has, the health checking will succeed.

For the sake of simplicity, in the examples above the projects/namespaces have been shown with only a single FQDN for all their L7 routes, but for a given namespace it is possible to have an arbitrary number of L7 routes and FQDNs. There is no limitation on this either.

Additional information

If you want to see GSLB-TOOL in practice, please check this video. For more information on this tool, please visit the GSLB-TOOL Home Page and its Wiki Page for additional documentation.

Adopting SRE practices with F5: Multi-cluster Blue-green deployment
In the last article, we covered blue-green deployment as the most straightforward SRE deployment model at a high level; here we are diving deeper into the details to see how F5 technologies enable this use case. Let's start off by looking at some of the key components.

F5 DNS Load Balancer Cloud Service (GSLB)

The first component of the solution is F5 Cloud Services. The DNS Load Balancer provides GSLB as a cloud-hosted SaaS service with built-in DDoS protection and an API-first approach. A blue-green deployment aims to minimize downtime due to app deployment, and there are some basic routing mechanisms out of the box with OpenShift that assist in this area. However, if we are looking for a swift routing switch with more flexibility and reliability across different OpenShift clusters, different clouds, or geo locations, this is when the F5 DNS Load Balancer Cloud Service comes into the picture.

Setting up DNS for F5 Cloud Services

This solution requires that your corporate DNS server delegates a DNS zone (aka subdomain) to the F5 DNS Load Balancer Cloud Service. An OpenShift cluster typically has its own domain created for the applications, for example: *.apps.<cluster name>.example.com. The end user, however, doesn't really use such a long name and instead queries for www.example.com. A CNAME record is often used to map one domain name (an alias) to another (the true domain name).
All set up, this is the DNS scenario:

In case the customer has more than one cluster, it requires one CNAME record per cluster, with requests load balanced among clusters. The drawbacks of this type of solution include:

- No comprehensive health checking and monitoring
- Unable to switch workloads across clusters at speed
- Lack of automation and integration with the OpenShift cluster

F5 Cloud Services provides these features in a multi-cluster and multi-cloud infrastructure around the globe with the ease of a SaaS solution, without the need for infrastructure modifications. You will set up your corporate DNS to use the F5 DNS Load Balancer Cloud Service as follows:

Here is a sample configuration for a Cloud/Corporate DNS:

You can register an F5 Cloud Services account, and then subscribe to the DNS Load Balancer service here: F5 Cloud Services

F5 GSLB tool for Ansible Automation

The blue-green deployment represents a sequence of steps to roll out your new application. The GSLB tool was developed to provide a common automation plane for both OpenShift and F5 Cloud Services. Leveraging the declarative APIs from the F5 DNS Load Balancer Cloud Service and OpenShift, we used Ansible to automate the process. It enables you to standardize and automate release and deployment by orchestrating the build, test, provisioning, configuration management, and deployment tools in your Continuous Delivery pipeline. More specifically, the GSLB tool automates your interaction with:

The OpenShift/K8s deployments:

- Retrieve Layer 7 routes from a given project/namespace and OpenShift cluster
- Copy Layer 7 routes of a given project/namespace from one OpenShift cluster to another

The F5 DNS Load Balancer Service:

- Create GSLB Load Balanced Records (LBRs) along with the needed pieces (monitors, IP endpoints, pools, etc.)
- Set the GSLB ratio for each deployment for a given project/namespace

The benefits of using the GSLB tool to automate the entire process:

- Improve speed and scale, especially with hundreds of OpenShift routes
- Eliminate room for human error
- Achieve deterministic and repeatable outcomes

I want to give credit to my colleague, Ulises Alonso Camaro, who developed the GSLB tool. Please refer to the GitHub for details of the GSLB tool, and the wiki on how to set up the tool and operation.

Build and Run the Blue-green Deployment

Now we can look at how we can use the F5 DNS Load Balancer Service and GSLB tool to canary test the new version and manipulate the traffic routing for blue-green deployment. In blue-green deployment, we deploy two versions of the application running simultaneously in two identical production environments called Blue (OpenShift Cluster 1) and Green (new OpenShift Cluster 2).

Step 1. Retrieve routes from the Blue cluster and push them to the F5 DNS Load Balancer Cloud Service

Once you have installed the GSLB tool and configured the deployment settings for your infrastructure, the first set of commands to run is:

./project-retrieve default aws1 && ./gslb-commit "publish routes from Blue cluster to F5 DNS load balancer"

These commands retrieve the OpenShift route(s) from your Blue cluster aws1, and then publish the retrieved routes into the F5 DNS Load Balancer Cloud Service.

Step 2. Retrieve routes from the Green cluster and push them to the F5 DNS Load Balancer Cloud Service

Enter the following commands:

./project-retrieve default aws2 && ./gslb-commit "publish routes from Green cluster to F5 DNS load balancer"

These commands will retrieve the OpenShift route(s) from your Green cluster aws2, and push them to the F5 DNS Load Balancer Cloud Services configuration.

Step 3.
Canary test the Green deployment

Enter the following command:

./project-ratios default '{"aws1": "90", "aws2": "10"}' && ./gslb-commit "canary testing blue and green clusters"

The commands will set the traffic ratio for the Blue (90%) and the Green deployment (10%) and publish the configuration. As you can see, the F5 DNS Load Balancer Cloud Service sets the traffic ratio for each endpoint accordingly.

Step 4. Switch traffic to Green

After the testing succeeds, it is time to switch production traffic to the Green cluster. Enter the following commands:

./project-evacuate default aws1 && ./gslb-commit "switch all traffic to green cluster"

The commands will switch the traffic completely from the Blue to the Green deployment.

More Architectural Patterns

There are many related patterns for blue-green deployment, each of which offers a different focus for an automated production deployment. Some example variants include:

- Infrastructure as Code (IaC). In this variant of the pattern, the release deployment target environment does not exist until it is created by the DevOps pipeline. Post deployment, the original 'blue' environment is scheduled for destruction once the 'green' environment is considered stable in production.
- Container-based Deployment. In this variant of the pattern, the release deployment target is represented as a collection of one or more containers. Post release, once the 'green' environment is considered stable in production, the containers represented by the 'blue' container group are scheduled for destruction.
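The ratio-based steering used in the canary step can be sketched conceptually. This is a minimal illustration of weighted DNS answers, not the DNS Load Balancer's actual algorithm; the cluster names match the example above but the selection logic is hypothetical:

```python
# Conceptual sketch of ratio-based (weighted) DNS steering, as used in
# the canary step above. Hypothetical illustration only.
import random

def pick_endpoint(ratios, rng=random.random):
    """Pick a cluster endpoint with probability proportional to its ratio.

    ratios: e.g. {"aws1": 90, "aws2": 10} -- roughly 90% of DNS answers
    point at the Blue cluster and 10% at the Green canary.
    """
    total = sum(ratios.values())
    threshold = rng() * total
    cumulative = 0
    for name, weight in ratios.items():
        cumulative += weight
        if threshold < cumulative:
            return name
    return name  # guard against floating-point edge cases
```

Over many resolutions the observed split approaches the configured 90/10 ratio, which is why a small fraction of real users exercise the Green deployment while the bulk of traffic stays on Blue.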
While Continuous Deployment (CD) is a natural fit for blue-green deployment, the F5 DNS Load Balancer Cloud Service combined with the GSLB tool can enable many possibilities and support a collection of architecture patterns, including:

- Migrate an application from a source cluster (OCP 3.x) to a destination cluster (OCP 4.x); refer here for details
- Migrate workloads from a Kubernetes cluster to an OpenShift cluster
- Modernize your application deployment with lift and shift: repackage your application running as a set of VMs into containers, and deploy them into an OpenShift or Kubernetes cluster
- Build into a CI/CD pipeline so that any future changes to the application are built and deployed automatically

We are continuously working on more usage patterns and will explore them in more detail in future blog posts.

What's next?

So, go ahead to the DevCentral GitHub repo, download the source code behind our technologies, and follow the guide to test it out in your own environment.

Visibility and Orchestration
Introduction

Applications living in public clouds such as AWS, Azure, GCP, and Alibaba Cloud, delivered via CI/CD pipelines and deployed using Kubernetes or Docker, introduce significant challenges from a security visibility and orchestration perspective. This article is the first of a series, and aims at defining the tenets of effective security visibility and exploring a journey to achieving end-to-end visibility into the application construct. The consideration of visibility from the inception of a service or application is crucial for a successful implementation of security. Comprehensive visibility enables security- and performance-oriented insights, generation of alerts, and orchestration of action based on pertinent events. The end goal of integrating visibility and orchestration in application management is to provide a complete framework for the operation of adaptive applications with an appropriate level of security.

Initial Premise and Challenge

For the purposes of this article, an application is defined as a collection of loosely coupled microservices (ref. https://en.wikipedia.org/wiki/Microservices) that communicate over the back- and/or front-end network to support the frontend. Microservices are standalone services that provide one or more capabilities of the application. They are deployed across multiple clouds and application management infrastructures. The following shows a sample application built on microservices (ref. https://github.com/GoogleCloudPlatform/microservices-demo). The challenge posed by the infrastructure above lies in providing comprehensive visibility for this application. From a confidentiality, integrity and availability standpoint, it is essential to have this visibility to ensure that the application is running optimally and that users are guaranteed a reasonable level of reliability.
Visibility

In this article we consider two distinct aspects of security: posture assessment and monitoring. The first is based on the review and assessment of the configuration of the security measures protecting the application. For example, the analysis aims to answer questions such as:

- Is the application protected by an advanced web application firewall?
- What threats are the application and/or the microservices protected from? DoS? Bots? Application vulnerabilities?
- Is the communication between microservices confidential?

The second aspect of security visibility is dynamic in nature. It relies on the collection of logs and intelligence from the infrastructure and control points. The information collected is then processed and presented in an intelligible fashion to the administrator. For example, log gathering aims to answer questions such as:

- How much of the traffic reaching my applications and their components is "good traffic", and how much of it is coming from bots and scans?
- Does the application traffic hitting my servers conform to what I am expecting? Is there any rogue user traffic attempting to exploit possible vulnerabilities to disrupt availability of my application or to gain access to the data it references?
- Are the protection policies in place appropriate and effective?

The figure below gives a graphical representation of possible feeds into the visibility infrastructure. Once the data is ingested and available from the assessment and monitoring, we can use it to produce reports, trigger alerts and extract insights. Strategies for reporting, alerting and insights will vary depending on requirements, infrastructure and organizational/operational imperatives. In following articles, we will offer different possibilities to generate and distribute reports, along with strategies for alerting and providing insights (dashboards, ChatOps, etc.).
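As a minimal sketch of the kind of log processing described above, ingested access logs can be summarized into "good traffic" versus bot and scan counts for a dashboard or report. The field names and classification rules here are hypothetical and deliberately simplistic; they do not represent any F5 product's detection logic:

```python
# Minimal sketch of summarizing ingested access logs for a visibility
# dashboard. Field names and classification rules are hypothetical.
from collections import Counter

SCANNER_AGENTS = ("sqlmap", "nikto", "masscan")
BOT_AGENTS = ("bot", "crawler", "spider")

def classify(entry):
    """Bucket a single access-log entry as scan, bot, or good traffic."""
    agent = entry.get("user_agent", "").lower()
    if any(s in agent for s in SCANNER_AGENTS):
        return "scan"
    if any(b in agent for b in BOT_AGENTS):
        return "bot"
    return "good"

def summarize(log_entries):
    """Produce per-category counts suitable for a report or dashboard."""
    return Counter(classify(e) for e in log_entries)
```

A real pipeline would of course rely on richer signals than the User-Agent string (behavioral analysis, reputation feeds, signatures), but the shape is the same: ingest, classify, aggregate, and surface the result to the operator.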
From an F5-centric perspective, the figure below shows the components that can contribute to the assessment and monitoring of the infrastructure:

Orchestration

In order to be able to consume security visibility with an adaptive application, it needs to be implemented as part of the CI/CD pipeline. The aim is to deploy it alongside the application. Any dev modification will trigger the inclusion and testing of the security visibility. Also, any change in the security posture (new rules, modified policy, etc.) will trigger the pipeline and hence the testing of deployment and validation of security efficacy. From an application deployment point of view, inserting F5 solutions into the application through the pipeline would look something like the picture below. In the picture above, BIG-IP, NGINX Plus / NGINX App Protect, F5 Cloud Services, and Silverline/Shape are inserted at key points in the infrastructure and provide control points for the traffic traversing the application. Each element in the chain also provides a point that will generate telemetry and other information regarding the component protected by the F5 element. In conjunction with the control points, security can also be provided through the implementation of an F5 Aspen Mesh service mesh for Kubernetes environments. The implementation of security is integrated into the CI/CD pipeline so as to tightly integrate testing and deployment of security with the application. An illustration of such a process is provided as an example in the figure below. In the figure above, changes in the security of the application are tightly coupled with the CI/CD pipeline. It is understood that this might not always be practical for different reasons; however, it is a desirable goal that simplifies security and overall operations considerably.
The figure shows that any change in the service or workload code, its security, or the API endpoints it serves triggers the pipeline for validation and deployment in the different environments (prod, pre-prod/staging, testing). The figure does not show the various integration points, testing tools (unit and integration testing) or static code analysis tools, as those are beyond the scope of this discussion.

Eventually, the integration of AI/ML and other tooling should enable security orchestration, automation and response (SOAR). With greater integration and automation with other network and infrastructure security tools (DoS prevention, firewall, CDN, DNS protection, intrusion prevention) and the automated framework, it is conceivable to offer important SOAR capabilities. Inserting and managing visibility, inference, and automation in the infrastructure enables coordinated and automated action on any element of the cyber kill chain, providing enhanced security and end-to-end remediation.

Journey

It is understood that the implementation of security frameworks cannot occur overnight. This is true for both existing and net-new infrastructure. The journey to integration will meet resistance from both the security (secdevops) group and the application (devops) organization. The former needs to adapt to tools that seem foreign but are in fact accessible and powerful. The latter feels that security is a hindrance that delays implementation of the application and is anything but agile. Neither sentiment is based on more than anecdotal evidence.

F5 offers a set of possible integrations for automated pipelines, including:

- A comprehensive automation toolchain for BIG-IP (link).
- F5-supported integrations with multiple infrastructure-as-code and automation tools such as Terraform (link) or Ansible (link).
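The coordinated, automated response that SOAR implies can be thought of as a dispatch table: a detected kill-chain stage maps to a remediation action carried out through the control points. The stage names and actions below are illustrative examples only, not part of any F5 API.

```python
# Hypothetical SOAR-style playbook: kill-chain stage -> automated remediation.
PLAYBOOK = {
    "reconnaissance": "rate-limit-source",
    "exploitation": "block-source-and-alert",
    "exfiltration": "isolate-workload-and-alert",
}

def respond(event):
    """Map a detected kill-chain stage to an automated remediation action.

    Unrecognized or missing stages fall back to logging only, so automation
    never takes an action it was not explicitly configured for.
    """
    return PLAYBOOK.get(event.get("stage"), "log-only")

print(respond({"stage": "exploitation"}))
```

A real deployment would replace the action strings with calls into the relevant control points (WAF policy updates, firewall rules, etc.) and feed the outcomes back into the visibility layer.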
- Integrations with all major cloud providers, including Amazon Web Services (link), Google Cloud Platform (link) and Microsoft Azure (link).
- Full RESTful APIs to manage all F5 products, including Cloud Services, Shape and NGINX Plus instances.

These integrations are supported and maintained by F5 and available across all products. The following are possible steps in the journey to achieve complete visibility with the use of F5 products; these steps, along with recommendations, will be discussed in greater detail in follow-up articles:

- Architect the insertion of F5 control points (Cloud Services, BIG-IP, NGINX, Shape, etc.) wherever possible. This differs somewhat based on the type of infrastructure (e.g. deployment in an on-premises datacenter versus cloud deployments).
- Automate the insertion and configuration of F5 control points, leveraging the centralized management platforms and other F5-provided tools to make the process easy.
- Automate the configuration of logging and telemetry of the different control points so that they feed into the centralized management platforms and other visibility tools such as Beacon, ELK or other 3rd-party utilities (e.g. SIEMs).
- Based on the available tools, define a logging/telemetry/signaling strategy to collect and build a global view of the infrastructure. This might include layering technologies to distill the information as it is interpreted and aggregated; for example, leverage BIG-IQ as the first line of logging and visibility for BIG-IP.
- Leveraging F5 Beacon or other 3rd-party tools, build custom dashboards offering various insights to each party involved in the delivery and security of applications.
- Enhance the visibility provided through a reporting strategy that allows sharing of information among different groups as well as different platforms (e.g. SIEMs).
- Build a centralized solution collecting metrics, logs and telemetry from the different platforms to build meaningful insights.
- (Optional) Integrate with flexible ops tools such as chatops (Slack, MS Teams, etc.) and other visibility tools via webhooks.
- (Optional) Integrate with ML/AI tools to speed the discovery and creation of insights.
- (Optional) Progressively build SOAR and other capabilities leveraging F5 and/or 3rd-party tools.

Keep in mind that applications are fluid and adaptive. This raises important challenges with regard to maintaining and adjusting the visibility so that it remains relevant and efficient, and it justifies integrating security into the CI/CD pipeline to remain nimble and adapt at the speed of business.

Conclusion

This article aims to help you define a journey toward building a comprehensive, resilient and scalable application security visibility framework. The context for this journey is that of adaptive applications that evolve constantly. Note that the same principles apply to common off-the-shelf applications as well, as they are extended and communicate with third-party applications.

The journey may begin by defining the goals for implementing security visibility and then a strategy to reach those goals. To be successful, the security strategy should include a “shift-left” plan to integrate into the CI/CD pipeline, removing friction and complexity from an implementation standpoint. Operational efficiency and usability should remain the guiding principles at the forefront of the strategy to ensure continued use, evolution and rapid change.

The next articles in this series will help you along this journey, leveraging F5 technologies such as BIG-IP, BIG-IQ, NGINX Plus, NGINX App Protect, Essential App Protect and other F5 Cloud Services to gather telemetry and configuration information. Other articles will discuss avenues for visibility and alerting services leveraging F5 services such as Beacon, as well as 3rd-party utilities and infrastructure.
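As a small illustration of the optional chatops step in the journey above, here is a sketch of building an alert payload for a Slack-style incoming webhook. The JSON shape (an object with a `text` field) follows Slack's incoming-webhook convention; the app name, severity and message are made-up examples, and actually POSTing the payload to a webhook URL is left out of this sketch.

```python
import json

def chat_alert_payload(app, severity, message):
    """Build a minimal Slack-style incoming-webhook payload for a security alert."""
    return json.dumps({"text": f"[{severity.upper()}] {app}: {message}"})

# Example (values are illustrative):
print(chat_alert_payload("storefront", "high", "WAF policy violation burst"))
```

The same formatting function could sit behind whatever alerting pipeline the visibility layer emits into, so chat notifications stay consistent with the reports and dashboards.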
At the heart of this article is the implementation of proxies and control points throughout the networked infrastructure to gather data and assess the application’s security posture. As a continuation of this discussion, the reader is encouraged to consider other infrastructure aspects such as static code analysis, infrastructure isolation (which process/container/hypervisor connects to which peer, etc.) and monitoring.