Advanced WAF v16.0 - Declarative API
Since v15.1 (as a draft feature), F5 BIG-IP Advanced WAF can import a declarative WAF policy in JSON format. Advanced WAF security policies can be deployed using this declarative JSON format, which makes them easy to integrate into a CI/CD pipeline: the declarative policies are kept in a source control system, for example Git, and imported into the BIG-IP. Using the provided declarative policy templates, you modify only the necessary parameters, save the JSON file, and import the updated security policy into your BIG-IP devices. The declarative policy copies the content of the template and layers your adjustments and modifications on top of it, so the templates let you concentrate only on the settings that need to be adapted for the specific application the policy protects.

This declarative WAF JSON policy is similar to the NGINX App Protect policy. You can find more information on the declarative policy here:

NAP: https://docs.nginx.com/nginx-app-protect/policy/
Adv. WAF: https://techdocs.f5.com/en-us/bigip-15-1-0/big-ip-declarative-security-policy.html

Audience

This guide is written for IT professionals who need to automate their WAF policy and are familiar with Advanced WAF configuration. These IT professionals can fill a variety of roles:

SecOps deploying and maintaining WAF policy in Advanced WAF
DevOps deploying applications in modern environments and willing to integrate Advanced WAF into their CI/CD pipeline
F5 partners who sell technology or create implementation documentation

This article covers how to PUSH/PULL a declarative WAF policy in Advanced WAF:

With Postman
With AS3

Table of contents

Upload Policy in BIG-IP
Check the import
Apply the policy
OpenAPI Spec File import
AS3 declaration
CI/CD integration
Find the Policy-ID
Update an existing policy
Video demonstration

First of all, you need a JSON WAF policy, as below:

```json
{
    "policy": {
        "name": "policy-api-arcadia",
        "description": "Arcadia API",
        "template": {
            "name": "POLICY_TEMPLATE_API_SECURITY"
        },
        "enforcementMode": "blocking",
        "server-technologies": [
            { "serverTechnologyName": "MySQL" },
            { "serverTechnologyName": "Unix/Linux" },
            { "serverTechnologyName": "MongoDB" }
        ],
        "signature-settings": {
            "signatureStaging": false
        },
        "policy-builder": {
            "learnOnlyFromNonBotTraffic": false
        }
    }
}
```

1. Upload Policy in BIG-IP

There are two options to upload a JSON file into the BIG-IP:

1.1 Either you PUSH the file into the BIG-IP and you IMPORT it, OR
1.2 the BIG-IP PULLs the file from a repository (and the IMPORT is included) <- BEST option

1.1 PUSH the JSON file into the BIG-IP

The call is below. As you can notice, it requires a 'Content-Range' header, whose value is 0-(filesize-1)/filesize. In the example below, the file size is 662 bytes. This is not easy to integrate into a CI/CD pipeline, which is why we created the PULL method (in v16.0) in addition to the PUSH.

```bash
curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/file-transfer/uploads/policy-api.json' \
--header 'Content-Range: 0-661/662' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--header 'Content-Type: application/json' \
--data-binary '@/C:/Users/user/Desktop/policy-api.json'
```
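If you want to script the PUSH method, the Content-Range value can simply be computed from the file size. Below is a minimal Python sketch of the same upload call — an illustration, not an official client. It assumes the requests library; the host, credentials, and file name are the lab placeholders used in this article, and the policy file is assumed small enough to be sent in a single chunk.

```python
#!/usr/bin/env python3
# Sketch only: computes the Content-Range header described above and pushes
# the JSON policy file to the BIG-IP. Host, credentials, and file name are
# the article's lab placeholders -- adjust for your environment.
import os
import requests

BIGIP = "10.1.1.12"
USER, PASSWORD = "admin", "admin"
POLICY_FILE = "policy-api.json"

size = os.path.getsize(POLICY_FILE)        # e.g. 662 bytes
content_range = f"0-{size - 1}/{size}"     # 0-(filesize-1)/filesize

with open(POLICY_FILE, "rb") as f:
    resp = requests.post(
        f"https://{BIGIP}/mgmt/tm/asm/file-transfer/uploads/{POLICY_FILE}",
        headers={"Content-Range": content_range,
                 "Content-Type": "application/json"},
        data=f.read(),
        auth=(USER, PASSWORD),
        verify=False,                      # lab only: self-signed certificate
    )
resp.raise_for_status()
print("uploaded", POLICY_FILE, "with Content-Range", content_range)
```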
At this stage, the policy is still a file in the BIG-IP file system. We need to import it into Adv. WAF. To do so, the next call is required. This call imports the file "policy-api.json" uploaded previously and CREATEs the policy /Common/policy-api-arcadia.

```bash
curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
--header 'Content-Type: application/javascript' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--data-raw '{
  "filename": "policy-api.json",
  "policy": {
    "fullPath": "/Common/policy-api-arcadia"
  }
}'
```

1.2 PULL the JSON file from a repository

Here, the JSON file is hosted somewhere (in GitLab or GitHub, for example) and the BIG-IP will pull it. The call is below. As you can notice, the call refers to the remote repo and the body is a JSON payload. Just change the link value to your JSON policy URL. With one call, the policy is PULLED and IMPORTED.

```bash
curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--data-raw '{
  "fileReference": {
    "link": "http://10.1.20.4/root/as3-waf/-/raw/master/policy-api.json"
  }
}'
```

A second version of this call exists and refers to the fullPath of the policy. It allows you to update the policy easily from a new version of the JSON file: one call for both the creation and the update. As you can notice below, we add the "policy":"fullPath" directive. The value of "fullPath" is the partition and the name of the policy set in the JSON policy file. This method is VERY USEFUL for CI/CD integrations.

```bash
curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--data-raw '{
  "fileReference": {
    "link": "http://10.1.20.4/root/as3-waf/-/raw/master/policy-api.json"
  },
  "policy": {
    "fullPath": "/Common/policy-api-arcadia"
  }
}'
```

2. Check the IMPORT

Check if the IMPORT worked. To do so, run the next call.

```bash
curl --location --request GET 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
--header 'Authorization: Basic YWRtaW46YWRtaW4='
```

You should see a 200 OK, with the content below (truncated in this example). Please notice the "status":"COMPLETED".

```json
{
    "kind": "tm:asm:tasks:import-policy:import-policy-taskcollectionstate",
    "selfLink": "https://localhost/mgmt/tm/asm/tasks/import-policy?ver=16.0.0",
    "totalItems": 11,
    "items": [
        {
            "isBase64": false,
            "executionStartTime": "2020-07-21T15:50:22Z",
            "status": "COMPLETED",
            "lastUpdateMicros": 1.595346627e+15,
            "getPolicyAttributesOnly": false,
...
```

From now on, your policy is imported and created in the BIG-IP. You can assign it to a virtual server as usual (imperative call or AS3 call), but in the next section I will show you how to create a service with AS3 that includes the WAF policy.

3. APPLY the policy

As you may know, a WAF policy needs to be applied after each change. This is the call.

```bash
curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/tasks/apply-policy/' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--data-raw '{"policy":{"fullPath":"/Common/policy-api-arcadia"}}'
```

4. OpenAPI spec file IMPORT

As you know, Adv. WAF supports OpenAPI spec files (2.0 and 3.0). Now, with the declarative WAF, we can import the OAS file as well. The BEST solution is to PULL the OAS file from a repo, and in most customer projects that will be the case. In the example below, the OAS file is hosted in SwaggerHub (think GitHub for Swagger files), but the file could reside in a private GitLab repo, for instance.
The URL of the project is: https://app.swaggerhub.com/apis/F5EMEASSA/Arcadia-OAS3/1.0.0-oas3
The URL of the OAS file is: https://api.swaggerhub.com/apis/F5EMEASSA/Arcadia-OAS3/1.0.0-oas3

This Swagger file (an OpenAPI 3.0 spec file) includes all of the application's URLs and parameters. What's more, it includes the documentation (for the NGINX APIm Dev Portal). Now, it is pretty easy to create a WAF JSON policy with the API Security template, referring to the OAS file. Below, you can notice the new section "open-api-files" with the link reference to SwaggerHub, and the new template POLICY_TEMPLATE_API_SECURITY. Now, when I upload, import, and apply the policy, Adv. WAF will download the OAS file from SwaggerHub and create the policy based on the API Security template.

```json
{
    "policy": {
        "name": "policy-api-arcadia",
        "description": "Arcadia API",
        "template": {
            "name": "POLICY_TEMPLATE_API_SECURITY"
        },
        "enforcementMode": "blocking",
        "server-technologies": [
            { "serverTechnologyName": "MySQL" },
            { "serverTechnologyName": "Unix/Linux" },
            { "serverTechnologyName": "MongoDB" }
        ],
        "signature-settings": {
            "signatureStaging": false
        },
        "policy-builder": {
            "learnOnlyFromNonBotTraffic": false
        },
        "open-api-files": [
            {
                "link": "https://api.swaggerhub.com/apis/F5EMEASSA/Arcadia-OAS3/1.0.0-oas3"
            }
        ]
    }
}
```

5. AS3 declaration

Now it is time to learn how we can do all of these steps in one call with AS3 (3.18 minimum). The documentation is here: https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/declarations/application-security.html?highlight=waf_policy#virtual-service-referencing-an-external-security-policy

With this AS3 declaration, we:

Import the WAF policy from an external repo
Import the Swagger file (if the WAF policy refers to an OAS file) from an external repo
Create the service

```json
{
    "class": "AS3",
    "action": "deploy",
    "persist": true,
    "declaration": {
        "class": "ADC",
        "schemaVersion": "3.2.0",
        "id": "Prod_API_AS3",
        "API-Prod": {
            "class": "Tenant",
            "defaultRouteDomain": 0,
            "API": {
                "class": "Application",
                "template": "generic",
                "VS_API": {
                    "class": "Service_HTTPS",
                    "remark": "Accepts HTTPS/TLS connections on port 443",
                    "virtualAddresses": ["10.1.10.27"],
                    "redirect80": false,
                    "pool": "pool_NGINX_API_AS3",
                    "policyWAF": {
                        "use": "Arcadia_WAF_API_policy"
                    },
                    "securityLogProfiles": [{
                        "bigip": "/Common/Log all requests"
                    }],
                    "profileTCP": {
                        "egress": "wan",
                        "ingress": {
                            "use": "TCP_Profile"
                        }
                    },
                    "profileHTTP": {
                        "use": "custom_http_profile"
                    },
                    "serverTLS": {
                        "bigip": "/Common/arcadia_client_ssl"
                    }
                },
                "Arcadia_WAF_API_policy": {
                    "class": "WAF_Policy",
                    "url": "http://10.1.20.4/root/as3-waf-api/-/raw/master/policy-api.json",
                    "ignoreChanges": true
                },
                "pool_NGINX_API_AS3": {
                    "class": "Pool",
                    "monitors": ["http"],
                    "members": [{
                        "servicePort": 8080,
                        "serverAddresses": ["10.1.20.9"]
                    }]
                },
                "custom_http_profile": {
                    "class": "HTTP_Profile",
                    "xForwardedFor": true
                },
                "TCP_Profile": {
                    "class": "TCP_Profile",
                    "idleTimeout": 60
                }
            }
        }
    }
}
```

6. CI/CD integration

As you can notice, it is very easy to create a service with a WAF policy pulled from an external repo, so it is easy to integrate these calls (or the AS3 call) into a CI/CD pipeline.
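If your pipeline is not Ansible-based, the same AS3 call can be issued from any HTTP client. Here is a minimal Python sketch of the POST that the Ansible playbook below performs — the host, credentials, and the as3.json file name are the article's lab placeholders, and the requests library is assumed.

```python
#!/usr/bin/env python3
# Sketch only: POSTs an AS3 declaration (as3.json) to the BIG-IP -- the same
# call the Ansible playbook below makes, just from plain Python.
import json
import requests

BIGIP = "10.1.1.12"
USER, PASSWORD = "admin", "admin"

with open("as3.json") as f:
    declaration = json.load(f)

resp = requests.post(
    f"https://{BIGIP}/mgmt/shared/appsvcs/declare",
    json=declaration,
    auth=(USER, PASSWORD),
    verify=False,          # lab only: self-signed certificate
)
print(resp.status_code)
print(resp.text)
```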
Below is an Ansible playbook example that runs the AS3 call above. That's it :)

```yaml
---
- hosts: bigip
  connection: local
  gather_facts: false
  vars:
    my_admin: "admin"
    my_password: "admin"
    bigip: "10.1.1.12"
  tasks:
    - name: Deploy AS3 WebApp
      uri:
        url: "https://{{ bigip }}/mgmt/shared/appsvcs/declare"
        method: POST
        headers:
          "Content-Type": "application/json"
          "Authorization": "Basic YWRtaW46YWRtaW4="
        body: "{{ lookup('file','as3.json') }}"
        body_format: json
        validate_certs: no
        status_code: 200
```

7. FIND the Policy-ID

When the policy is created, a Policy-ID is assigned. By default, this ID doesn't appear anywhere — neither in the GUI, nor in the response after the creation. You have to calculate it or ask for it. This ID is required for several actions in a CI/CD pipeline.

7.1 Calculate the Policy-ID

We created this Python script to calculate the Policy-ID. It is a hash of the policy name (including the partition). For the previously created policy named "/Common/policy-api-arcadia", the Policy-ID is "Ar5wrwmFRroUYsMA6DuxlQ".

Paste this Python code into a new waf-policy-id.py file and run it with: python3 waf-policy-id.py "/Common/policy-api-arcadia"
The outcome will be: The Policy-ID for /Common/policy-api-arcadia is: Ar5wrwmFRroUYsMA6DuxlQ

```python
#!/usr/bin/env python3
from hashlib import md5
import base64
import sys

# Policy-ID = base64(md5(full policy path)) with '=' padding stripped
pname = sys.argv[1]
policy_id = base64.b64encode(md5(pname.encode()).digest()).decode().replace("=", "")
print('The Policy-ID for', pname, 'is:', policy_id)
```

7.2 Retrieve the Policy-ID and fullPath with a REST API call

Make the call below, and you will see all the policy creation tasks in the response. Find yours and collect the policyReference directive. The Policy-ID is in the link value:

"link": "https://localhost/mgmt/tm/asm/policies/Ar5wrwmFRroUYsMA6DuxlQ?ver=16.0.0"

You can also see, at the end of the definition, the "fileReference" referring to the JSON file pulled by the BIG-IP. And please notice the "fullPath", required if you want to update your policy.

```bash
curl --location --request GET 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
--header 'Content-Range: 0-601/601' \
--header 'Authorization: Basic YWRtaW46YWRtaW4='
```

```json
{
    "isBase64": false,
    "executionStartTime": "2020-07-22T11:23:42Z",
    "status": "COMPLETED",
    "lastUpdateMicros": 1.595417027e+15,
    "getPolicyAttributesOnly": false,
    "kind": "tm:asm:tasks:import-policy:import-policy-taskstate",
    "selfLink": "https://localhost/mgmt/tm/asm/tasks/import-policy/B45J0ySjSJ9y9fsPZ2JNvA?ver=16.0.0",
    "filename": "",
    "policyReference": {
        "link": "https://localhost/mgmt/tm/asm/policies/Ar5wrwmFRroUYsMA6DuxlQ?ver=16.0.0",
        "fullPath": "/Common/policy-api-arcadia"
    },
    "endTime": "2020-07-22T11:23:47Z",
    "startTime": "2020-07-22T11:23:42Z",
    "id": "B45J0ySjSJ9y9fsPZ2JNvA",
    "retainInheritanceSettings": false,
    "result": {
        "policyReference": {
            "link": "https://localhost/mgmt/tm/asm/policies/Ar5wrwmFRroUYsMA6DuxlQ?ver=16.0.0",
            "fullPath": "/Common/policy-api-arcadia"
        },
        "message": "The operation was completed successfully. The security policy name is '/Common/policy-api-arcadia'. "
    },
    "fileReference": {
        "link": "http://10.1.20.4/root/as3-waf/-/raw/master/policy-api.json"
    }
}
```

8. UPDATE an existing policy

It is pretty easy to update the WAF policy from a new JSON file version. To do so, collect the "policy" and "fullPath" directives from the previous call (7.2 Retrieve the Policy-ID and fullPath with a REST API call); this is the path of the policy in the BIG-IP. Then run the call below — the same as 1.2 PULL the JSON file from a repository — but add the policy and fullPath directives. Don't forget to APPLY this new version of the policy (see 3. APPLY the policy).
```bash
curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--data-raw '{
  "fileReference": {
    "link": "http://10.1.20.4/root/as3-waf/-/raw/master/policy-api.json"
  },
  "policy": {
    "fullPath": "/Common/policy-api-arcadia"
  }
}'
```

TIP: this call can be used in place of the FIRST call when we created the policy (1.2 PULL the JSON file from a repository). But be careful: the fullPath is the name set in the JSON policy file. The two values need to match:

"name": "policy-api-arcadia" in the JSON policy file pulled by the BIG-IP
"policy":"fullPath" in the POST call

9. Video demonstration

In order to help you understand how this looks on the BIG-IP, I created a video covering four topics explained in this article:

The JSON WAF policy
Pull the policy from a remote repository
Update the WAF policy with a new version of the declarative JSON file
Deploy a full service with AS3 and a declarative WAF policy

At the end of this video, you will be able to adapt the REST declarative API calls to your infrastructure, in order to deploy protected services with your CI/CD pipelines.

Direct link to the video on the DevCentral YouTube channel: https://youtu.be/EDvVwlwEFRw
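And if you prefer code to video: the sketch below strings together the calls from sections 1.2, 2, and 3 — pull/import the policy, poll the import task until it reports COMPLETED, then apply it. It is an illustration only, not a hardened client: endpoints and placeholder values come from the examples above, the requests library is assumed, and the import POST is assumed to return the created task object with its "id", as shown in the section 7.2 response.

```python
#!/usr/bin/env python3
# Sketch only: pull/import a declarative policy (section 1.2), wait for the
# import task to finish (section 2), then apply it (section 3). Host,
# credentials, and URLs are the article's lab placeholders.
import time
import requests

BIGIP = "https://10.1.1.12"
POLICY_URL = "http://10.1.20.4/root/as3-waf/-/raw/master/policy-api.json"
FULL_PATH = "/Common/policy-api-arcadia"

s = requests.Session()
s.auth = ("admin", "admin")
s.verify = False            # lab only: self-signed certificate

# 1.2 PULL and import the JSON policy from the repository
task = s.post(f"{BIGIP}/mgmt/tm/asm/tasks/import-policy/",
              json={"fileReference": {"link": POLICY_URL},
                    "policy": {"fullPath": FULL_PATH}}).json()

# 2. Poll the import task until it reports COMPLETED (the POST is assumed to
#    return a task object with an "id", as in the section 7.2 response)
status = task
for _ in range(30):
    status = s.get(f"{BIGIP}/mgmt/tm/asm/tasks/import-policy/{task['id']}").json()
    if status.get("status") == "COMPLETED":
        break
    time.sleep(2)
print("import status:", status.get("status"))

# 3. APPLY the imported policy
s.post(f"{BIGIP}/mgmt/tm/asm/tasks/apply-policy/",
       json={"policy": {"fullPath": FULL_PATH}})
```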
What is an iApp?

iApp is a seriously cool, game-changing technology that was released in F5's v11. There are so many benefits to our customers with this tool that I am going to break it down over a series of posts. Today we will focus on what it is.

Hopefully you are already familiar with the power of F5's iRules technology. If not, here is a quick background. F5 products support a scripting language based on TCL. This language allows an administrator to tell their BIG-IP to intercept, inspect, transform, direct and track inbound or outbound application traffic. An iRule is the bit of code that contains the set of instructions the system uses to process data flowing through it, either in the header or payload of a packet. This technology allows our customers to solve real-time application issues, security vulnerabilities, etc. that are unique to their environment or are time sensitive.

An iApp is like iRules, but for the management plane. Again, there is a scripting language with which administrators can build instructions the system will use. But instead of describing how to process traffic, in the case of iApp, it is used to describe the user interface and how the system will act on information gathered from the user. The bit of code that contains these instructions is referred to as an iApp or iApp template.

A system administrator can use F5-provided iApp templates installed on their BIG-IP to configure a service for a new application. They will be presented with the text and input fields defined by the iApp author. Once complete, their answers are submitted, and the template implements the configuration. First an application service object (ASO) is created that ties together all the configuration objects which are created, like virtual servers and profiles. Each object created by the iApp is then marked with the ASO to identify its membership in the application for future management and reporting.

That about does it for what an iApp is... next up, how they can work for you.

What is an Application Delivery Controller - Part I
A Little History

Application delivery got its start in the form of network-based load balancing hardware, which is the essential foundation on which Application Delivery Controllers (ADCs) operate. The second iteration of purpose-built load balancing (following application-based proprietary systems) materialized in the form of network-based appliances. These are the true founding fathers of today's ADCs. Because these devices were application-neutral and resided outside of the application servers themselves, they could load balance using straightforward network techniques. In essence, these devices would present a "virtual server" address to the outside world, and when users attempted to connect, they would forward the connection to the most appropriate real server, doing bi-directional network address translation (NAT).

Figure 1: Network-based load balancing appliances.

With the advent of virtualization and cloud computing, the third iteration of ADCs arrived as software-delivered virtual editions intended to run on hypervisors. Virtual editions of application delivery services have the same breadth of features as those that run on purpose-built hardware and remove much of the complexity from moving application services between virtual, cloud, and hybrid environments. They allow organizations to quickly and easily spin up application services in private or public cloud environments.

Basic Application Delivery Terminology

It would certainly help if everyone used the same lexicon; unfortunately, every vendor of load balancing devices (and, in turn, ADCs) seems to use different terminology. With a little explanation, however, the confusion surrounding this issue can easily be alleviated.

Node, Host, Member, and Server

Most ADCs have the concept of a node, host, member, or server; some have all four, but they mean different things. There are two basic concepts that they all try to express. One concept — usually called a node or server — is the idea of the physical or virtual server itself that will receive traffic from the ADC. This is synonymous with the IP address of the physical server and, in the absence of an ADC, would be the IP address that the server name (for example, www.example.com) would resolve to. We will refer to this concept as the host.

The second concept is a member (sometimes, unfortunately, also called a node by some manufacturers). A member is usually a little more defined than a server/node in that it includes the TCP port of the actual application that will be receiving traffic. For instance, a server named www.example.com may resolve to an address of 172.16.1.10, which represents the server/node, and may have an application (a web server) running on TCP port 80, making the member address 172.16.1.10:80. Simply put, the member includes the definition of the application port as well as the IP address of the physical server. We will refer to this as the service.

Why all the complication? Because the distinction between a physical server and the application services running on it allows the ADC to individually interact with the applications rather than the underlying hardware or hypervisor. A host (172.16.1.10) may have more than one service available (HTTP, FTP, DNS, and so on). By defining each application uniquely (172.16.1.10:80, 172.16.1.10:21, and 172.16.1.10:53), the ADC can apply unique load balancing and health monitoring based on the services instead of the host.
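To restate that distinction in code form — purely illustrative, not an F5 API or configuration object — a host is just an address, while a service is an address plus a port, and it is the services that get monitored and balanced individually:

```python
# Illustrative only -- not an F5 API. A "host" is the server address itself;
# a "service" is a specific application on that host (address + port).
from dataclasses import dataclass

@dataclass(frozen=True)
class Host:
    address: str                      # the node/server, e.g. 172.16.1.10

@dataclass(frozen=True)
class Service:
    host: Host
    port: int                         # the member, e.g. 172.16.1.10:80

    def __str__(self) -> str:
        return f"{self.host.address}:{self.port}"

web = Service(Host("172.16.1.10"), 80)    # HTTP service
ftp = Service(Host("172.16.1.10"), 21)    # FTP service on the same host
print(web, ftp)                           # 172.16.1.10:80 172.16.1.10:21
```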
However, there are still times when being able to interact with the host (like low-level health monitoring or when taking a server offline for maintenance) is extremely convenient. Most load balancing-based technology uses some concept to represent the host, or physical server, and another to represent the services available on it — in this case, simply host and services.

Pool, Cluster, and Farm

Load balancing allows organizations to distribute inbound application traffic across multiple back-end destinations, including cloud deployments. It is therefore a necessity to have the concept of a collection of back-end destinations. Pools, as we will refer to them (also known as clusters or farms), are collections of similar services available on any number of hosts. For instance, all services that offer the company web page would be collected into a pool called "company web page" and all services that offer e-commerce services would be collected into a pool called "e-commerce." The key element here is that all systems have a collective object that refers to "all similar services" and makes it easier to work with them as a single unit. This collective object — a pool — is almost always made up of services, not hosts.

Virtual Server

Although not always the case, today the term virtual server usually means a server hosting virtual machines. It is important to note that, like the definition of services, a virtual server usually includes the application port as well as the IP address. The term "virtual service" would be more in keeping with the IP:Port convention; but because most vendors, ADC and cloud alike, use virtual server, this article uses virtual server as well.

Putting It All Together

Putting all of these concepts together makes up the basic steps in load balancing. The ADC presents virtual servers to the outside world. Each virtual server points to a pool of services that reside on one or more physical hosts.

Figure 2: Application delivery comprises four basic concepts — virtual servers, pool/cluster, services, and hosts.

While the diagram above may not be representative of a real-world deployment, it does provide the elemental structure for continuing a discussion about application delivery basics.

ps

Next Steps

Read What is Load Balancing? if you haven't already, and check out What is an Application Delivery Controller - Part II.

What is iCall?
tl;dr - iCall is BIG-IP's event-based granular automation system that enables comprehensive control over configuration and other system settings and objects.

The main programmability points of entrance for BIG-IP are the data plane, the control plane, and the management plane. My bare-bones description of the three:

Data Plane - Client/server traffic on the wire and flowing through devices
Control Plane - Tactical control of local system resources
Management Plane - Strategic control of distributed system resources

You might think iControl (our SOAP and REST API interface) fits the description of both the control and management planes, and whereas you'd be technically correct, iControl is better utilized as an external service in management or orchestration tools. The beauty of iCall is that it's not an API at all — it's lightweight, it's built in via tmsh, and it integrates seamlessly with the data plane where necessary (via iStats). It is what we like to call control plane scripting.

Do you remember relations and set theory from your early pre-algebra days? I thought so! Let me break it down in a helpful way:

P = {(data plane, iRules), (control plane, iCall), (management plane, iControl)}

iCall allows you to react dynamically to an event at a system level in real time. It can be as simple as generating a qkview in the event of a failover or executing a tcpdump on a server with too many failed logins. One use case I've considered from an operations perspective is, in the event of a core dump, to have iCall generate a qkview, take checksums of the qkview and the dump file, upload the qkview and generate a support case via the iHealth API, upload the core dumps to support via FTP with the case ID generated from iHealth, then notify the ops team with all the appropriate details. If I had a solution like that back in my customer days, it would have saved me 45 minutes easy each time this happened!

iCall Components

There are three components to iCall: events, handlers, and scripts.

Events

An event is really what drives the primary reason to use iCall over iControl. A local system event (whether it's a failover, excessive interface or application errors, or too many failed logins) would ordinarily just be logged or, from a system perspective, ignored altogether. But with iCall, events can be configured to force an action. At a high level, an event is "the message": some named object that has context (key value pairs), scope (pool, virtual, etc.), origin (daemon, iRules), and a timestamp. Events occur when specific, configurable, pre-defined conditions are met.

Example (placed in /config/user_alert.conf):

```
alert local-http-10-2-80-1-80-DOWN "Pool /Common/my_pool member /Common/10.2.80.1:80 monitor status down" {
    exec command="tmsh generate sys icall event tcpdump context { { name ip value 10.2.80.1 } { name port value 80 } { name vlan value internal } { name count value 20 } }"
}
```

Handlers

Within the iCall system, there are three types of handlers: triggered, periodic, and perpetual.

Triggered

A triggered handler is used to listen for and react to an event.

Example (goes with the event example from above):

```
sys icall handler triggered tcpdump {
    script tcpdump
    subscriptions {
        tcpdump {
            event-name tcpdump
        }
    }
}
```

Periodic

A periodic handler is used to react to an interval timer.

Example:

```
sys icall handler periodic poolcheck {
    first-occurrence 2017-07-14:11:00:00
    interval 60
    script poolcheck
}
```

Perpetual

A perpetual handler is used under the control of a daemon.
Example: handler perpetual core_restart_watch

```
sys icall handler perpetual core_restart_watch {
    script core_restart_watch
}
```

Scripts

And finally, we have the script! This is simply a tmsh script moved under the /sys icall area of the configuration that will "do stuff" in response to the handlers.

Example (continuing the tcpdump event and triggered handler from above):

```
modify script tcpdump {
    app-service none
    definition {
        set date [clock format [clock seconds] -format "%Y%m%d%H%M%S"]
        foreach var { ip port count vlan } {
            set $var $EVENT::context($var)
        }
        exec tcpdump -ni $vlan -s0 -w /var/tmp/${ip}_${port}-${date}.pcap -c $count host $ip and port $port
    }
    description none
    events none
}
```

Resources

iCall Codeshare
Lightboard Lessons on iCall
Threshold violation article highlighting periodic handler

What is Shape Security?
You heard the news that F5 acquired Shape Security, but what is Shape Security? Shape defends against malicious automation targeted at web and mobile applications.

Why defend against malicious automation?

Attackers use automation for all sorts of nefarious purposes. One of the most common and costly attacks is credential stuffing. On average, over eight million usernames and passwords are reported spilled or stolen per day. Attackers attempt to use these credentials across many websites, including those of financial institutions, retail, airlines, hospitality, and other industries. Because many people reuse usernames and passwords across many sites, these attacks are remarkably successful. And given the volume of the activity, criminals rely on automation to carry it out. (See the Shape 2018 Credential Spill Report.)

Other than credential stuffing, attackers use automation for fake account creation, scraping, verifying stolen credit cards, and all manner of fraud. Losses include system downtime from heavy bot load, financial liability from account takeover, and loss of customer trust.

What makes automation distinct as an attack vector?

Traditional attacks on websites, from XSS to SQL injection, depend upon malformed inputs and vulnerabilities in apps that allow such inputs through. In contrast, malicious automation scripts send well-formed inputs, inputs that are indistinguishable from what is sent by valid users. Even if an application checks user inputs, even if the developers have followed secure programming practices, the app remains vulnerable to malicious automation. Even though the inputs may not vary from those of valid users, the attackers can achieve great damage ranging from account takeover to other forms of online fraud and abuse.

What about traditional means of stopping bots? (CAPTCHAs and IP rate limits)

We all dislike CAPTCHAs, those annoying puzzles that force us to prove we're human. As it turns out, machine learning algorithms are now better at solving these puzzles than we humans are. Other than adding friction for real customers, CAPTCHAs accomplish little. (See How Cybercriminals Bypass CAPTCHA.)

Advanced attackers are also adept at bypassing IP rate limits. These criminals disguise their attacks through proxy services that make it look as if their requests are coming from thousands of valid residential IP addresses. Research by Shape shows that attackers reused IP addresses during a campaign only 2.2 times on average, well below any feasible rate limit. (See 5 Rando Stats from Watching eCrime All Day Every Day.) Many of these IP addresses are shared by real users because of NATing performed by ISPs. Therefore, blacklisting these IPs may interfere with real customers but will not stop sophisticated attackers.

Why is it so difficult to stop malicious automation?

With HTTP requests arriving from so many valid IP addresses containing inputs identical to valid requests, stopping malicious automation is no easy task. Attackers go to great lengths to simulate real users, from deploying real browsers to copying natural mouse movements to subtle randomization of behavior. With so much money at stake, attackers persist, retool, and continuously probe. The reality is that there is no silver bullet to stop such attacks, no one defense that will last for long.

Shape's ever-learning, ever-adapting system

Rather than depend upon a single countermeasure, Shape's system is strategically designed for continuous learning and adaptation.
Shape instruments mobile and web apps with code to collect signals that cover both browser environments and user behavior. A real-time rules engine processes these signals to detect bot activity and mitigate attacks. These signals feed into a data system that fuels multiple machine learning systems, the findings of which — guided by data scientists and domain experts — drive the development of new signals and new rules, ever improving the quality of data and decisions. As Shape has defended much of the Fortune 500 for several years now, battling the most advanced, persistent attackers, it has gone through many learning cycles, developing an extremely rich signal and rule set and refining the learning process. Stopping bots depends on a network effect gained from defending many enterprises and analyzing massive quantities of data.

How does the Shape system work? What are the components?

The Shape system for continuous learning and adaptation consists of four main components: the Shape Defense Engine, Client Signals, the Shape AI Cloud, and the Shape Protection Manager.

At the core of the Shape system is a Layer 7 scriptable reverse proxy, the Shape Defense Engine. Deployed as clusters either on premises or in the cloud, the Shape Defense Engine processes traffic, applies real-time rules for bot detection, serves Shape's JavaScript to browsers and mobile devices, and transmits telemetry to the Shape AI Cloud.

Shape's Client Signals are collected by JavaScript that utilizes remarkably sophisticated obfuscation. Based on a virtual machine implemented in JavaScript with opcodes randomized at frequent intervals, this technology makes reverse engineering extremely difficult and minimizes the window for exploitation. The obfuscation hides from attackers what signals Shape collects, leaving them groping in the dark to solve a complex multivariate problem. In addition to the JavaScript for web, Shape offers mobile SDKs for integration into iOS and Android devices. The SDKs utilize JavaScript loaded in web views to ensure that the signals collected can evolve without requiring new integrations and forced app upgrades.

The JavaScript collects signal data on the environment and user behavior and attaches it to HTTP requests to protected resources, such as login paths, paths for account creation, or paths that return data desired by scrapers. These requests with the signal data are routed through the Shape Defense Engine to determine whether the request is from a human or a bot.

The Shape Defense Engine sends telemetry to the Shape AI Cloud, a highly secure data system, where it is analyzed by multiple machine learning algorithms for the detection of patterns of automation and of fraud more broadly. The insights gained enable Shape to generate new rules for bot detection and to offer rich security intelligence reports to its customers.

The Shape Protection Manager (SPM), a web console, provides tools for deploying, configuring, and monitoring the Shape Defense Engine and for viewing analytics from the Shape AI Cloud. From the SPM, you can dig into attack traffic: see when and how often attackers retool, what paths attackers target, and which browsers and IP addresses they spoof. The SPM is the customer's window into the Shape system and the toolset for managing it.

What is next?

Deploying Shape involves routing protected HTTP traffic through the Shape Defense Engine, which is often done through the configuration of BIG-IP and NGINX.
In the next article in this series introducing Shape Security, we'll learn best practices for integrating Shape.

Related Information

F5 Bot Defense and Management
F5 Fraud and Risk Mitigation (Distributed Cloud)
OWASP Top-21 Automated Threats (Distributed Cloud)
F5 XC Bots & Fraud
OWASP Automated Threats

From ASM to Advanced WAF: Advancing your Application Security
TL;DR: As of April 01, 2021, F5 has officially placed Application Security Manager (ASM) into End of Sale (EoS) status, signifying the eventual retirement of the product (F5 Support Announcement - K72212499). Existing ASM or BEST bundle customers under a valid support contract running BIG-IP version 14.1 or greater can simply reactivate their licenses to instantly upgrade to Advanced WAF (AdvWAF), completely free of charge.

Introduction

Protecting your applications is becoming more challenging every day; applications are getting more complex, and attackers are getting more advanced. Over the years we have heard your feedback that managing a Web Application Firewall (WAF) can be cumbersome and that you needed new solutions to protect against the latest generation of attacks. Advanced Web Application Firewall, or AdvWAF, is an enhanced version of the Application Security Manager (ASM) product that introduces new attack mitigation techniques and many quality-of-life features designed to reduce operational overhead. On April 01, 2021, F5 started providing free upgrades for existing Application Security Manager customers to the Advanced WAF license.

Keep on reading for:

A brief history of ASM and AdvWAF
How the AdvWAF license differs from ASM (ASM vs AdvWAF)
How to determine if your BIG-IPs are eligible for this free upgrade
Performing the license upgrade

How did we get here?

For many years, ASM has been the gold-standard Web Application Firewall (WAF) used by thousands of organizations to help secure their most mission-critical web applications from would-be attackers. F5 acquired the technology behind ASM in 2004 and subsequently 'baked' it into the BIG-IP product, immediately becoming the leading WAF product on the market. In 2018, after nearly 14 years of ASM development, F5 released the new Advanced WAF license to address the latest threats. Since that release, both ASM and AdvWAF have coexisted, granting customers the flexibility to choose between the traditional or enhanced versions of the BIG-IP WAF product. As new features were released, they were almost always unique to AdvWAF, creating further divergence as time went on, and often sparking a few common questions (all of which we will inevitably answer in this very article) such as:

Is ASM going away?
What is the difference between ASM and AdvWAF?
Will feature X come to ASM too? I need it!
How do I upgrade from ASM to AdvWAF?
Is the BEST bundle no longer really the BEST?

To simplify things for our customers (and us too!), we decided to announce ASM as End of Sale (EoS), starting on April 01, 2021. This milestone, for those unfamiliar, means that the ASM product can no longer be purchased after April 01 of this year — it is in the first of 4 stages of product retirement. An important note is that no new features will be added to ASM going forward.

So, what's the difference?

A common question we get is "How do I migrate my policy from ASM to AdvWAF?" The good news is that the policies are functionally identical, running on BIG-IP, with the same web interface, and have the same learning engine and underlying behavior. In fact, our base policies can be shared across ASM, AdvWAF, and NGINX App Protect (NAP). The AdvWAF license simply unlocks additional features beyond what ASM has, that is it — all the core behaviors of the two products are identical otherwise.
So, if an engineer is certified in ASM and has managed ASM security policies previously, they will be delighted to find that nothing has changed except for the addition of new features. This article does not aim to provide an exhaustive list of every feature difference between ASM and AdvWAF. Instead, below is a list of the most popular features introduced in the AdvWAF license that we hope you can take advantage of. At the end of the article, we provide more details on some of these features:

Secure Guided Configurations
Unlimited L7 Behavioral DoS
DataSafe (client-side encryption)
OWASP Compliance Dashboard
Threat Campaigns (includes Bot Signature updates)
Additional ADC functionality
Micro-services protection
Declarative WAF automation

I'm interested, what's the catch?

There is none! F5 is a security company first and foremost, with a mission to provide the technology necessary to secure our digital world. By providing important usability enhancements like Secure Guided Config and the OWASP Compliance Dashboard for free to existing ASM customers, we aim to reduce the operational overhead associated with managing a WAF and help make applications safer than they were yesterday - it's a win-win.

If you currently own a STANDALONE, ADD-ON, or BEST bundle ASM product running version 14.1 or later with an active support contract, you are eligible to take advantage of this free upgrade. This upgrade does not apply to customers running ELA licensing or standalone ASM subscription licenses at this time. If you are running a BIG-IP Virtual Edition, you must be running at least a V13 license. To perform the upgrade, all you need to do is simply REACTIVATE your license, THAT IS IT! There is no time limit to perform the license reactivation, and this free upgrade offer does not expire.

*Please keep in mind that re-activating your license does trigger a configuration load event which will cause a brief interruption in traffic processing; thus, it is always recommended to perform this in a maintenance window.

Step 1:

Step 2: Choose "Automatic" if your BIG-IP can communicate outbound to the Internet and talk to the F5 Licensing Server. Choose "Manual" if your BIG-IP cannot reach the F5 Licensing Server directly through the Internet. Click Next and the system will re-activate your license.

After you've completed the license reactivation, the quickest way to know if you now have AdvWAF is by looking under the Security menu. If you see "Guided Configuration", the license upgrade was completed successfully. You can also log in to the console and look for the following feature flags in the /config/bigip.license file to confirm it was completed successfully by running:

```bash
grep -e waf_gc -e mod_waf -e mod_datasafe bigip.license
```

You should see the following flags set to enabled:

Waf_gc: enabled
Mod_waf: enabled
Mod_datasafe: enabled

*Please note that the GUI will still reference ASM in certain locations, such as on the resource provisioning page; this is not an indication of any failure to upgrade to the AdvWAF license.

*Under Resource Provisioning you should now see that FPS is licensed. This will need to be provisioned if you plan on utilizing the new AdvWAF DataSafe feature, explained in more detail in the Appendix below.

For customers with a large install base, you can perform license reactivation through the CLI. Please refer to the following article for instructions: https://support.f5.com/csp/article/K2595
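If you have many devices to verify after reactivation, the same grep check shown above can be scripted. The sketch below is illustrative only: it assumes the paramiko SSH library, assumes the account's login shell is bash (advanced shell), and uses placeholder device addresses and credentials.

```python
#!/usr/bin/env python3
# Sketch only: runs the license-flag check from above against a list of
# BIG-IPs over SSH. Assumes paramiko and a bash login shell for the account;
# device list and credentials are placeholders.
import paramiko

BIGIPS = ["10.1.1.12", "10.1.1.13"]       # hypothetical device list
USER, PASSWORD = "admin", "admin"
CHECK = "grep -e waf_gc -e mod_waf -e mod_datasafe /config/bigip.license"

for host in BIGIPS:
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(host, username=USER, password=PASSWORD)
    _, stdout, _ = ssh.exec_command(CHECK)
    lines = stdout.read().decode().lower().splitlines()
    ssh.close()
    # each flag should appear on a line that also says "enabled"
    ok = all(any(flag in ln and "enabled" in ln for ln in lines)
             for flag in ("waf_gc", "mod_waf", "mod_datasafe"))
    print(f"{host}: {'AdvWAF flags enabled' if ok else 'check manually'}")
```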
Conclusion

F5 Advanced WAF is an enhanced WAF license now available for free to all existing ASM customers running BIG-IP version 14.1 or greater, requiring only a simple license reactivation. The AdvWAF license will provide immediate value to your organization by delivering visibility into the OWASP Top 10 compliance of your applications, configuration wizards designed to build robust security policies quickly, enhanced automation capabilities, and more. If you are running ASM with BIG-IP version 14.1 or greater, what are you waiting for? (Please DO wait for your change window though 😊)

Acknowledgments

Thanks to Brad Scherer, John Marecki, Michael Everett, and Peter Scheffler for contributing to this article!

Appendix: More details on select AdvWAF features

Guided Configurations

One of the most common requests we hear is, "Can you make WAF easier?" If there were such a thing as an easy button for WAF configurations, Guided Configurations are that button. Guided Configurations easily take you through complex configurations for various use cases such as Web Apps, OWASP Top 10, API Protection, DoS, and Bot Protection.

L7 DoS – Behavioral DoS

Unlimited Behavioral DoS (BaDoS) provides automatic protection against DoS attacks by analyzing traffic behavior using machine learning and data analysis. With ASM you were limited to applying this type of DoS profile to a maximum of 2 virtual servers. The AdvWAF license completely unlocks this capability, removing the 2-virtual-server limitation from ASM. Working together with other BIG-IP DoS protections, Behavioral DoS examines traffic flowing between clients and application servers in data centers, and automatically establishes the baseline traffic/flow profiles for Layer 7 (HTTP) and Layers 3 and 4.

DataSafe (*FPS must be provisioned)

DataSafe is best explained as real-time L7 data encryption, designed to protect websites from Trojan attacks by encrypting data at the application layer on the client side. Encryption is performed on the client side using a public key generated by the BIG-IP system and provided uniquely per session. When the encrypted information is received by the BIG-IP system, it is decrypted using a private key that is kept on the server side. It is intended to protect passwords, PINs, PII, and PHI so that if any information is compromised via MITB or MITM attacks, it is useless to the attacker. DataSafe is included with the AdvWAF license, but the Fraud Protection Service (FPS) must be provisioned by going to System > Resource Provisioning.

OWASP Compliance Dashboard

Think your policy is air-tight? The OWASP Compliance Dashboard details the coverage of each security policy for the top 10 most critical web application security risks, as well as the changes needed to meet OWASP compliance. Using the dashboard, you can quickly improve security risk coverage and perform security policy configuration changes.

Threat Campaigns (includes Bot Signature updates)

Threat Campaigns allow you to do more with fewer resources. This feature is unlocked with the AdvWAF license; it does, however, require an additional paid subscription above and beyond that, and this paid subscription does NOT come with the free AdvWAF license upgrade. F5's Security Research Team (SRT) discovers attacks with honeypots, performs analysis, and creates attack signatures you can use with your security policies.
These signatures come with an extremely low false-positive rate, as they are strictly based on REAL attacks observed in the wild. The Threat Campaign subscription also adds bot signature updates as part of the solution.

Additional ADC Functionality

The AdvWAF license comes with all of the Application Delivery Controller (ADC) functionality required to both deliver and protect a web application. An ASM standalone license came with only a very limited subset of ADC functionality — a limit on the number of pool members, zero persistence profiles, and very few load balancing methods, just to name a few. This meant that you almost certainly required a Local Traffic Manager (LTM) license in addition to ASM to successfully deliver an application. The AdvWAF license removes many of those limitations: unlimited pool members, all HTTP/web-pertinent persistence profiles, and most load balancing methods, for example.

What is Load Balancing?
tl;dr - Load balancing is the process of distributing data across disparate services to provide redundancy, reliability, and improved performance.

The entire intent of load balancing is to create a system that virtualizes the "service" from the physical servers that actually run that service. A more basic definition is to balance the load across a bunch of physical servers and make those servers look like one great big server to the outside world. There are many reasons to do this, but the primary drivers can be summarized as "scalability," "high availability," and "predictability."

Scalability is the capability of dynamically, or easily, adapting to increased load without impacting existing performance. Service virtualization presented an interesting opportunity for scalability; if the service, or the point of user contact, was separated from the actual servers, scaling of the application would simply mean adding more servers or cloud resources, which would not be visible to the end user.

High Availability (HA) is the capability of a site to remain available and accessible even during the failure of one or more systems. Service virtualization also presented an opportunity for HA; if the point of user contact was separated from the actual servers, the failure of an individual server would not render the entire application unavailable.

Predictability is a little less clear, as it represents pieces of HA as well as some lessons learned along the way. However, predictability can best be described as the capability of having confidence and control in how the services are being delivered and when they are being delivered, in regards to availability, performance, and so on.

A Little Background

Back in the early days of the commercial Internet, many would-be dot-com millionaires discovered a serious problem in their plans. Mainframes didn't have web server software (not until the AS/400e, anyway) and even if they did, they couldn't afford them on their start-up budgets. What they could afford was standard, off-the-shelf server hardware from one of the ubiquitous PC manufacturers. The problem for most of them? There was no way that a single PC-based server was ever going to handle the amount of traffic their idea would generate, and if it went down, they were offline and out of business. Fortunately, some of those folks actually had plans to make their millions by solving that particular problem; thus was born the load balancing market.

In the Beginning, There Was DNS

Before there were any commercially available, purpose-built load balancing devices, there were many attempts to utilize existing technology to achieve the goals of scalability and HA. The most prevalent, and still used, technology was DNS round-robin. The domain name system (DNS) is the service that translates human-readable names (www.example.com) into machine-recognized IP addresses. DNS also provided a way in which each request for name resolution could be answered with multiple IP addresses in a different order.

Figure 1: Basic DNS response for redundancy

The first time a user requested resolution for www.example.com, the DNS server would hand back multiple addresses (one for each server that hosted the application) in order, say 1, 2, and 3. The next time, the DNS server would give back the same addresses, but this time as 2, 3, and 1.
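The rotation itself is easy to picture in code. The snippet below is purely illustrative — a real DNS server does this internally — and simply shows how the same set of example A records gets reordered on successive responses:

```python
# Illustrative only: how a DNS server rotates the same set of A records
# across successive responses (the "1, 2, 3" then "2, 3, 1" behaviour above).
from collections import deque

records = deque(["192.0.2.1", "192.0.2.2", "192.0.2.3"])   # example addresses

def respond() -> list:
    """Return the full record set, then rotate it for the next query."""
    answer = list(records)
    records.rotate(-1)          # move the first address to the back
    return answer

for query in range(3):
    print(f"response {query + 1}: {respond()}")
# response 1: ['192.0.2.1', '192.0.2.2', '192.0.2.3']
# response 2: ['192.0.2.2', '192.0.2.3', '192.0.2.1']
# response 3: ['192.0.2.3', '192.0.2.1', '192.0.2.2']
```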
This solution was simple and provided the basic characteristics customers were looking for by distributing users sequentially across multiple physical machines, using the name as the virtualization point. From a scalability standpoint, this solution worked remarkably well, which is probably why derivatives of this method are still in use today, particularly in regards to global load balancing or the distribution of load to different service points around the world. As the service needed to grow, all the business owner needed to do was add a new server, include its IP address in the DNS records, and voila, increased capacity. One note, however: DNS responses do have a maximum length that is typically allowed, so there is a potential to outgrow or scale beyond this solution.

This solution did little to improve HA. First off, DNS has no capability of knowing whether the servers listed are actually working or not, so if a server became unavailable and a user tried to access it before the DNS administrators knew of the failure and removed it from the DNS list, they might get an IP address for a server that didn't work.

Proprietary Load Balancing in Software

One of the first purpose-built solutions to the load balancing problem was the development of load balancing capabilities built directly into the application software or the operating system (OS) of the application server. While there were as many different implementations as there were companies who developed them, most of the solutions revolved around basic network trickery. For example, one such solution had all of the servers in a cluster listen to a "cluster IP" in addition to their own physical IP address.

Figure 2: Proprietary cluster IP load balancing

When the user attempted to connect to the service, they connected to the cluster IP instead of to the physical IP of the server. Whichever server in the cluster responded to the connection request first would redirect them to a physical IP address (either their own or another system in the cluster) and the service session would start. One of the key benefits of this solution is that the application developers could use a variety of information to determine which physical IP address the client should connect to. For instance, they could have each server in the cluster maintain a count of how many sessions each clustered member was already servicing and have any new requests directed to the least utilized server.

Initially, the scalability of this solution was readily apparent. All you had to do was build a new server, add it to the cluster, and you grew the capacity of your application. Over time, however, the scalability of application-based load balancing came into question. Because the clustered members needed to stay in constant contact with each other concerning who the next connection should go to, the network traffic between the clustered members increased exponentially with each new server added to the cluster. The scalability was great as long as you didn't need to exceed a small number of servers.

HA was dramatically increased with these solutions. However, since each iteration of intelligence-enabling HA characteristics had a corresponding server and network utilization impact, this also limited scalability. The other negative HA impact was in the realm of reliability.

Network-Based Load Balancing Hardware

The second iteration of purpose-built load balancing came about as network-based appliances.
These are the true founding fathers of today's Application Delivery Controllers. Because these boxes were application-neutral and resided outside of the application servers themselves, they could achieve their load balancing using much more straightforward network techniques. In essence, these devices would present a virtual server address to the outside world, and when users attempted to connect, they would forward the connection to the most appropriate real server, doing bi-directional network address translation (NAT).

Figure 3: Load balancing with network-based hardware

The load balancer could control exactly which server received which connection and employed "health monitors" of increasing complexity to ensure that the application server (a real, physical server) was responding as needed; if not, it would automatically stop sending traffic to that server until it produced the desired response (indicating that the server was functioning properly). Although the health monitors were rarely as comprehensive as the ones built by the application developers themselves, the network-based hardware approach could provide at least basic load balancing services to nearly every application in a uniform, consistent manner — finally creating a truly virtualized service entry point unique to the application servers serving it.

Scalability with this solution was limited only by the throughput of the load balancing equipment and the networks attached to it. It was not uncommon for an organization replacing software-based load balancing with a hardware-based solution to see a dramatic drop in the utilization of their servers. HA was also dramatically reinforced with a hardware-based solution. Predictability was a core component added by the network-based load balancing hardware, since it was much easier to predict where a new connection would be directed and much easier to manipulate.

The advent of the network-based load balancer ushered in a whole new era in the architecture of applications. HA discussions that once revolved around "uptime" quickly became arguments about the meaning of "available" (if a user has to wait 30 seconds for a response, is it available? What about one minute?). This is the basis from which Application Delivery Controllers (ADCs) originated.

The ADC

Simply put, ADCs are what all good load balancers grew up to be. While most ADC conversations rarely mention load balancing, without the capabilities of the network-based hardware load balancer, they would be unable to affect application delivery at all. Today, we talk about security, availability, and performance, but the underlying load balancing technology is critical to the execution of all.

Next Steps

Ready to plunge into the next level of load balancing? Take a peek at these resources:

Go Beyond POLB (Plain Old Load Balancing)
The Cloud-Ready ADC
BIG-IP Virtual Edition Products, The Virtual ADCs Your Application Delivery Network Has Been Missing
Cloud Balancing: The Evolution of Global Server Load Balancing

What is iControl?
So, you've arrived at DevCentral and are trying to figure out the various extensibility features of our products. Well, hopefully this article will set you on the right path for whatever your task shall be and help you decide which technology you'll want to use to get there.

The Two: There may have been One Ring to Rule Them All in J.R.R. Tolkien's Middle Earth, but Mr. Tolkien better watch out for F5. We've got not one, but two rings of power: iRules and iControl.

iRules

iRules is F5's embedded scripting language used to inspect, modify, and enforce policy on content flowing through the network. Need to inspect all outgoing web pages to ensure no sensitive information is leaked? iRules is your answer. Need to dynamically assign priority to certain classes of client connections? iRules is for you again. Need to add a feature to BIG-IP? It's iRules again. We'll discuss more on iRules in an upcoming article. In the meantime, check out the iRules CodeShare to get an idea of what others are doing with them. iRules are very powerful, but they solve only one class of management issues. Let's say you are about to deploy a new web application in your production web farm. How do you automate that process? It's iControl this time around.

iControl

So, what exactly is iControl? Well, Marketing describes iControl as:

iControl is the first open API that enables applications to work in concert with the underlying network based on true software integration. Utilizing SOAP/XML to ensure open communications between dissimilar systems, iControl helps F5 customers, leading independent software vendors (ISVs), and Solution Providers realize new levels of automation and configuration management efficiency. Whether monitoring network-level traffic statistics, automating network configuration and management, or facilitating next generation service-oriented architectures, iControl gives organizations the power and flexibility to ensure that applications and the network work together for increased reliability, security, and performance. Further, iControl has proven itself as a valuable technology that can help reduce the cost of managing complex environments.

Got all that? To put it in simpler terms, iControl is a remote management Application Programming Interface (API). Think of it as an extension to the management GUI or Command Line Interface (CLI), allowing custom scripts, applications, processes, etc. to do what would normally be achieved by operators working within the admin interfaces. iControl was built using Web Services technologies (SOAP and WSDL), which makes integration into developer platforms (VS.Net, WebLogic, Eclipse, JDeveloper, ...) a cinch. Heck, iControl even works with dynamic scripting languages like Perl and Microsoft Windows PowerShell. Unlike the awkwardness of SNMP, iControl has a very robust and rich set of interfaces and methods allowing management of everything from system daemons, user accounts, load balancing (local and global), security, and iRules to everything else you could do with the GUI.

Under the covers, iControl is implemented as a set of SOAP-based Web Services described by a set of Web Service Description Language (WSDL) files. Implemented over HTTP+SSL, iControl applications communicate with the BIG-IP devices in a similar way that a browser would connect to the administration GUI. The application opens a secure HTTP connection to the BIG-IP's iControl portal using the same authentication credentials set up for the admin GUI accounts.
The portal then acts as a broker for all iControl method calls and forwards them to the appropriate subsystem on the device. Here are a few examples of how iControl is being used by customers and partners. Automatic Content Deployment to webfarms (1000s of servers daily) Enabling server provisioning in dynamic grid networks Secure remote access for vending machines (via the FirePass iControl Client API) Server identification masking (hiding specifics of backend servers) Response content filtering (scrubbing Credit Card and Social Security numbers) Extracting and archiving statistics for reporting and auditing Custom monitoring and alerting applications Notification alerting agents (rss feeds, email, pager) Listening application for network configuration change events The iRules Editor Many Many more... Are you hesitant to get started because you aren't a developer? Never fear, we have an article series of getting started with iControl, and we are always available on the forums to help out with coding/debugging issues that come up. Coupled with support for almost all mainstream development environments, you are bound to have someone in your organization with relevant experience. So what are you waiting for? Grab your favorite editor and get on the road to automation bliss!4.2KViews0likes3CommentsWhat Is BIG-IP?
tl;dr - BIG-IP is a collection of hardware platforms and software solutions providing services focused on security, reliability, and performance. F5's BIG-IP is a family of products covering software and hardware designed around application availability, access control, and security solutions. That's right, the BIG-IP name is interchangeable between F5's software and hardware application delivery controller and security products. This is different from BIG-IQ, a suite of management and orchestration tools, and F5 Silverline, F5's SaaS platform. When people refer to BIG-IP this can mean a single software module in BIG-IP's software family or it could mean a hardware chassis sitting in your datacenter. This can sometimes cause a lot of confusion when people say they have question about "BIG-IP" but we'll break it down here to reduce the confusion. BIG-IP Software BIG-IP software products are licensed modules that run on top of F5's Traffic Management Operation System® (TMOS). This custom operating system is an event driven operating system designed specifically to inspect network and application traffic and make real-time decisions based on the configurations you provide. The BIG-IP software can run on hardware or can run in virtualized environments. Virtualized systems provide BIG-IP software functionality where hardware implementations are unavailable, including public clouds and various managed infrastructures where rack space is a critical commodity. BIG-IP Primary Software Modules BIG-IP Local Traffic Manager (LTM) - Central to F5's full traffic proxy functionality, LTM provides the platform for creating virtual servers, performance, service, protocol, authentication, and security profiles to define and shape your application traffic. Most other modules in the BIG-IP family use LTM as a foundation for enhanced services. BIG-IP DNS - Formerly Global Traffic Manager, BIG-IP DNS provides similar security and load balancing features that LTM offers but at a global/multi-site scale. BIG-IP DNS offers services to distribute and secure DNS traffic advertising your application namespaces. BIG-IP Access Policy Manager (APM) - Provides federation, SSO, application access policies, and secure web tunneling. Allow granular access to your various applications, virtualized desktop environments, or just go full VPN tunnel. Secure Web Gateway Services (SWG) - Paired with APM, SWG enables access policy control for internet usage. You can allow, block, verify and log traffic with APM's access policies allowing flexibility around your acceptable internet and public web application use. You know.... contractors and interns shouldn't use Facebook but you're not going to be responsible why the CFO can't access their cat pics. BIG-IP Application Security Manager (ASM) - This is F5's web application firewall (WAF) solution. Traditional firewalls and layer 3 protection don't understand the complexities of many web applications. ASM allows you to tailor acceptable and expected application behavior on a per application basis . Zero day, DoS, and click fraud all rely on traditional security device's inability to protect unique application needs; ASM fills the gap between traditional firewall and tailored granular application protection. BIG-IP Advanced Firewall Manager (AFM) - AFM is designed to reduce the hardware and extra hops required when ADC's are paired with traditional firewalls. Operating at L3/L4, AFM helps protect traffic destined for your data center. 
Paired with ASM, you can implement protection services at L3 - L7 for a full ADC and Security solution in one box or virtual environment. BIG-IP Hardware BIG-IP hardware offers several types of purpose-built custom solutions, all designed in-house by our fantastic engineers; no white boxes here. BIG-IP hardware is offered via series releases, each offering improvements for performance and features determined by customer requirements. These may include increased port capacity, traffic throughput, CPU performance, FPGA feature functionality for hardware-based scalability, and virtualization capabilities. There are two primary variations of BIG-IP hardware, single chassis design, or VIPRION modular designs. Each offer unique advantages for internal and collocated infrastructures. Updates in processor architecture, FPGA, and interface performance gains are common so we recommend referring to F5's hardware pagefor more information.71KViews3likes3CommentsWhat is BIG-IQ?
tl;dr - BIG-IQ centralizes management, licensing, monitoring, and analytics for your dispersed BIG-IP infrastructure. If you have more than a few F5 BIG-IP's within your organization, managing devices as separate entities will become an administrative bottleneck and slow application deployments. Deploying cloud applications, you're potentially managing thousands of systems and having to deal with traditionallymonolithic administrative functions is a simple no-go. Enter BIG-IQ. BIG-IQ enables administrators to centrally manage BIG-IP infrastructure across the IT landscape. BIG-IQ discovers, tracks, manages, and monitors physical and virtual BIG-IP devices - in the cloud, on premise, or co-located at your preferred datacenter. BIG-IQ is a stand alone product available from F5 partners, or available through the AWS Marketplace. BIG-IQ consolidates common management requirements including but not limited to: Device discovery and monitoring: You can discovery, track, and monitor BIG-IP devices - including key metrics including CPU/memory, disk usage, and availability status Centralized Software Upgrades: Centrally manage BIG-IP upgrades (TMOS v10.20 and up) by uploading the release images to BIG-IQ and orchestrating the process for managed BIG-IPs. License Management: Manage BIG-IP virtual edition licenses, granting and revoking as you spin up/down resources. You can create license pools for applications or tenants for provisioning. BIG-IP Configuration Backup/Restore: Use BIG-IQ as a central repository of BIG-IP config files through ad-hoc or scheduled processes. Archive config to long term storage via automated SFTP/SCP. BIG-IP Device Cluster Support: Monitor high availability statuses and BIG-IP Device clusters. Integration to F5 iHealth Support Features: Upload and read detailed health reports of your BIG-IP's under management. Change Management: Evaluate, stage, and deploy configuration changes to BIG-IP. Create snapshots and config restore points and audit historical changes so you know who to blame. 😉 Certificate Management: Deploy, renew, or change SSL certs. Alerts allow you to plan ahead before certificates expire. Role-Based Access Control (RBAC): BIG-IQ controls access to it's managed services with role-based access controls (RBAC). You can create granular controls to create view, edit, and deploy provisioned services. Prebuilt roles within BIG-IQ easily allow multiple IT disciplines access to the areas of expertise they need without over provisioning permissions. Fig. 1 BIG-IQ 5.2 - Device Health Management BIG-IQ centralizes statistics and analytics visibility, extending BIG-IP's AVR engine. BIG-IQ collects and aggregates statistics from BIG-IP devices, locally and in the cloud. View metrics such as transactions per second, client latency, response throughput. You can create RBAC roles so security teams have private access to view DDoS attack mitigations, firewall rules triggered, or WebSafe and MobileSafe management dashboards. The reporting extends across all modules BIG-IQ manages, drastically easing the pane-of-glass view we all appreciate from management applications. For further reading on BIG-IQ please check out the following links: BIG-IQ Centralized Management @ F5.com Getting Started with BIG-IQ @ F5 University DevCentral BIG-IQ BIG-IQ @ Amazon Marketplace8.2KViews1like1Comment