Series: Visibility and Orchestration with F5
Implementing BIG-IP WAF logging and visibility with ELK
Scope
This technical article is useful for BIG-IP users familiar with web application security and the implementation and use of the Elastic Stack. This includes application security professionals, infrastructure management operators and SecDevOps/DevSecOps practitioners. The focus is on WAF logs exclusively. Firewall, Bot, or DoS mitigation logging into the Elastic Stack is the subject of a future article.

Introduction
This article focuses on the required configuration for sending Web Application Firewall (WAF) logs from the BIG-IP Advanced WAF (or BIG-IP ASM) module to an Elastic Stack (a.k.a. Elasticsearch-Logstash-Kibana or ELK).

First, this article goes over the configuration of BIG-IP. It is configured with a security policy and a logging profile attached to the virtual server that is being protected. This can be configured via the BIG-IP user interface (TMUI) or through the BIG-IP declarative interface (AS3).

The configuration of the Elastic Stack is discussed next, including the configuration of filters adapted to processing BIG-IP WAF logs.

Finally, the article provides some initial guidance on the metrics that can be taken into consideration for visibility. It discusses the use of dashboards and provides some recommendations with regards to potentially useful visualizations.

Pre-requisites and Initial Premise
For the purposes of this article and to follow the steps outlined below, the user will need to have at least one BIG-IP Advanced WAF running TMOS version 15.1 or above (note that this may work with previous versions but has not been tested). The target BIG-IP is already configured with:
- A virtual server
- A WAF policy

An operational Elastic Stack is also required. The administrator will need to have configuration and administrative privileges on both the BIG-IP and the Elastic Stack infrastructure. They will also need to be familiar with the network topology linking the BIG-IP with the Elasticsearch cluster/infrastructure. It is assumed that you want to use your Elasticsearch (ELK) logging infrastructure to gain visibility into BIG-IP WAF events.

Logging Profile Configuration
An essential part of getting WAF logs to the proper destination(s) is the Logging Profile. The following will go over the configuration of the Logging Profile that sends data to the Elastic Stack.
Overview of the steps:
1. Create the Logging Profile
2. Associate the Logging Profile with the virtual server

After following the procedure below, the log lines sent from the BIG-IP on the wire are comma-separated key/value pairs that look like the sample below:

Aug 25 03:07:19 localhost.localdomain ASM:unit_hostname="bigip1",management_ip_address="192.168.41.200",management_ip_address_2="N/A",http_class_name="/Common/log_to_elk_policy",web_application_name="/Common/log_to_elk_policy",policy_name="/Common/log_to_elk_policy",policy_apply_date="2020-08-10 06:50:39",violations="HTTP protocol compliance failed",support_id="5666478231990524056",request_status="blocked",response_code="0",ip_client="10.43.0.86",route_domain="0",method="GET",protocol="HTTP",query_string="name='",x_forwarded_for_header_value="N/A",sig_ids="N/A",sig_names="N/A",date_time="2020-08-25 03:07:19",severity="Error",attack_type="Non-browser Client,HTTP Parser Attack",geo_location="N/A",ip_address_intelligence="N/A",username="N/A",session_id="0",src_port="39348",dest_port="80",dest_ip="10.43.0.201",sub_violations="HTTP protocol compliance failed:Bad HTTP version",virus_name="N/A",violation_rating="5",websocket_direction="N/A",websocket_message_type="N/A",device_id="N/A",staged_sig_ids="",staged_sig_names="",threat_campaign_names="N/A",staged_threat_campaign_names="N/A",blocking_exception_reason="N/A",captcha_result="not_received",microservice="N/A",tap_event_id="N/A",tap_vid="N/A",vs_name="/Common/adv_waf_vs",sig_cves="N/A",staged_sig_cves="N/A",uri="/random",fragment="",request="GET /random?name=' or 1 = 1' HTTP/1.1\r\n",response="Response logging disabled"

Please choose one of the methods below. The configuration can be done through the web-based user interface (TMUI), the command line interface (TMSH), or directly with a declarative AS3 REST API call. Configuration through the BIG-IP native REST API is also possible but is not discussed herein.

TMUI Steps:

Create Profile
- Connect to the BIG-IP web UI and log in with administrative rights
- Navigate to Security >> Event Logs >> Logging Profiles
- Select "Create"
- Fill out the configuration fields as follows:
  - Profile Name (mandatory)
  - Enable Application Security
  - Set Storage Destination to Remote Storage
  - Set Logging Format to Key-Value Pairs (Splunk)
  - In the Server Addresses field, enter an IP Address and Port, then click on Add as shown below:
- Click on Create

Add Logging Profile to virtual server with the policy
- Select the target virtual server and click on the Security tab (Local Traffic >> Virtual Servers : Virtual Server List >> [target virtual server])
- Highlight the Log Profile in the Available column and move it to the Selected column as shown in the example below (the log profile is "log_all_to_elk"):
- Click on Update

At this time the BIG-IP will forward logs to the Elastic Stack.

TMSH Steps:

Create profile
1. SSH into the BIG-IP command line interface (CLI)
2. From the tmsh prompt, enter the following:
create security log profile [name_of_profile] application add { [name_of_profile] { logger-type remote remote-storage splunk servers add { [IP_address_for_ELK]:[TCP_Port_for_ELK] { } } } }
For example:
create security log profile dc_show_creation_elk application add { dc_show_creation_elk { logger-type remote remote-storage splunk servers add { 10.45.0.79:5244 { } } } }
3. Ensure that the changes are saved:
save sys config partitions all

Add Logging Profile to virtual server with the policy
1. From the tmsh prompt (assuming you are still logged in), enter the following:
modify ltm virtual [VS_name] security-log-profiles add { [name_of_profile] }
For example:
modify ltm virtual adv_waf_vs security-log-profiles add { dc_show_creation_elk }
2. Ensure that the changes are saved:
save sys config partitions all

At this time the BIG-IP sends logs to the Elastic Stack.

AS3
Application Services 3 (AS3) is a BIG-IP configuration API endpoint that allows the user to create an application from the ground up. For more information on F5's AS3, refer to link.

In order to attach a security policy to a virtual server, the AS3 declaration can either refer to a policy present on the BIG-IP or refer to a policy stored in XML format and available via HTTP to the BIG-IP (ref. link). The logging profile can be created and associated with the virtual server directly as part of the AS3 declaration. For more information on the creation of a WAF logging profile, refer to the documentation found here.

The following is an example of a part of an AS3 declaration that creates a security log profile that can be used to log to the Elastic Stack:

"secLogRemote": {
  "class": "Security_Log_Profile",
  "application": {
    "localStorage": false,
    "maxEntryLength": "10k",
    "protocol": "tcp",
    "remoteStorage": "splunk",
    "reportAnomaliesEnabled": true,
    "servers": [
      {
        "address": "10.45.0.79",
        "port": "5244"
      }
    ]
  }
}

In the sample above, the ELK stack IP address is 10.45.0.79 and it listens on port 5244 for BIG-IP WAF logs. Note that the log format used in this instance is "Splunk". There are no declared filters and thus only the illegal requests will get logged to the Elastic Stack. A sample AS3 declaration can be found here.
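The following is a minimal sketch showing how such a declaration could be pushed from a pipeline to the AS3 endpoint on a BIG-IP. The management address, credentials and file name are placeholders; the full AS3 declaration (not just the log profile snippet above) is assumed to be stored alongside the application code.

# Minimal sketch: push an AS3 declaration containing the security log profile
# above to a BIG-IP. Host, credentials and file name are placeholders.
import json
import requests

BIGIP = "https://192.0.2.10"           # hypothetical BIG-IP management address
AUTH = ("admin", "admin-password")     # hypothetical credentials

with open("as3_declaration.json") as f:   # full AS3 declaration stored in the repo
    declaration = json.load(f)

resp = requests.post(
    f"{BIGIP}/mgmt/shared/appsvcs/declare",  # AS3 declarative endpoint
    json=declaration,
    auth=AUTH,
    verify=False,  # lab only; validate certificates in production
)
resp.raise_for_status()
print(json.dumps(resp.json(), indent=2))   # AS3 returns the result of the deployment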
ELK Configuration
The Elastic Stack configuration consists of creating a new input on Logstash. This is achieved by adding an input/filter/output configuration to the Logstash configuration file. Optionally, the Logstash administrator might want to create a separate pipeline – for more information, refer to this link.

The following is a Logstash configuration known to work with WAF logs coming from BIG-IP:

input {
  syslog {
    port => 5244
  }
}
filter {
  grok {
    match => {
      "message" => [
        "attack_type=\"%{DATA:attack_type}\"",
        ",blocking_exception_reason=\"%{DATA:blocking_exception_reason}\"",
        ",date_time=\"%{DATA:date_time}\"",
        ",dest_port=\"%{DATA:dest_port}\"",
        ",ip_client=\"%{DATA:ip_client}\"",
        ",is_truncated=\"%{DATA:is_truncated}\"",
        ",method=\"%{DATA:method}\"",
        ",policy_name=\"%{DATA:policy_name}\"",
        ",protocol=\"%{DATA:protocol}\"",
        ",request_status=\"%{DATA:request_status}\"",
        ",response_code=\"%{DATA:response_code}\"",
        ",severity=\"%{DATA:severity}\"",
        ",sig_cves=\"%{DATA:sig_cves}\"",
        ",sig_ids=\"%{DATA:sig_ids}\"",
        ",sig_names=\"%{DATA:sig_names}\"",
        ",sig_set_names=\"%{DATA:sig_set_names}\"",
        ",src_port=\"%{DATA:src_port}\"",
        ",sub_violations=\"%{DATA:sub_violations}\"",
        ",support_id=\"%{DATA:support_id}\"",
        "unit_hostname=\"%{DATA:unit_hostname}\"",
        ",uri=\"%{DATA:uri}\"",
        ",violation_rating=\"%{DATA:violation_rating}\"",
        ",vs_name=\"%{DATA:vs_name}\"",
        ",x_forwarded_for_header_value=\"%{DATA:x_forwarded_for_header_value}\"",
        ",outcome=\"%{DATA:outcome}\"",
        ",outcome_reason=\"%{DATA:outcome_reason}\"",
        ",violations=\"%{DATA:violations}\"",
        ",violation_details=\"%{DATA:violation_details}\"",
        ",request=\"%{DATA:request}\""
      ]
    }
    break_on_match => false
  }
  mutate {
    split => { "attack_type" => "," }
    split => { "sig_ids" => "," }
    split => { "sig_names" => "," }
    split => { "sig_cves" => "," }
    split => { "staged_sig_ids" => "," }
    split => { "staged_sig_names" => "," }
    split => { "staged_sig_cves" => "," }
    split => { "sig_set_names" => "," }
    split => { "threat_campaign_names" => "," }
    split => { "staged_threat_campaign_names" => "," }
    split => { "violations" => "," }
    split => { "sub_violations" => "," }
  }
  if [x_forwarded_for_header_value] != "N/A" {
    mutate { add_field => { "source_host" => "%{x_forwarded_for_header_value}" } }
  } else {
    mutate { add_field => { "source_host" => "%{ip_client}" } }
  }
  geoip {
    source => "source_host"
  }
}
output {
  elasticsearch {
    hosts => ['localhost:9200']
    index => "big_ip-waf-logs-%{+YYY.MM.dd}"
  }
}

After adding the configuration above to the Logstash parameters, you will need to restart the Logstash instance to load the new configuration. The sample above is also available here.

The Elastic Stack is now ready to process the incoming logs. You can start sending traffic to your policy and see logs populating the Elastic Stack. If you are looking for a test tool to generate traffic to your virtual server, F5 provides a simple WAF tester tool that can be found here.

At this point, you can start creating dashboards on the Elastic Stack that will satisfy your operational needs with the following overall steps:
- Ensure that the log index is being created (Stack Management >> Index Management) – see the verification sketch after this section
- Create a Kibana Index Pattern (Stack Management >> Index Patterns)
- You can now peruse the logs from the Kibana Discover menu (Discover)
- And start creating visualizations that will be included in your Dashboards (Dashboards >> Editing Simple WAF Dashboard)

A complete Elastic Stack configuration can be found here – note that it can be used with both BIG-IP WAF and NGINX App Protect.
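As a quick sanity check that logs are flowing, the index can be verified directly against Elasticsearch. The sketch below assumes Elasticsearch is reachable on localhost:9200 (as in the output block above) and that string fields such as request_status received the default dynamic ".keyword" mapping; adjust the field names if your mapping differs.

# Hedged verification sketch: confirm the WAF log index exists and count
# recently blocked requests. Host and field mappings are assumptions.
import requests

ES = "http://localhost:9200"

# List indices matching the pattern used in the Logstash output block
resp = requests.get(f"{ES}/_cat/indices/big_ip-waf-logs-*?format=json")
resp.raise_for_status()
for idx in resp.json():
    print(idx["index"], idx["docs.count"])

# Count blocked requests seen in the last 15 minutes
query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"request_status.keyword": "blocked"}},   # assumes default .keyword mapping
                {"range": {"@timestamp": {"gte": "now-15m"}}},
            ]
        }
    }
}
count = requests.post(f"{ES}/big_ip-waf-logs-*/_count", json=query)
print(count.json().get("count", 0), "blocked requests in the last 15 minutes")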
Conclusion
You can now leverage the widely available Elastic Stack to log and visualize BIG-IP WAF logs. From a dashboard perspective, it may be useful to track the following metrics:
- Request rate
- Response codes
- The distribution of requests in terms of clean, blocked or alerted status
- The top talkers making requests
- The top URLs being accessed
- Top violator source IPs

An example of the dashboard might look like the following:
Dashboards for NGINX App Protect

Introduction
NGINX App Protect is a new-generation WAF from F5 built in accordance with the UNIX philosophy: it does one thing well, and everything else comes from integrations. NGINX App Protect is extremely good at HTTP traffic security. It inherited a powerful WAF engine from BIG-IP and a light footprint and high performance from NGINX. Therefore, NGINX App Protect brings fine-grained security to all kinds of insertion points where NGINX is used, whether on-premises or cloud-based. NGINX App Protect is a powerful and flexible security tool, but it ships without visualization capabilities, which are essential for a good security product. As mentioned above, everything besides the primary functionality comes from integrations. In order to introduce visualization capabilities, I've developed an integration between NGINX App Protect and the ELK stack (Elasticsearch, Logstash, Kibana), one of the most widely adopted stacks for log collection and visualization. Based on logs from NGINX App Protect, ELK generates dashboards to clearly visualize what the WAF is doing.

Overview Dashboard
Currently, there are two dashboards available. The "Overview" dashboard provides a high-level view of the current situation and also allows you to discover historical trends. You are free to select any time period of interest and filter data simply by clicking on values right on the dashboard. The table at the bottom of the dashboard lists all requests within a time frame and allows you to see exactly what a request looked like.

False Positives Dashboard
Another useful dashboard called "False Positives" helps to identify false positives and adjust the WAF policy based on findings. For example, the chart below shows the number of unique IPs that hit a signature. Under normal conditions, when traffic is mostly legitimate, "per signature" graphs should fluctuate around zero because legitimate users are not supposed to hit any signatures. Therefore, if there is a spike and the number of unique IPs that hit a signature is close to the total number of sources, there is likely a false positive and the policy needs to be adjusted (a sketch of this heuristic is shown after the conclusion below).

Conclusion
This is an open-source and community-driven project. The more people contribute, the better it becomes. Feel free to use it for your projects and contribute code or ideas directly to the repo. The plan is to make these dashboards suitable for all kinds of F5 WAF flavors, including AWAF and EAP. This should be simple because it only requires a Logstash pipeline adjustment to unify the log format stored in the Elasticsearch index. If you have a project for AWAF or EAP going on and would like to use the dashboards, please feel free to develop and create a pull request with an adjusted Logstash pipeline to normalize logs from other WAFs.

Github repo: https://github.com/464d41/f5-waf-elk-dashboards

Feel free to reach me with questions. Good luck!
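The false-positive heuristic described above can also be expressed as a query. The sketch below is an assumption-laden illustration, not part of the dashboards project: the index pattern and field names (sig_names, source_host) are borrowed from the BIG-IP WAF Logstash filter earlier in this series and may differ from the fields produced by the NGINX App Protect pipeline.

# Hedged sketch of the heuristic: compare unique source IPs per signature with
# the total number of unique sources. Index and field names are assumptions.
import requests

ES = "http://localhost:9200"
INDEX = "big_ip-waf-logs-*"   # placeholder index pattern

aggs_query = {
    "size": 0,
    "aggs": {
        "total_sources": {"cardinality": {"field": "source_host.keyword"}},
        "per_signature": {
            "terms": {"field": "sig_names.keyword", "size": 20},
            "aggs": {"unique_sources": {"cardinality": {"field": "source_host.keyword"}}},
        },
    },
}

resp = requests.post(f"{ES}/{INDEX}/_search", json=aggs_query)
resp.raise_for_status()
result = resp.json()["aggregations"]
total = result["total_sources"]["value"]

for bucket in result["per_signature"]["buckets"]:
    unique = bucket["unique_sources"]["value"]
    # If a large share of all sources trip the same signature, flag it for review.
    if total and unique / total > 0.5:
        print(f"Possible false positive: {bucket['key']} hit by {unique}/{total} sources")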
Visibility and Orchestration

Introduction
Applications living in public clouds such as AWS, Azure, GCP and Alibaba Cloud, delivered via CI/CD pipelines and deployed using Kubernetes or Docker, introduce significant challenges from a security visibility and orchestration perspective. This article is the first of a series and aims at defining the tenets of effective security visibility and exploring a journey to achieving end-to-end visibility into the application construct. Considering visibility from the inception of a service or application is crucial for a successful implementation of security. Comprehensive visibility enables security- and performance-oriented insights, generation of alerts and orchestration of action based on pertinent events. The end goal of integrating visibility and orchestration in application management is to provide a complete framework for the operation of adaptive applications with the appropriate level of security.

Initial Premise and Challenge
For the purposes of this article, an application is defined as a collection of loosely coupled microservices (ref. https://en.wikipedia.org/wiki/Microservices) that communicate over the back- and/or front-end network to support the frontend. Microservices are standalone services that provide one or more capabilities of the application. They are deployed across multiple clouds and application management infrastructures. The following shows a sample application built on microservices (ref. https://github.com/GoogleCloudPlatform/microservices-demo).

The challenge posed by the infrastructure above lies in providing comprehensive visibility for this application. From a confidentiality, integrity and availability standpoint, it is essential to have this visibility to ensure that the application is running optimally and that users are guaranteed a reasonable level of reliability.

Visibility
In this article we consider two distinct aspects of security: posture assessment and monitoring. The first is based on the review and assessment of the configuration of the security measures protecting the application. For example, the analysis aims to answer questions such as:
- Is the application protected by an advanced web application firewall?
- What threats are the application and/or the microservices protected from? DoS? Bot? Application vulnerabilities?
- Is the communication between microservices confidential?

The second aspect of security visibility is dynamic in nature. It relies on the collection of logs and intelligence from the infrastructure and control points. The information collected is then processed and presented in an intelligible fashion to the administrator. For example, log gathering aims to answer questions such as:
- How much of the traffic reaching my applications and their components is "good traffic", and how much of it is coming from bots and scans?
- Does the application traffic hitting my servers conform to what I am expecting? Is there any rogue user traffic attempting to exploit possible vulnerabilities to disrupt availability of my application or to gain access to the data it references?
- Are the protection policies in place appropriate and effective?

The figure below gives a graphical representation of possible feeds into the visibility infrastructure.

Once the data is ingested and available from the assessment and monitoring, we can use it to produce reports, trigger alerts and extract insights. Strategies for reporting, alerting and insights will vary depending on requirements, infrastructure and organizational/operational imperatives.
In following articles, we will offer different possibilities to generate and distribute reports, along with strategies for alerting and providing insights (dashboards, ChatOps, etc.).

From an F5-centric perspective, the figure below shows the components that can contribute to the assessment and monitoring of the infrastructure:

Orchestration
In order to be able to consume security visibility with an adaptive application, it needs to be implemented as part of the CI/CD pipeline. The aim is to deploy it alongside the application. Any dev modification will trigger the inclusion and testing of the security visibility. Also, any change in the security posture (new rules, modified policy, etc.) will trigger the pipeline and hence testing of deployment and validation of security efficacy.

From an application deployment point of view, inserting F5 solutions in the application through the pipeline would look something like the picture below.

In the picture above, BIG-IP, NGINX Plus / NGINX App Protect, F5 Cloud Services, and Silverline/Shape are inserted at key points in the infrastructure and provide control points for the traffic traversing the application. Each element in the chain also provides a point that will generate telemetry and other information regarding the component protected by the F5 element. In conjunction with the control points, security can also be provided through the implementation of an F5 AspenMesh service mesh for Kubernetes environments.

The implementation of security is integrated into the CI/CD pipeline so as to tightly integrate testing and deployment of security with the application. An illustration of such a process is provided as an example in the figure below.

In the figure above, changes in the security of the application are tightly coupled with the CI/CD pipeline. It is understood that this might not always be practical for different reasons; however, it is a desirable goal that simplifies security and overall operations considerably. The figure shows that any change in the service or workload code, its security, or the API endpoints it serves triggers the pipeline for validation and deployment in the different environments (prod, pre-prod/staging, testing). The figure does not show different integration points, testing tools (unit and integration testing) or static code analysis tools, as those are beyond the scope of this discussion.

Eventually, the integration of AI/ML and other tooling should enable security orchestration, automation and response (SOAR). With greater integration and automation with other network and infrastructure security tools (DoS prevention, firewall, CDN, DNS protection, intrusion prevention) and the automated framework, it is conceivable to offer important SOAR capabilities. The insertion and management of visibility, inference and automation in the infrastructure enables coordinated and automated action on any element of the cyber kill-chain, thus providing enhanced security and end-to-end remediation.
Journey
It is understood that the implementation of security frameworks cannot occur overnight. This is true for both existing and net-new infrastructure. The journey to integration will be fraught with resistance from both the security or SecDevOps group and the application or DevOps organization. The former needs to adapt to tools that seem foreign but are incredibly accessible and powerful. The latter feels that security is a hindrance that delays implementation of the application and is anything but agile. Neither sentiment is based on more than anecdotal evidence.

F5 offers a set of possible integrations for automated pipelines; this includes:
- A comprehensive automation toolchain for BIG-IP (link).
- F5-supported integrations with multiple infrastructure-as-code and automation tools such as Terraform (link) or Ansible (link).
- Integrations with all major cloud providers including Amazon Web Services (link), Google Cloud Platform (link) and Microsoft Azure (link).
- Full RESTful APIs to manage all F5 products including Cloud Services, Shape and NGINX Plus instances.

These integrations are supported and maintained by F5 and available across all products.

The following are possible steps in the journey to achieve complete visibility with the use of F5 products – these steps, as well as recommendations, will be discussed in greater detail in follow-up articles:
- Architect the insertion of F5 control points (Cloud Services, BIG-IP, NGINX, Shape, etc.) wherever possible. This differs somewhat based on the type of infrastructure (e.g. deployment in an on-premises datacenter versus cloud deployments).
- Automate the insertion and configuration of F5 control points, leveraging the centralized management platforms and other F5-provided tools to make the process easy.
- Automate the configuration of logging and telemetry of the different control points so that they feed into the centralized management platforms and other visibility tools such as Beacon, ELK or other 3rd-party utilities (e.g. SIEM).
- Based on available tools, define a logging/telemetry/signaling strategy to collect and build a global view of the infrastructure – this might include layering technologies to distill the information as it is interpreted and aggregated; for example, leverage BIG-IQ as the first line of logging and visibility for BIG-IP.
- Leveraging F5 Beacon or other 3rd-party tools, build custom dashboards offering various insights to the appropriate parties involved in the delivery and security of applications.
- Enhance the visibility provided through a reporting strategy that allows the sharing of information among different groups as well as different platforms (e.g. SIEM).
- Build a centralized solution collecting metrics, logs and telemetry from the different platforms to build meaningful insights.
- (Optional) Integrate with flexible ops tools such as ChatOps (Slack, MS Teams, etc.) and other visibility tools via webhooks.
- (Optional) Integrate with ML/AI tools to speed the discovery and creation of insights.
- (Optional) Progressively build SOAR and other capabilities leveraging F5 and/or 3rd-party tools.

Keep in mind that applications are fluid and adaptive. This raises important challenges with regards to maintaining and adjusting the visibility so that it remains relevant and efficient. This justifies the integration of security in the CI/CD pipeline to remain nimble and adapt at the speed of business.
Conclusion
This article aims at helping you define a journey to building a comprehensive, resilient and scalable application security visibility framework. The context for this journey is that of adaptive applications that evolve constantly. Do note that the same principles apply to common off-the-shelf applications as well, as they are extended and communicate with third-party applications.

The journey may begin by defining the goals for implementing security visibility and then a strategy to reach those goals. To be successful, the security strategy should include a "shift-left" plan to integrate into the CI/CD pipeline, removing friction and complexity from an implementation standpoint. Operational efficiency and usability should remain the guiding principles at the forefront of the strategy to ensure continued use, evolution and rapid change.

The next articles in this series will help you along this journey, leveraging F5 technologies such as BIG-IP, BIG-IQ, NGINX Plus, NGINX App Protect, Essential App Protect and other F5 Cloud Services to gather telemetry and configuration information. Other articles will discuss avenues for visibility and alerting services leveraging F5 services such as Beacon, as well as 3rd-party utilities and infrastructure.

At the heart of this article is the implementation of proxies and control points throughout the networked infrastructure to gather data and assess the application's security posture. As a continuation of this discussion, the reader is encouraged to consider other infrastructure aspects such as static code analysis, infrastructure isolation (what process/container/hypervisor connects to which peer, etc.) and monitoring.
Evaluate WAF Policy with BIG-IQ Policy Analyzer

Introduction
With BIG-IQ 8.0, F5 introduced a policy analyzer feature for web application security. It allows you to have an evaluation of your policy with respect to F5 recommended practices. It results in giving your team suggestions on enhancing your application's security posture from a Web Application Firewall perspective. This article will take you through the process of using the Policy Analyzer feature. The resulting report can be exported to PDF for wider consumption.

Using the Policy Analyzer
The "Policy Analyzer" feature is available from the Configuration menu on BIG-IQ. Ensure that you log in to the BIG-IQ web interface with sufficient privileges to access and view the Application Security Policies and their contents. The figure below shows how to access the policy by:
- selecting the Configuration tab,
- highlighting the Security menu,
- expanding the item labelled Web Application Security,
- selecting the Policies.

The analyzer feature is available from the "More" menu as shown below:

The Policy Analyzer screen provides the 4 main sections outlined below.

The Security Score shown above provides a synthetic assessment of the policy based on the severity and number of recommendations. To look into more detail, refer to the recommendations table shown in the figure below.

From the screen above, you can select and choose to ignore recommendations. You can also click on a recommendation to access the feature configuration screen directly. This allows you to modify the policy directly from the Analyzer screen. For example, clicking on the "More than 10% of attack signatures are in staging (…)" entry points to the policy configuration screen shown below:

This allows you to review and hone your policy accordingly and adhere to recommended practices. Once the changes are made, make sure to Save & Close. Keep in mind that you will need to go through the policy deployment process for the policy to become effective on the BIG-IP (Deployment >> Web Application Security).

Conclusion
BIG-IQ's Policy Analyzer can be used to gain better visibility into your security posture from one central location for your entire application security infrastructure. The insights provided by the Policy Analyzer tool provide a starting point to gaining visibility into the efficacy of the protection in place.
Shift-left Security Visibility

5 Minute Read

Applications evolve rapidly, whether your organization runs off-the-shelf applications or builds its own applications and frameworks. Increasingly, workloads are developed in a matter of hours, packaged, instantiated and torn down in timespans measured in minutes. This is the nature of continuous deployment pipelines.

Securing applications and workloads that are highly dynamic, possibly living across different colocation facilities (colos) and cloud infrastructures (public and private), can only be achieved by following continuous integration and continuous deployment (CI/CD) methodologies. In practice, this has resulted in integrating security in the software development and deployment lifecycles with the automation of:
- Code scanning when checked in to software versioning management tools (Git repository) using static application security testing (SAST),
- Integrating a web application firewall (WAF) in the CI/CD pipeline, as can be seen with BIG-IP Advanced WAF or NGINX App Protect and Kubernetes,
- Verifying artifacts used for application instantiation, such as packages and containers, during creation and before they are stored, to ensure that they are free of malware,
- Scanning applications/workloads at runtime as part of the pipeline before deployment in production, leveraging dynamic application security testing (DAST) tools,
- Securing runtime environments by enforcing admission control, use of least privilege during spin-up, and implementing monitoring,
- Building in compliance at every stage of the development and deployment process as needed (HIPAA, PCI, etc.).

Building automated and orchestrated:
- Monitoring of Kubernetes clusters and other cloud and physical infrastructure where workloads run, however ephemerally, tracking things like container health, container orchestration and ownership across the cluster,
- Monitoring of workloads and cloud services following site reliability engineering (SRE) guidelines – following the "Golden Signals" (Source: SRE Book),
- Regular vulnerability scanning of applications in production.

Building security into the early stages of the software development lifecycle is part of a "shift-left" strategy, and is discussed here or here. In order to support specific service level objectives (SLO), monitoring at the application and infrastructure level is built into the infrastructure from the perspective of latency, traffic, errors and saturation. More information on adopting SRE with F5 can be found here.

What about security visibility?
The fundamental need to have a holistic view of the application and its components is essential to be able to mitigate threats. You need to visualize your applications and possible threats in order to assess risk, identify threat vectors before attackers do, and possibly investigate in the event of an issue. In a previous article, we focused on the posture assessment (static view) and monitoring (dynamic view) axes as fundamentals of security visibility. Implementing this visibility entails building and deploying security and security visibility within the pipeline alongside the application, workload or infrastructure.
The above figure shows the insertion of WAF policy and logging into the software build pipeline. In a shift-left effort, the logging, telemetry and security policy configuration are integrated alongside the application's SDLC. The intent is to leverage the same pipeline infrastructure and have the security and visibility aspects integrated in the project with the workload code. The WAF provides only one aspect of the application security suite and is the focus of this exercise. Other infrastructure-centric security configurations, such as mTLS use within a Kubernetes service mesh or authentication/authorization framework configurations, can also be inserted in the pipeline.

In order to simplify testing and deployment, the security infrastructure needs to be defined as code for consistent deployment across testing and production environments. Within an infrastructure-as-code and security-as-code framework, this article is meant to outline the need for visibility-as-code, and more precisely security-visibility-as-code. All F5 WAF products and their associated logging, monitoring and telemetry capabilities are easily automated and defined as code. They can be inserted anywhere programmatically, in all environments, for testing as well as production.
BIG-IQ Access Configuration Interface Quick Introduction

Introduction
Prior to the introduction of BIG-IQ 8.0, you had to use the BIG-IQ graphical user interface (GUI) and the visual policy editor (VPE) to configure access policies and associate them with a virtual server. Starting with BIG-IQ 8.0, a new REST endpoint was created to simplify Access configuration workflows. The aim is to make it possible to configure BIG-IP APM with a one-shot API call. You are now able to store the configuration in your versioning tool (Git, SVN, etc.) and easily integrate the configuration of BIG-IP APM into your automation and pipeline tools.

The focus of the endpoint described below is to create a security assertion markup language (SAML) service provider (SP) on a BIG-IP APM device and associate it with a new virtual server. The BIG-IP will then insert a header on the server side providing session information to the back-end server.

For more information about SAML SP, please refer to this short video. The F5 BIG-IP APM implementation and use cases of SAML are discussed in this support page. As a reminder, the BIG-IQ API reference documentation can be found here. Documentation for the Access Simplified Workflow can be found here.

The API is designed to achieve the following on (a) target BIG-IP(s):
- Create an access policy with the following elements:
  - Configure a SAML Service Provider (SP) on BIG-IQ
  - Associate up to two (2) IdP connectors with an SP profile
  - Configure an access policy to take into account up to two (2) key-value pairs in the SAML assertion
  - Add up to two (2) authentication contexts
  - Integrate back-end single sign-on functionality
- Create a virtual server with the following characteristics:
  - Up to two (2) pool members
  - An associated access profile (as created above)

The figure below shows a simplified workflow of the configuration process. A few shortcuts are taken in the figure above as it is meant to illustrate the advantage of the simplified workflow.

Configuration
For the configuration, the administrator needs to:
- Create a JSON blurb or payload that will be sent to the BIG-IQ API
- Authenticate to the BIG-IQ API
- Send the payload to the BIG-IQ
- Ensure that the workflow completes successfully
- Deploy the newly created policy and virtual server to the BIG-IP

The following provides a step-by-step manual configuration leveraging the API. In practice, the steps will be automated and included in the pipeline used to deploy the application, leveraging the enterprise tooling and processes in place.

1.- Authenticate to the API
API interactions with the BIG-IQ API require the use of a token. The initial REST call should look like the following:
REST Endpoint: /mgmt/shared/authn/login
HTTP Method: POST
Headers:
- content-type: application/json
Content:
{ "username": "", "password": "", "loginProviderName": "" }

Example:
POST https://10.0.0.1/mgmt/shared/authn/login HTTP/1.1
Headers:
content-type: application/json
Content:
{ "username": "username", "password": "complicatedPassword!", "loginProviderName": "RadiusServer" }

The call above authenticates the specified user ("username" in this example) to the API. The result of a successful authentication is a response from the BIG-IQ API containing a token.

2.- Push the configuration to BIG-IQ
To send the configuration to the BIG-IQ, you will need to send:
- An HTTP POST request
- To the following URI: /mgmt/cm/access/workflow/access-workflow
- With the custom authentication header X-F5-Auth-Token set to the value of the authentication token obtained in the previous step.
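The following is a hedged sketch of steps 1 and 2 together: authenticate to BIG-IQ and post the access workflow payload that is described in the next section. The management address, credentials and payload file name are placeholders, and the sketch assumes the token value is returned under token.token in the login response, which is the usual BIG-IQ behavior.

# Hedged sketch: authenticate, then post the access workflow payload.
import json
import requests

BIGIQ = "https://bigiq.example.com"   # hypothetical BIG-IQ management address

# Step 1 - authenticate and retrieve a token
auth = requests.post(
    f"{BIGIQ}/mgmt/shared/authn/login",
    json={"username": "username", "password": "complicatedPassword!", "loginProviderName": "RadiusServer"},
    verify=False,  # lab only
)
auth.raise_for_status()
token = auth.json()["token"]["token"]   # value used in the X-F5-Auth-Token header

# Step 2 - push the access workflow payload (structure shown below)
with open("saml_sp_workflow.json") as f:   # placeholder payload file
    payload = json.load(f)

resp = requests.post(
    f"{BIGIQ}/mgmt/cm/access/workflow/access-workflow",
    json=payload,
    headers={"X-F5-Auth-Token": token},
    verify=False,
)
resp.raise_for_status()
print("workflow id:", resp.json().get("id"), "status:", resp.json().get("status"))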
The payload of the POST request will look like the following (broken up into segments for clarity – the full sample file is in the attachment).

To start the JSON payload, you will need to give it a name, make sure the type is "samlSP" as shown below, and define which BIG-IQ Access Group to associate the configuration with:
{ "name": "workflow_saml_3", "type": "samlSP", "accessDeviceGroup": "BIG-IQ-Access-Device-Group", "configurations": {

In the "configurations" section, you will need to define the SP service – this will look like the sample below:
"samlSPService": { "entityId": "https://www.testsaml.lab", "idpConnectors": [ { "connector": { "entityId": "https://www.testidp.lab", "ssoUri": "https://www.testidp.lab/sso" } } ], "attributeConsumingServices": [ { "service": { "serviceName": "service_01", "isDefault": true, "attributes": [ { "attributeName": "service_attribute" } ] } } ], "authContextClassList": { "classes": [ { "value": "urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport" } ] } },

In the declaration above, the following items are configured:
- IdP SAML connector URL
- IdP SAML connector URI
- Name of the service
- Definition of the SP service as the default
- The attributes that will be used for processing in the assertion
- Authentication context information

The "virtualServers" section of the JSON payload focuses on the creation of the virtual server and the related pool and pool-member configuration. In the sample blurb below, we have:
- A new virtual server listening on [IP address]:443
- A device in the BIG-IQ Access device group to deploy the configuration to
- New client and server SSL profiles based on the default profiles
- Related pool/pool members and monitoring parameters
"virtualServers": [ { "port": "443", "destinationIpAddress": "[IP address]", "targetDevice": "BOS-vBIGIP01.termmarc.com", "clientsideSsl": "/Common/clientssl", "serversideSsl": "/Common/serverssl", "poolServer": { "monitors": { "https": [ "/Common/https" ] }, "members": [ { "ipAddress": "[IP address]", "port": "443", "priorityGroup": 10 }, { "ipAddress": "[IP address]", "port": "443" } ] } } ], "accessProfile": {}, "singleSignOn": { "type": "httpHeaders", "httpHeaders": [ { "headerName": "Authorization", "headerValue": "%{session.saml.last.identity}" }, { "headerName": "Authorization2", "headerValue": "%{session.saml.last.identity2}" } ] },

You can also choose to deploy endpoint checks as part of the configuration. This allows device posture checking before granting access to the protected resource. A sample endpointCheck configuration is provided below:
"endpointCheck": { "clientOS": { "windows": { "windows7": true, "windows10": true, "windows8_1": true, "antivirus": {}, "firewall": {}, "machineCertAuth": {} }, "windowsRT": { "antivirus": {}, "firewall": {} }, "linux": { "antivirus": { "dbAge": 102, "lastScan": 102 }, "firewall": {} }, "macOS": { "antivirus": { "dbAge": 103, "lastScan": 103 } }, "iOS": {}, "android": {}, "chromeOS": { "antivirus": { "dbAge": 104, "lastScan": 104 }, "firewall": {} } } } } }

After the HTTP POST, the BIG-IQ will respond with a transaction id. A sample of what it looks like is given below:
{ […] "accessDeviceGroup":" BIG-IQ-Access-Device-Group ", "id":"edc17b06-8d97-47e1-9a78-3d47d2db70a6", "status":"STARTED", "name":"workflow_saml_3", […] }

The initial status of the workflow is "STARTED", as shown above.
To check on the status of the workflow, you can send an HTTP GET request to the following BIG-IQ URI:
https://[BIG-IQ Mgmt IP Address or Hostname]/mgmt/cm/access/workflow/access-workflow/[workflow_id]

Once the status returns FINISHED, two new items are available on BIG-IQ:
- One new SAML SP access policy (its name matches the workflow name in the JSON payload)
- One new virtual server with its associated pool and pool members (names match the workflow name in the JSON payload)

3.- Deploy the changes to BIG-IP
This is achieved with the usual deployment process that you are familiar with.

Conclusion
You are now able to create a new policy and associated artifacts (virtual server, pool, pool members) using a single call to the BIG-IQ API. These items can then be manipulated, assigned and deployed as needed on the managed BIG-IPs.
F5 SSLO Unified Configuration API Quick Introduction

Introduction
Prior to the introduction of BIG-IQ 8.0, you had to use the BIG-IQ graphical user interface (GUI) to configure F5 SSL Orchestrator (SSLO) topologies and their dependencies. Starting with BIG-IQ 8.0, a new unified, supported and documented REST API endpoint was created to simplify SSLO configuration workflows. The aim is to simplify the configuration of F5 SSLO using standardized API calls. You are now able to store the configuration in your versioning tool (Git, SVN, etc.) and easily integrate the configuration of F5 SSLO into your automation and pipeline tools.

For more information about F5 SSLO, please refer to this introductory video. An overview of F5 SSL Orchestrator is provided in K1174564. As a reminder, the BIG-IQ API reference documentation can be found here. Documentation for the Access Simplified Workflow can be found here.

The figure below shows a possible use for the SSLO Unified API. A few shortcuts are taken in the figure above as it is meant to illustrate the advantage of the simplified workflow.

Example Configuration
For the configuration, the administrator needs to:
- Create a JSON blurb or payload that will be sent to the BIG-IQ API
- Authenticate to the BIG-IQ API
- Send the payload to the BIG-IQ
- Ensure that the workflow completes successfully

The following provides a step-by-step configuration of SSLO leveraging the API. In practice, the steps may be automated and included in the pipeline used to deploy the application, leveraging the enterprise tooling and processes in place.

1.- Authenticate to the API
API interactions with the BIG-IQ API require the use of a token. The initial REST call should look like the following:
REST Endpoint: /mgmt/shared/authn/login
HTTP Method: POST
Headers:
- content-type: application/json
Content:
{ "username": "", "password": "", "loginProviderName": "" }

Example:
POST https://10.0.0.1/mgmt/shared/authn/login HTTP/1.1
Headers:
content-type: application/json
Content:
{ "username": "username", "password": "complicatedPassword!", "loginProviderName": "RadiusServer" }

The call above authenticates the specified user to the API. The result of a successful authentication is a response from the BIG-IQ API containing a token.

2.- Push the configuration to BIG-IQ
The headers and HTTP request should look like the following:
URI: /mgmt/cm/sslo/api/topology
HTTP Method: POST
Headers:
- content-type: application/json
- X-F5-Auth-Token: [token obtained from the authentication process above]
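As a hedged sketch, the request above and the status check described later in this article could be scripted as follows. The BIG-IQ address, token retrieval and payload file name are placeholders; the topology and task endpoints are the ones given in this article, and the payload structure is described in the next section.

# Hedged sketch: post the SSLO topology payload and poll the deployment task.
import json
import time
import requests

BIGIQ = "https://bigiq.example.com"      # hypothetical BIG-IQ management address
TOKEN = "value-from-step-1"              # X-F5-Auth-Token obtained during authentication
HEADERS = {"X-F5-Auth-Token": TOKEN, "Content-Type": "application/json"}

with open("sslo_topology.json") as f:    # full payload, concatenated as described below
    payload = json.load(f)

resp = requests.post(f"{BIGIQ}/mgmt/cm/sslo/api/topology", json=payload, headers=HEADERS, verify=False)
resp.raise_for_status()
task_id = resp.json()["id"]

# Poll the deployment task until it reaches a terminal state
while True:
    task = requests.get(f"{BIGIQ}/mgmt/cm/sslo/tasks/api/{task_id}", headers=HEADERS, verify=False).json()
    status = task.get("status")
    print("task status:", status)
    if status in ("FINISHED", "FAILED"):
        break
    time.sleep(10)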
To send the configuration to the BIG-IQ, you will need to send the following payload. The blurb is cut up into smaller pieces for readability; the full concatenated text is available in the attached file.

Start by defining a new topology with the following characteristics:
- Name: "sslo_NewTopology"
- Listening on the "/Common/VLAN_TAP" VLAN
- The topology is of type "topology_l3_outbound"
- The SSL settings defined below, named "ssloT_NewSsl_Dec"
- The policy is called "ssloP_NewPolicy_Dec"

The JSON payload starts with the following:
{ "template": { "TOPOLOGY": { "name": "sslo_NewTopology ", "ingressNetwork": { "vlans": [ { "name": "/Common/VLAN_TAP" } ] }, "type": "topology_l3_outbound", "sslSetting": "ssloT_NewSsl_Dec", "securityPolicy": "ssloP_NewPolicy_Dec" },

The SSL settings used above are defined in the following JSON, which creates a new profile with default values:
"SSL_SETTINGS": { "name": "ssloT_NewSsl_Dec" },

The security policy is configured as follows:
- Name: ssloP_NewPolicy_Dec
- Function: introduces a pinning policy doing a policy lookup – matching requests are bypassed (no SSL decryption) and associated with the service chain "ssloSC_NewServiceChain_Dec" defined further below.
"SECURITY_POLICY": { "name": "ssloP_NewPolicy_Dec", "rules": [ { "mode": "edit", "name": "Pinners_Rule", "action": "allow", "operation": "AND", "conditions": [ { "type": "SNI Category Lookup", "options": { "category": [ "Pinners" ] } }, { "type": "SSL Check", "options": { "ssl": true } } ], "actionOptions": { "ssl": "bypass", "serviceChain": "ssloSC_NewServiceChain_Dec" } }, { "mode": "edit", "name": "All Traffic", "action": "allow", "isDefault": true, "operation": "AND", "actionOptions": { "ssl": "intercept" } } ] },

The service chain configuration is defined below to forward the traffic to the "ssloS_ICAP_Dec" service. This is done with the following JSON:
"SERVICE_CHAIN": { "ssloSC_NewServiceChain_Declarative": { "name": "ssloSC_NewServiceChain_Dec", "orderedServiceList": [ { "name":"ssloS_ICAP_Dec" } ] } },

The "ssloS_ICAP_Dec" service is defined in the JSON below with IP 3.3.3.3 on port 1344:
"SERVICE": { "ssloS_ICAP_Declarative": { "name": "ssloS_ICAP_Dec", "customService": { "name": "ssloS_ICAP_Dec", "serviceType": "icap", "loadBalancing": { "devices": [ { "ip": "3.3.3.3", "port": "1344" } ] } } } } },

The configuration will be deployed to the target defined below:
"targetList": [ { "type": "DEVICE", "name": "my.bigip.internal" } ] }

After the HTTP POST, the BIG-IQ will respond with a transaction id. A sample of what it looks like is given below:
{ […] "id":"edc17b06-8d97-47e1-9a78-3d47d2db70a6", "status":"STARTED", […] }

You can check on the status of the deployment task by submitting a request as follows:
- HTTP GET method
- Authenticated with the custom authentication header X-F5-Auth-Token
- Sent to the BIG-IQ URI: GET /mgmt/cm/sslo/tasks/api/{{status_id}} HTTP/1.1
- With the Content-Type header set to application/json

Once the status of the task changes to FINISHED, the configuration is successfully completed. You can now check the F5 SSLO interface to make sure the new topology has been created. The BIG-IQ interface will show the new topology as depicted in the example below:

The new topology has been deployed to the BIG-IP automatically. You can connect to the BIG-IP to verify; the interface should look like the one depicted below:

Congratulations, you have now successfully deployed a fully functional topology that your users can start using.

Note that you can also use the BIG-IQ REST API to delete the items that were just created. This is done by sending HTTP DELETE requests to the different API endpoints for the topology, service, security profile, etc.
For example, for the configuration above, you would send HTTP DELETE requests to the following URIs:
- For the topology: /mgmt/cm/sslo/api/topology/sslo_NewTopology_Dec
- For the service chain: /mgmt/cm/sslo/api/service-chain/ssloSC_NewServiceChain_Dec
- For the ICAP service: /mgmt/cm/sslo/api/ssl/ssloT_NewSsl_Dec

All the requests listed above need to be sent to the BIG-IQ system's management IP address with the following two headers:
- content-type: application/json
- X-F5-Auth-Token: [value of the authentication token obtained during authentication]

A small cleanup sketch using these URIs is shown after the conclusion below.

Conclusion
BIG-IQ makes it easier to manage SSLO topologies thanks to its REST API. You can now make supported, standardized API calls to the BIG-IQ to create and modify topologies and deploy the changes directly to BIG-IP.
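The following is a hedged cleanup sketch based on the DELETE URIs listed above. The BIG-IQ address and token are placeholders.

# Hedged cleanup sketch: remove the objects created earlier via the listed URIs.
import requests

BIGIQ = "https://bigiq.example.com"
HEADERS = {"Content-Type": "application/json", "X-F5-Auth-Token": "value-from-authentication"}

# URIs taken from the list above; the topology is removed first since it
# references the other objects.
uris = [
    "/mgmt/cm/sslo/api/topology/sslo_NewTopology_Dec",
    "/mgmt/cm/sslo/api/service-chain/ssloSC_NewServiceChain_Dec",
    "/mgmt/cm/sslo/api/ssl/ssloT_NewSsl_Dec",
]

for uri in uris:
    resp = requests.delete(f"{BIGIQ}{uri}", headers=HEADERS, verify=False)
    print(uri, "->", resp.status_code)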
Using BIG-IQ to Determine Differences Between Advanced WAF Policies

Introduction
With BIG-IQ 8.0, F5 introduced a policy comparison feature. This allows you to bring up two web application firewall (WAF) policies and look at them side by side in a table format. The policies can be deployed on different BIG-IPs or BIG-IP pairs, on virtual servers deployed for different applications.

This feature also allows the administrator to export the report in PDF format for consumption outside of the BIG-IP/BIG-IQ. It is also very useful for determining policy drift in cases where a policy is used to spawn other policies that, in turn, are tuned for the applications they protect.

This article will take you through the process of comparing two policies and exporting the report to PDF.

Policy Comparison
The "Compare Policies" feature is available from the Configuration menu on BIG-IQ. Ensure that you log in to BIG-IQ with sufficient privileges to access and view the Application Security Policies and their contents. The figure below shows how to access the policy by:
- selecting the Configuration tab,
- highlighting the Security menu,
- expanding the item labelled Web Application Security,
- selecting the Policies.

Note that selecting a policy in the window above provides valuable information about the policy and related configured items. In the example below, asm-lab3 is selected and the interface shows an overview of the policy content and related items, such as the virtual server the policy is associated with.

Once on the Policies screen, you can select two policies and compare them as shown below:

The two policies' configurations now appear side by side for inspection.

The two-column view can be exported in PDF format for external consumption by clicking on the Export button.

You can also get the comparison in JSON format through BIG-IQ's REST API by following these main steps (a scripted version of these steps is sketched after the conclusion below):
1. Obtain a token to access the API, to be used with all subsequent requests.
2. List the policies and find the references to the policies of interest by issuing a GET request to the following endpoint: https://{{BIG-IQ}}/mgmt/cm/asm/working-config/policies
3. Create a policy compare task referencing the two policies of interest by posting a request to the following endpoint: https://{{BIG-IQ}}/mgmt/cm/asm/tasks/policy-diff with a payload referencing the two policies (to and from):
{ "fromPolicyReference": { "link": "" }, "toPolicyReference": { "link": "" } }
This results in the creation of a task – the task information is provided in the response in the form:
"selfLink": "https://localhost/mgmt/cm/asm/tasks/policy-diff/XXX"
where XXX is a unique identifier.
4. The task generated in the step above will take some time – you can check on its status by sending a request to: https://{{BIG-IQ}}/mgmt/cm/asm/tasks/policy-diff/XXX where XXX is the identifier gleaned from Step 3 above. In the payload of the response, there will be a link to the results of the task in the form:
"diffReportReference": { "link": "https://localhost/mgmt/cm/asm/reports/policy-diff/YYY" },
5. Once the task is done (e.g. "currentStep": "DONE"), the result of the comparison can be found in JSON format at: https://{{BIG-IQ}}/mgmt/cm/asm/reports/policy-diff/YYY where YYY is the id referenced in Step 4 above.

Conclusion
You have two options to get a comparison between two policies deployed anywhere in your security infrastructure, enhancing your team's visibility.
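The sketch below strings the five steps above together. The BIG-IQ address, token and policy names are placeholders, and it assumes the policy collection follows the standard iControl REST shape (an "items" list with "name" and "selfLink" properties); the diff-task fields are those shown in the steps.

# Hedged sketch: list policies, create a policy-diff task, poll it, fetch the report.
import time
import requests

BIGIQ = "https://bigiq.example.com"
HEADERS = {"X-F5-Auth-Token": "value-from-authentication", "Content-Type": "application/json"}

def get(url):
    resp = requests.get(url, headers=HEADERS, verify=False)
    resp.raise_for_status()
    return resp.json()

# Step 2 - find the self links of the two policies of interest by name
policies = get(f"{BIGIQ}/mgmt/cm/asm/working-config/policies")["items"]
links = {p["name"]: p["selfLink"] for p in policies}
from_link, to_link = links["asm-lab3"], links["asm-lab4"]   # hypothetical policy names

# Step 3 - create the diff task
task = requests.post(
    f"{BIGIQ}/mgmt/cm/asm/tasks/policy-diff",
    json={"fromPolicyReference": {"link": from_link}, "toPolicyReference": {"link": to_link}},
    headers=HEADERS, verify=False,
).json()

# Step 4 - poll the task until it is done
while task.get("currentStep") != "DONE":
    time.sleep(5)
    task = get(task["selfLink"].replace("https://localhost", BIGIQ))

# Step 5 - retrieve the diff report in JSON format
report_link = task["diffReportReference"]["link"].replace("https://localhost", BIGIQ)
print(get(report_link))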
High-level Pathways to Security Visibility

Editor's Note: The F5 Beacon capabilities referenced in this article hosted on F5 Cloud Services are planning a migration to a new SaaS Platform - check out the latest here.

(10 Minute Read)

Introduction
In previous articles, we identified the elements needed to gain visibility into adaptive application security postures. This entails observing the security configuration (static) and monitoring telemetry (dynamic) coming from different control points (ref. Visibility and Orchestration). We also suggested that security visibility should be integrated in the software development and/or deployment lifecycle as part of a shift-left strategy (ref. Shift-left Security Visibility).

Now, we'll focus on identifying a high-level pathway to achieve application security visibility. First, we need to identify the constraints that frame the effort. We will then identify concrete examples of insertion with F5 technologies. The end goal is to ensure that you keep close control over application security by embracing a holistic approach to visibility integrated in the software development/deployment lifecycle.

Constraints
Inserting security visibility in your enterprise is part of the shift-left strategy (ref. url-to-shift-left-sec-vis.). (https://www.f5.com/company/blog/beyond-visibility-is-operability) In order to be practical, we need to make sure that the pathway adheres to the following guidelines:
- Friction – The solution should not introduce any friction into the pipeline. For example, the tools used by the DevOps and SecOps teams (e.g. GitLab, Jenkins) should be the same, avoiding gated interdependencies where a change by one group is blocked/delayed by the other.
- Programmability – The security-centric solutions implemented during the journey need to be highly programmable. This will ensure that the tools adapt to the environment (e.g. services, microservices), the supporting infrastructure (e.g. cloud, containers), and the application.
- Automation – Enabling automation is key. This can be achieved by ensuring the tools deployed can be automatically configured without intervention as part of a pipeline. One way to ensure this is to leverage declarative application programming interfaces (APIs) (link-to-f5-declarative-interface).
- Scalability – Applications can span infrastructure that is infinitely scalable, like public cloud, across availability zones and geographies. This requires that any solution deployed to secure/protect applications and workloads be able to scale. To scale horizontally, the solution can be implemented across multiple workloads in multiple instances. To scale vertically, the solution should be able to handle increasing amounts of traffic in a single or a few instances.
- Transparency – From a performance and functionality standpoint, the solutions inserted to gain security visibility cannot impact the application. For example, when a proxy is inserted, it cannot add latency between the client and the workload. It also cannot affect the functionality provided by the workload.
- Resiliency – Inserting a solution to support your applications' security and visibility should be resilient. Any failure of the process providing visibility should be flagged and should not affect the application's/workload's performance or availability.
Visibility Insertion
All F5 solutions can be inserted in the application delivery infrastructure to provide security visibility. This comes in the form of security-aware proxies. The BIG-IP and NGINX Plus platforms are particularly well suited for insertion in infrastructure requiring inline, low-latency and powerful application security and visibility. Deploying F5 solutions can easily be done while observing all the constraints mentioned above.

Friction
Thanks to the available form factors and the programmatic templates provided, implementing BIG-IP or NGINX Plus in the infrastructure is easily achieved using the appropriate templates. For example, when working with AWS, a BIG-IP can easily be deployed using a CloudFormation Template (CFT) found here. From the enterprise Git (GitLab, GitHub, Bitbucket, etc.) repository, BIG-IP can be deployed directly by cloning/forking the F5 repository and integrating it with the pipeline (ref. Clouddocs Article).

Programmability
The BIG-IP Advanced Web Application Firewall (Advanced WAF) configuration is highly programmable. The advantage is that the configuration can be stored and/or modified easily outside of the BIG-IP. For example, a base policy aimed at protecting against the OWASP Top 10 risks can look like the following:
{ "policy": { "name": "Complete_OWASP_Top_Ten", "description": "A generic, OWASP Top 10 protection items v1.0", "template": { "name": "POLICY_TEMPLATE_RAPID_DEPLOYMENT" }, "fullPath": "/Common/Complete_OWASP_Top_Ten", "enforcementMode":"transparent", "signature-settings":{ "signatureStaging": false, "minimumAccuracyForAutoAddedSignatures": "high" }, "protocolIndependent": true, "caseInsensitive": true, "general": { "trustXff": true }, "data-guard": { "enabled": true }, "policy-builder-server-technologies": { "enableServerTechnologiesDetection": true }, "blocking-settings": { "violations": [ { "alarm": true, "block": true, "description": "ASM Cookie Hijacking", "learn": false, "name": "VIOL_ASM_COOKIE_HIJACKING" }, { "alarm": true, "block": true, "description": "Access from disallowed User/Session/IP/Device ID", "name": "VIOL_SESSION_AWARENESS" }, { "alarm": true, "block": true, "description": "Modified ASM cookie", "learn": true, "name": "VIOL_ASM_COOKIE_MODIFIED" }, { "alarm": true, "block": true, "description": "XML data does not comply with format settings", "learn": true, "name": "VIOL_XML_FORMAT" }, { "name": "VIOL_FILETYPE", "alarm": true, "block": true, "learn": true } ], "evasions": [ { "description": "Bad unescape", "enabled": true, "learn": true }, { "description": "Apache whitespace", "enabled": true, "learn": true }, { "description": "Bare byte decoding", "enabled": true, "learn": true }, { "description": "IIS Unicode codepoints", "enabled": true, "learn": true }, { "description": "IIS backslashes", "enabled": true, "learn": true }, { "description": "%u decoding", "enabled": true, "learn": true }, { "description": "Multiple decoding", "enabled": true, "learn": true, "maxDecodingPasses": 3 }, { "description": "Directory traversals", "enabled": true, "learn": true } ] }, "xml-profiles": [ { "name": "Default", "defenseAttributes": { "allowDTDs": false, "allowExternalReferences": false } } ], "session-tracking": { "sessionTrackingConfiguration": { "enableTrackingSessionHijackingByDeviceId": true } } } }

In the example above, aspects of a security policy like evasion techniques or cookie consumption settings can easily be programmed in the configuration and handled like any other application code for versioning, editing or storing. The standard JSON format can be managed in a Git repository for use in any environment (a small validation sketch follows below). Documentation for JSON representations of WAF policies can be found here.
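As a minimal policy-as-code illustration, a sanity check like the one below could run in the pipeline before a policy file is committed or deployed. The file name is a placeholder and the checked fields are taken from the example policy above; the specific checks are assumptions about what a team might want to enforce, not a prescribed standard.

# Hedged sketch: sanity-check a declarative WAF policy file stored in Git.
import json
import sys

def check_policy(path):
    with open(path) as f:
        doc = json.load(f)               # fails on malformed JSON
    policy = doc.get("policy", {})
    problems = []
    if not policy.get("name"):
        problems.append("policy has no name")
    if policy.get("enforcementMode") != "blocking":
        # the example above ships in transparent mode; flag it before production
        problems.append(f"enforcementMode is {policy.get('enforcementMode')!r}, not 'blocking'")
    if policy.get("signature-settings", {}).get("signatureStaging") is not False:
        problems.append("signature staging is not explicitly disabled")
    return problems

if __name__ == "__main__":
    issues = check_policy(sys.argv[1] if len(sys.argv) > 1 else "owasp_top10_policy.json")
    for issue in issues:
        print("WARNING:", issue)
    sys.exit(1 if issues else 0)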
This is also true for all F5 security platforms, including NGINX App Protect and Essential App Protect (ref. NGINX Configuration Guide and EAP API Users Guide).

Similarly, configuring BIG-IP to forward security telemetry to the appropriate facilities can be achieved with the Telemetry Streaming framework. For example, in order to configure BIG-IP to send telemetry data to a centralized visibility tool (F5 Beacon or ELK, for example), it can be configured with a declaration like the following (a sketch for pushing this declaration follows the conclusion below):

{
  "class": "Telemetry",
  "controls": {
    "class": "Controls",
    "logLevel": "debug"
  },
  "TS_Poller": {
    "class": "Telemetry_System_Poller",
    "interval": 60
  },
  "TS_Listener": {
    "class": "Telemetry_Listener",
    "port": 6514
  },
  "TS_Consumer": {
    "class": "Telemetry_Consumer",
    "type": "Generic_HTTP",
    "host": "my.visibility-host.url",
    "protocol": "http",
    "port": 8888,
    "path": "/",
    "method": "POST",
    "headers": [
      {
        "name": "content-type",
        "value": "application/json"
      }
    ]
  }
}

The above declaration identifies the host to which the BIG-IP will send telemetry – in this case with debug-level logging enabled.

Scalability, Transparency and Resiliency
F5 provides highly scalable, resilient and transparent solutions that can be inserted in any infrastructure to secure and provide visibility into web applications. Discussing these aspects of BIG-IP, NGINX Plus, or NGINX App Protect is beyond the scope of this article. For more information on scalability and high availability, you can refer to Performance of NGINX and NGINX Plus, NGINX App Protect Application Security Testing or the BIG-IP Datasheet.

Conclusion
This article is meant to offer a path to visibility using F5 technology by inserting BIG-IP and configuring it to provide application security and generate telemetry, giving you insight into the application's security posture. The aim is for you to build a blueprint to systematically watch over your valuable adaptive applications and workloads across your infrastructure.
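The following is a hedged sketch of pushing the Telemetry Streaming declaration above to a BIG-IP. It assumes the Telemetry Streaming extension is installed on the BIG-IP; the address, credentials and file name are placeholders.

# Hedged sketch: push the Telemetry Streaming declaration shown above.
import json
import requests

BIGIP = "https://192.0.2.10"           # hypothetical BIG-IP management address
AUTH = ("admin", "admin-password")     # hypothetical credentials

with open("ts_declaration.json") as f:   # the declaration shown above
    declaration = json.load(f)

resp = requests.post(
    f"{BIGIP}/mgmt/shared/telemetry/declare",   # Telemetry Streaming declare endpoint
    json=declaration,
    auth=AUTH,
    verify=False,  # lab only
)
resp.raise_for_status()
print(json.dumps(resp.json(), indent=2))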
Proper application deployment assumes that all stages are automated to eliminate human factor and make all operations as quick and smooth as possible. NGINX Controller is an application. Mission critical application which requires 100% automation. It should start from the very bottom at controller image creation and cover all other NGINX Controller operations. This article presents a small aspect of NGINX Controller automation process - automated creation of NGINX Controller image to use in public cloud. Production application deployment assumes that administrator always know what exactly code is currently running. In case of commercial software it is often impossible to take specific code version directly from a repo and deploy it on a server because software usually distributed as a package. However a package may become this immutable asset to build image on top. The concept behind this project is to take NGINX Controller package and use CICD pipeline to describe automation steps as a code. Such approach allows to statically define all procedures and parameters for NGINX Controller installation and image. Diagram below explains what are the building workflow components and how do they interact with each other. As you can see everything starts from CICD platform which stores image automation pipeline code. When pipeline gets triggered with code commit it uses Hashicorp Packer tool to create temporary AWS EC2 instance and installs NGINX Controller software there then convert it to an AMI image. Writing a pipeline is always challenging. However there is a gitlab project which already has the pipeline defined (link). In case you need to build your own image just clone template, set variables and let the pipeline to do the rest. Example below gives step be step instructions. Step 1. Import the template repo to a brand new Gitlab project Step 2. Set variables. Secrets go as CICD environment variables All other go to ".env". Directly to the code Step 3. Once changes submitted the pipeline starts and builds an image using Hashicorp Packer tool. In case pipeline succeeds you can find NGINX Controller image in the list of AWS AMIs. Such approach allows to formalize NGINX Controller image creation to make it immutable piece of infrastructure to eliminate any inconsistencies. Each build gets tagged with commit hash so you know exactly how controller installation is setup. This project doesn't stop on AWS implementation. Other public clouds are in roadmap. If you are interested in expanding project to other clouds please contact me (m.fedorov@f5.com) and join development. Good luck!236Views1like0Comments