Telemetry Streaming
Deploying BIG-IP Telemetry Streaming with Azure Sentinel as its consumer.
AZURE SENTINEL and BIG-IP ... with Telemetry Streaming!

This work was completed as a collaboration between Remo Mattei (r.mattei@f5.com) and Bill Wester (b.wester@f5.com); feel free to email us if you have questions.

One of the things that I have discovered recently is how neat it is to be able to leverage Azure's new Sentinel to receive and display telemetry data from F5's BIG-IP devices. The devices don't even have to be in Azure; you could have dedicated hardware BIG-IPs and still send via Telemetry Streaming to Sentinel as your destination for statistics and logs. Let us explore a bit more how to get all of the moving pieces together into a single cohesive implementation.

Telemetry Streaming is a way for you to forward events and statistics from the BIG-IP system to your preferred data consumer and visualization application. You can do all of this by POSTing a single JSON declaration to a declarative REST API endpoint. Telemetry Streaming uses a declarative model, meaning you provide a JSON declaration rather than a set of imperative commands. More info can be found here: https://clouddocs.f5.com/products/extensions/f5-telemetry-streaming/latest/userguide/about-telemetry.html

BIG-IP allows you to send logs to several external providers. Splunk, a well-known one, is among the most widely used. However, the new Azure Sentinel, a cloud solution, is something that many customers can take advantage of. This section will help in understanding how to set up BIG-IP to get the logs to Azure Sentinel.

Setup BIG-IP

First of all, this is broken into two parts. The first covers the logs for BIG-IP System Metrics, like what OS and which modules are installed, etc. The second is about the ASM module. The two have a few things in common: they use the TS RPM file, which is added to the BIG-IP, and the declaration, which tells the BIG-IP where to send the stream of data.

To send data related to BIG-IP System Metrics, you must have AVR provisioned on the device. ASM is not required, but we use it here as an example of how to enable another module. Here is a screenshot from Azure which shows the required modules. One more important thing: ASM will also need AFM enabled, otherwise you will not get logs in Azure.

(Screenshot: ASM)

Once the required modules are enabled, it will show:

(Screenshot: System Metrics)

Common components that you must install for this to work

First you need Telemetry Streaming. The TS RPM can be found on GitHub: https://github.com/F5Networks/f5-telemetry-streaming/releases/

You can use Visual Studio Code to install the RPM, or your favorite way (a curl-based alternative is sketched just below). Here are some screenshots from VS Code, using the F5 plugin. NOTE: in order to use VS Code to push AS3, DO, etc., you must install the F5 plugin.
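If you would rather not use VS Code at all, the TS RPM can also be uploaded and installed over iControl REST. The following is a minimal sketch: the management address, credentials, and RPM file name are placeholders for your own values, and it assumes the RPM is small enough to upload in a single request (larger files may need to be sent in chunks).

```bash
#!/usr/bin/env bash
# Sketch: install the Telemetry Streaming RPM over iControl REST.
# BIGIP, CREDS and RPM are placeholders -- adjust for your environment.
BIGIP=https://<big-ip-mgmt>
CREDS='admin:<password>'
RPM=f5-telemetry-1.12.0-3.noarch.rpm   # example file name; use the release you downloaded
LEN=$(wc -c < "$RPM")

# 1. Upload the RPM to /var/config/rest/downloads (single-chunk upload;
#    large files may need to be split across several Content-Range requests).
curl -sku "$CREDS" "$BIGIP/mgmt/shared/file-transfer/uploads/$RPM" \
  -H "Content-Type: application/octet-stream" \
  -H "Content-Range: 0-$((LEN - 1))/$LEN" \
  --data-binary @"$RPM"

# 2. Ask the package-management task runner to install it.
curl -sku "$CREDS" "$BIGIP/mgmt/shared/iapp/package-management-tasks" \
  -H "Content-Type: application/json" \
  -d "{\"operation\":\"INSTALL\",\"packageFilePath\":\"/var/config/rest/downloads/$RPM\"}"
```

Once the install task completes, Telemetry Streaming should answer on /mgmt/shared/telemetry/info.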
In VS Code, open the command palette - on a Mac it's Command+Shift+P (here you can search for RPM by just typing it in the box). Select AS3, and make sure to install both AS3 and TS. Select the version (the latest is probably best here).

The Telemetry Streaming declaration looks like this:

```json
{
  "class": "Telemetry",
  "My_Listener": {
    "class": "Telemetry_Listener",
    "port": 6514
  },
  "Poller": {
    "class": "Telemetry_System_Poller",
    "interval": 60,
    "enable": true,
    "trace": false,
    "allowSelfSignedCert": false,
    "host": "localhost",
    "port": 8100,
    "protocol": "http",
    "actions": [
      {
        "enable": true,
        "includeData": {},
        "locations": {
          "system": true,
          "virtualServers": true,
          "httpProfiles": true,
          "clientSslProfiles": true,
          "serverSslProfiles": true
        }
      }
    ]
  },
  "Pull_Consumer": {
    "class": "Telemetry_Pull_Consumer",
    "type": "default",
    "systemPoller": [
      "Poller"
    ]
  },
  "Azure_Consumer": {
    "class": "Telemetry_Consumer",
    "type": "Azure_Log_Analytics",
    "workspaceId": "workspaceID",
    "passphrase": {
      "cipherText": "primkey"
    }
  },
  "schemaVersion": "1.12.0"
}
```

NOTE: You will need to get the workspaceId and the primary key. You can use the Azure CLI for that:

```
az monitor log-analytics workspace list --out table

CustomerId                            Location       Name                                                        ProvisioningState    PublicNetworkAccessForIngestion    PublicNetworkAccessForQuery    ResourceGroup              RetentionInDays
------------------------------------  -------------  ----------------------------------------------------------  -------------------  ---------------------------------  -----------------------------  -------------------------  -----------------
a05d4bfb-27c8-49a6-96e2-351d2dc78c61  eastus         adrianLA                                                    Succeeded            Enabled                            Enabled                        adrian_rg_01               7
63be43ed-b3f5-4e9f-bc92-226bb3393d11  eastus         DefaultWorkspace-77c6ebef-d849-4527-a355-742d8d7d3fdc-EUS   Succeeded            Enabled                            Enabled                        defaultresourcegroup-eus   30
2ccbd35a-dfdf-4a5e-ab5f-1d5314f52e4b  southeastasia  DefaultWorkspace-77c6ebef-d849-4527-a355-742d8d7d3fdc-SEA   Succeeded            Enabled                            Enabled                        defaultresourcegroup-sea   30
9436f742-069a-4e29-aac0-e1258f7b1f87  westus2        calalangakslog                                              Succeeded            Enabled                            Enabled                        calalang-rg                30
ac071b51-f0c6-43b6-8bef-16b9197fde0f  westus2        edgar-log                                                   Succeeded            Enabled                            Enabled                        defaultresourcegroup-eus   31
555ae8d5-75bc-4058-becf-df510c09f8d3  westus2        DefaultWorkspace-77c6ebef-d849-4527-a355-742d8d7d3fdc-WUS2  Succeeded            Enabled                            Enabled                        defaultresourcegroup-wus2  30
f633bdb1-d560-43cd-a664-cc7a93ed8781  westus2        edgar-log-analytics                                         Succeeded            Enabled                            Enabled                        edgar-rg                   30
9334eb7c-16fc-4db9-a84f-5824a7177ccb  centralus      DefaultWorkspace-77c6ebef-d849-4527-a355-742d8d7d3fdc-CUS   Succeeded            Enabled                            Enabled                        defaultresourcegroup-cus   30
091c2cf3-853d-4297-9001-41d2109c28ec  westus         DefaultWorkspace-77c6ebef-d849-4527-a355-742d8d7d3fdc-WUS   Succeeded            Enabled                            Enabled                        defaultresourcegroup-wus   30
52471748-d9c7-46ba-9f9f-72ed8e92a201  westus         remo-analytics                                              Succeeded            Enabled                            Enabled                        remo-telemetry             30
bc8e90ca-f59c-4fbf-a28b-213fe1cfcfda  westus         wester-log                                                  Succeeded            Enabled                            Enabled                        wester_rg                  30
```

Here you can see the name of the resource group. Then run the following command:

```
az monitor log-analytics workspace get-shared-keys --resource-group wester_rg --workspace-name wester-log
```

which will print out the primary key. The workspaceId is the CustomerId from the table above.

To install this declaration you can use Postman, curl, or Visual Studio Code; we used Visual Studio Code. Copy the text into a new VS Code tab, make sure it's in JSON format, and then use the command palette to post it. Verify by using the TS version shown at the bottom of VS Code; it will execute a GET against the BIG-IP that is connected.
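If you prefer curl over Postman or VS Code, posting and verifying the declaration looks roughly like the sketch below. The management address, credentials, and the ts_sentinel.json file name are placeholders for your own values; the /mgmt/shared/telemetry endpoints are the standard Telemetry Streaming REST endpoints.

```bash
# Sketch: post the Telemetry Streaming declaration with curl.
# <big-ip-mgmt>, the credentials and ts_sentinel.json are placeholders.
BIGIP=https://<big-ip-mgmt>
CREDS='admin:<password>'

# Post the declaration (the JSON above, saved locally as ts_sentinel.json).
curl -sku "$CREDS" -H "Content-Type: application/json" \
  -X POST -d @ts_sentinel.json "$BIGIP/mgmt/shared/telemetry/declare"

# Confirm the installed TS version and the active declaration.
curl -sku "$CREDS" "$BIGIP/mgmt/shared/telemetry/info"
curl -sku "$CREDS" "$BIGIP/mgmt/shared/telemetry/declare"
```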
ASM

In order to use ASM you will need to configure a VIP with the IP 255.255.255.254 and port 6514, as well as an iRule. This can be done with an AS3 declaration or TMSH.

Sample AS3 declaration:

```json
{
  "class": "ADC",
  "schemaVersion": "3.10.0",
  "remark": "Example depicting creation of BIG-IP module log profiles",
  "Common": {
    "Shared": {
      "class": "Application",
      "template": "shared",
      "telemetry_local_rule": {
        "remark": "Only required when TS is a local listener",
        "class": "iRule",
        "iRule": "when CLIENT_ACCEPTED {\n node 127.0.0.1 6514\n}"
      },
      "telemetry_local": {
        "remark": "Only required when TS is a local listener",
        "class": "Service_TCP",
        "virtualAddresses": [
          "255.255.255.254"
        ],
        "virtualPort": 6514,
        "iRules": [
          "telemetry_local_rule"
        ]
      },
      "telemetry": {
        "class": "Pool",
        "members": [
          {
            "enable": true,
            "serverAddresses": [
              "255.255.255.254"
            ],
            "servicePort": 6514
          }
        ],
        "monitors": [
          {
            "bigip": "/Common/tcp"
          }
        ]
      },
      "telemetry_hsl": {
        "class": "Log_Destination",
        "type": "remote-high-speed-log",
        "protocol": "tcp",
        "pool": {
          "use": "telemetry"
        }
      },
      "telemetry_formatted": {
        "class": "Log_Destination",
        "type": "splunk",
        "forwardTo": {
          "use": "telemetry_hsl"
        }
      },
      "telemetry_publisher": {
        "class": "Log_Publisher",
        "destinations": [
          {
            "use": "telemetry_formatted"
          }
        ]
      },
      "telemetry_traffic_log_profile": {
        "class": "Traffic_Log_Profile",
        "requestSettings": {
          "requestEnabled": true,
          "requestProtocol": "mds-tcp",
          "requestPool": {
            "use": "telemetry"
          },
          "requestTemplate": "event_source=\"request_logging\",hostname=\"$BIGIP_HOSTNAME\",client_ip=\"$CLIENT_IP\",server_ip=\"$SERVER_IP\",http_method=\"$HTTP_METHOD\",http_uri=\"$HTTP_URI\",virtual_name=\"$VIRTUAL_NAME\",event_timestamp=\"$DATE_HTTP\""
        }
      },
      "telemetry_security_log_profile": {
        "class": "Security_Log_Profile",
        "application": {
          "localStorage": false,
          "remoteStorage": "splunk",
          "protocol": "tcp",
          "servers": [
            {
              "address": "255.255.255.254",
              "port": "6514"
            }
          ],
          "storageFilter": {
            "requestType": "illegal-including-staged-signatures"
          }
        },
        "network": {
          "publisher": {
            "use": "telemetry_publisher"
          },
          "logRuleMatchAccepts": false,
          "logRuleMatchRejects": true,
          "logRuleMatchDrops": true,
          "logIpErrors": true,
          "logTcpErrors": true,
          "logTcpEvents": true
        }
      }
    }
  }
}
```

To post an AS3 declaration like the one above, use Visual Studio Code: from the tab where you have pasted the code, open the command menu and select "F5 Post an AS3 Declaration".

(Screenshot: output from the declaration above)

(Screenshot: the iRule used)

Assign the Telemetry Policy to the Virtual Service by selecting the option in the advanced menu. Once you have the modules installed and have configured the appropriate settings, like above, you will see data coming into Azure Sentinel. Here is an example:

(Screenshot: ASM)

System Metrics

For System Metrics to work you will need to have AVR installed; you do not need an AS3 declaration or an iRule. Once you have AVR installed and have pushed the declaration to the BIG-IP, you will need to execute the following commands on your BIG-IP:

```
tmsh modify analytics global-settings { offbox-protocol tcp offbox-tcp-addresses add { 127.0.0.1 } offbox-tcp-port 6514 use-offbox enabled }
tmsh save /sys config
```
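System Metrics depends on AVR being provisioned, as noted above. If AVR is not yet provisioned on your device, here is a quick tmsh sketch to check and enable it; note that provisioning changes restart services, so plan for a maintenance window.

```bash
# Sketch: check module provisioning and, if needed, provision AVR.
tmsh show /sys provision

# Provision AVR at the "nominal" level; this restarts services and can
# briefly disrupt traffic.
tmsh modify /sys provision avr level nominal
tmsh save /sys config
```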
Check the logs on your BIG-IP:

```
less /var/log/restnoded/restnoded.log
```

You will see something like:

```
Fri, 18 Sep 2020 06:36:04 GMT - info: [telemetry] Starting system poller Poller::Poller. Interval = 60 sec.
Fri, 18 Sep 2020 06:36:04 GMT - info: [telemetry] 1 consumer plug-in(s) loaded
```

Next you will need to go into the Azure Portal, where you will find a nice pre-defined Sentinel workbook to view and start working with. Select the template, fill out the correct workspace from the dropdown, then select the correct hostname from the dropdown, and you will start to see data showing up.

(Screenshot: Azure Sentinel displaying the workbook)

As you enable more modules, they will show up in Azure Sentinel. You can also add, modify, or enhance the workbook to show more of the data in Sentinel sent from the BIG-IP.

Remo and I hope you found this article helpful and enjoy using BIG-IPs with Sentinel!
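To confirm from the command line that telemetry is actually landing in the workspace, you can also query Log Analytics directly. This is a sketch that assumes the default custom-log table names created by the Azure_Log_Analytics consumer (for example F5Telemetry_system_CL); check your workspace if the tables are named differently.

```bash
# Sketch: confirm BIG-IP telemetry is arriving in the Log Analytics workspace.
# WORKSPACE_ID is the CustomerId shown in the workspace list earlier.
WORKSPACE_ID=<workspace-id>

# F5Telemetry_system_CL is the default table name used by the
# Azure_Log_Analytics consumer -- adjust if your workspace differs.
az monitor log-analytics query \
  --workspace "$WORKSPACE_ID" \
  --analytics-query "F5Telemetry_system_CL | sort by TimeGenerated desc | take 5" \
  --out table
```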
High-level Pathways to Security Visibility

Editor's Note: The F5 Beacon capabilities referenced in this article, hosted on F5 Cloud Services, are planning a migration to a new SaaS platform - check out the latest here. (10 Minute Read)

Introduction

In previous articles we identified the elements needed to gain visibility into adaptive application security postures. This entails observing the security configuration (static) and monitoring telemetry (dynamic) coming from different control points (ref. Visibility and Orchestration). We also suggested that security visibility should be integrated into the software development and/or deployment lifecycle as part of a shift-left strategy (ref. Shift-left Security Visibility).

Now we'll focus on identifying a high-level pathway to achieve application security visibility. First, we need to identify the constraints that frame the effort. We will then identify concrete examples of insertion with F5 technologies. The end goal is to ensure that you keep close control over application security by embracing a holistic approach to visibility, integrated in the software development/deployment lifecycle.

Constraints

Inserting security visibility into your enterprise is part of the shift-left strategy (ref. Shift-left Security Visibility; see also https://www.f5.com/company/blog/beyond-visibility-is-operability). In order to be practical, we need to make sure that the pathway adheres to the following guidelines:

Friction – The solution should not introduce any friction into the pipeline. For example, the tools used by the DevOps and SecOps teams (e.g. GitLab, Jenkins) should be the same, avoiding gated interdependencies where a change by one group is blocked/delayed by the other.

Programmability – The security-centric solutions implemented during the journey need to be highly programmable. This will ensure that the tools adapt to the environment (e.g. services, micro-services), the supporting infrastructure (e.g. cloud, containers), and the application.

Automation – Enabling automation is key. This can be achieved by ensuring the tools deployed can be automatically configured without intervention as part of a pipeline. One way to ensure this is to leverage declarative application programming interfaces (APIs).

Scalability – Applications can span infrastructure that is infinitely scalable, like public cloud, across availability zones and geographies. This requires that any solution deployed to secure/protect applications and workloads be able to scale. To scale horizontally, the solution can be implemented across multiple workloads in multiple instances. To scale vertically, the solution should be able to handle increasing amounts of traffic in single/few instances.

Transparency – From a performance and functionality standpoint, the solutions inserted to gain security visibility cannot impact the application. For example, when a proxy is inserted, it cannot add latency between the client and the workload. It also cannot affect the functionality provided by the workload.

Resiliency – Inserting a solution to support your application's security and visibility should be resilient. Any failure of the process providing visibility should be flagged and must not affect the application's/workload's performance or availability.
Visibility Insertion

All F5 solutions can be inserted in the application delivery infrastructure to provide security visibility. This comes in the form of security-aware proxies. The BIG-IP and NGINX Plus platforms are particularly well suited for insertion in infrastructure requiring inline, low-latency, and powerful application security and visibility. Deploying F5 solutions can easily be done while observing all the constraints mentioned above.

Friction

Thanks to the available form factors and programmatic templates provided, implementing BIG-IP or NGINX Plus in the infrastructure is easily achieved using appropriate templates. For example, when working with AWS, a BIG-IP can easily be deployed using a CloudFormation Template (CFT) found here. From the enterprise Git (GitLab, GitHub, Bitbucket, etc.) repository, BIG-IP can be deployed directly by cloning/forking the F5 repository and integrating with the pipeline (ref. Clouddocs Article).

Programmability

The BIG-IP Advanced Web Application Firewall (Advanced WAF) configuration is highly programmable. The advantage is that the configuration can be stored and/or modified easily outside of the BIG-IP. For example, a base policy aimed at protecting against the OWASP Top 10 risks can look like the following:

```json
{
  "policy": {
    "name": "Complete_OWASP_Top_Ten",
    "description": "A generic, OWASP Top 10 protection items v1.0",
    "template": {
      "name": "POLICY_TEMPLATE_RAPID_DEPLOYMENT"
    },
    "fullPath": "/Common/Complete_OWASP_Top_Ten",
    "enforcementMode": "transparent",
    "signature-settings": {
      "signatureStaging": false,
      "minimumAccuracyForAutoAddedSignatures": "high"
    },
    "protocolIndependent": true,
    "caseInsensitive": true,
    "general": {
      "trustXff": true
    },
    "data-guard": {
      "enabled": true
    },
    "policy-builder-server-technologies": {
      "enableServerTechnologiesDetection": true
    },
    "blocking-settings": {
      "violations": [
        {
          "alarm": true,
          "block": true,
          "description": "ASM Cookie Hijacking",
          "learn": false,
          "name": "VIOL_ASM_COOKIE_HIJACKING"
        },
        {
          "alarm": true,
          "block": true,
          "description": "Access from disallowed User/Session/IP/Device ID",
          "name": "VIOL_SESSION_AWARENESS"
        },
        {
          "alarm": true,
          "block": true,
          "description": "Modified ASM cookie",
          "learn": true,
          "name": "VIOL_ASM_COOKIE_MODIFIED"
        },
        {
          "alarm": true,
          "block": true,
          "description": "XML data does not comply with format settings",
          "learn": true,
          "name": "VIOL_XML_FORMAT"
        },
        {
          "name": "VIOL_FILETYPE",
          "alarm": true,
          "block": true,
          "learn": true
        }
      ],
      "evasions": [
        {
          "description": "Bad unescape",
          "enabled": true,
          "learn": true
        },
        {
          "description": "Apache whitespace",
          "enabled": true,
          "learn": true
        },
        {
          "description": "Bare byte decoding",
          "enabled": true,
          "learn": true
        },
        {
          "description": "IIS Unicode codepoints",
          "enabled": true,
          "learn": true
        },
        {
          "description": "IIS backslashes",
          "enabled": true,
          "learn": true
        },
        {
          "description": "%u decoding",
          "enabled": true,
          "learn": true
        },
        {
          "description": "Multiple decoding",
          "enabled": true,
          "learn": true,
          "maxDecodingPasses": 3
        },
        {
          "description": "Directory traversals",
          "enabled": true,
          "learn": true
        }
      ]
    },
    "xml-profiles": [
      {
        "name": "Default",
        "defenseAttributes": {
          "allowDTDs": false,
          "allowExternalReferences": false
        }
      }
    ],
    "session-tracking": {
      "sessionTrackingConfiguration": {
        "enableTrackingSessionHijackingByDeviceId": true
      }
    }
  }
}
```

In the example above, aspects of a security policy like evasion techniques or cookie consumption settings can easily be programmed in the configuration and handled like any other application code for versioning, editing, or storing. The standard JSON format can be managed in a Git repository for use in any environment. Documentation for JSON representations of WAF policies can be found here. This is also true for all F5 security platforms, including NGINX App Protect and Essential App Protect (ref. NGINX Configuration Guide and EAP API Users Guide).
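Because the policy is plain JSON, it really can be handled like any other application code. Below is a small, hypothetical pipeline step that pulls the policy from a Git repository and sanity-checks it before it is pushed toward a BIG-IP; the repository URL, file path, and the choice of git and jq for linting are illustrative assumptions, not part of any F5 tooling.

```bash
#!/usr/bin/env bash
# Hypothetical CI step: fetch the WAF policy from version control and lint it.
# The repository URL and file path are placeholders for illustration only.
set -euo pipefail

git clone --depth 1 https://git.example.com/security/waf-policies.git
POLICY=waf-policies/owasp/Complete_OWASP_Top_Ten.json

# Fail the pipeline early if the file is not valid JSON.
jq empty "$POLICY"

# Spot-check a couple of fields we expect every policy to declare.
jq -e '.policy.name and .policy.enforcementMode' "$POLICY" > /dev/null
echo "Policy $(jq -r '.policy.name' "$POLICY") passed basic checks."
```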
Similarly, configuring BIG-IP to forward security information telemetry to appropriate facilities can be achieved with the Telemetry Streaming framework. For example, in order to configure BIG-IP to send telemetry data to a centralized visibility tool (F5 Beacon or ELK, for example), it can be configured with a declaration like:

```json
{
  "class": "Telemetry",
  "controls": {
    "class": "Controls",
    "logLevel": "debug"
  },
  "TS_Poller": {
    "class": "Telemetry_System_Poller",
    "interval": 60
  },
  "TS_Listener": {
    "class": "Telemetry_Listener",
    "port": 6514
  },
  "TS_Consumer": {
    "class": "Telemetry_Consumer",
    "type": "Generic_HTTP",
    "host": "my.visibility-host.url",
    "protocol": "http",
    "port": 8888,
    "path": "/",
    "method": "POST",
    "headers": [
      {
        "name": "content-type",
        "value": "application/json"
      }
    ]
  }
}
```

The above declaration identifies the host where the BIG-IP will send telemetry; the controls block also turns on debug-level logging for Telemetry Streaming itself.

Scalability, Transparency and Resiliency

F5 provides highly scalable, resilient, and transparent solutions that can be inserted in any infrastructure to secure and provide visibility into web applications. Discussing these aspects of BIG-IP, NGINX Plus, or NGINX App Protect is beyond the scope of this article. For more information on scalability and high availability, refer to Performance of NGINX and NGINX Plus, NGINX App Protect Application Security Testing, or the BIG-IP Datasheet.

Conclusion

This article is meant to offer a path to visibility using F5 technology by inserting BIG-IP and configuring it to provide application security and generate telemetry, giving you visibility into the application's security posture. The aim is for you to build a blueprint to systematically watch over your valuable adaptive applications and workloads across your infrastructure.
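Before pointing the Generic_HTTP consumer at a production visibility tool, it can be useful to watch the raw telemetry the BIG-IP emits. A throwaway listener like the sketch below works for a quick look; the port matches the declaration above, but netcat flag syntax varies between variants (GNU, BSD, nmap's ncat), and since netcat never returns a proper HTTP response this is only for inspecting the payload, not a working consumer.

```bash
# Sketch: stand in for the Generic_HTTP consumer to eyeball raw telemetry.
# Flag syntax differs between netcat variants -- adjust as needed.
nc -lk 8888

# Or, with nmap's ncat:
# ncat --listen --keep-open 8888
```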
How I did it - "Visualizing Metrics with F5 Telemetry Streaming and Datadog"

In some recent installments of the "How I Did It" series, we've taken a look at how F5 Telemetry Streaming (TS) integrates with third-party analytics providers like Splunk and Elastic. In this article we continue our analytics vendor journey with the latest supported vendor, Datadog.

Datadog is an analytics and monitoring platform offered in a Software-as-a-Service (SaaS) model. The platform provides centralized visibility and monitoring of applications and infrastructure assets. While Datadog typically relies upon its various agents to capture logs and transmit them back to the platform, there is an option for sending telemetry over HTTP. This is where F5's TS comes into play.

For the remainder of this article, I'll provide a brief overview of the services required to integrate the BIG-IP with Datadog. Rather than including step-by-step instructions, I've included a video walkthrough of the configuration process. After all, seeing is believing!

Application Services 3 Extension (AS3)

There are several resources (logging profiles, log publishers, iRules, etc.) that must be configured on the BIG-IP to enable remote logging. I utilized AS3 to deploy and manage these resources. I used Postman to apply a single REST API declaration to the AS3 endpoint.

Telemetry Streaming (TS)

F5's Telemetry Streaming (TS) service enables the BIG-IP to stream telemetry data to a variety of third-party analytics providers. Aside from the aforementioned resources, configuring TS to stream to a consumer (Datadog in this instance) is simply a REST call away. Just as I did for AS3, I utilized Postman to post a single declaration to the BIG-IP.

Datadog

Preparing my Datadog environment to receive telemetry from BIG-IPs via Telemetry Streaming is extremely simple. I will need to generate an API key, which will in turn be used by a Telemetry Streaming extension to authenticate and access the Datadog platform. Additionally, the TS consumer provides additional options (Datadog region, compression settings, etc.) to be configured on the TS side.

Dashboard

Once my Datadog environment starts to ingest telemetry data from my BIG-IP, I'll visualize the data using a custom dashboard. The dashboard (community-supported and may not be suitable for framing) reports various relevant BIG-IP performance metrics and information.

(Screenshot: F5 BIG-IP Performance Metrics dashboard)

Check it Out

Rather than walk you through the entire configuration, how about a movie? Click on the link (image) below for a brief walkthrough demo integrating F5's BIG-IP with Datadog using F5 Telemetry Streaming.

Try it Out

Liked what you saw? If that's the case (as I hope it was), try it out for yourself. Check out F5's CloudDocs for guidance on configuring your BIG-IP(s) with the F5 Automation Toolchain. The various configuration files (including the above sample dashboard) used in the demo are available on the GitHub solution repository.

Enjoy!
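One small step worth scripting is validating the Datadog API key before dropping it into the TS consumer declaration. Datadog exposes a validation endpoint for exactly this purpose; the sketch below assumes the US (datadoghq.com) site, so adjust the base URL if your organization lives in a different Datadog region.

```bash
# Sketch: confirm the Datadog API key is valid before using it in the
# Telemetry Streaming consumer declaration.
DD_API_KEY=<your-datadog-api-key>

curl -s -H "DD-API-KEY: ${DD_API_KEY}" "https://api.datadoghq.com/api/v1/validate"
# A valid key returns {"valid":true}
```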
How I did it - "Visualizing Data with F5 TS and the Elastic ELK Stack"

With the F5 BIG-IP and Telemetry Streaming I have the ability to send BIG-IP metrics to a variety of third-party analytics vendors. One of the more popular of these is Elastic. Elastic's ELK Stack (an acronym for Elasticsearch, Logstash, Kibana) provides a platform where I can store, search, analyze, and visualize my BIG-IP telemetry data. With that said, here's an overview of "How I did it": integrating and visualizing data with the ELK Stack. P.S. Make sure to stay for the movie.

Application Services 3 Extension (AS3)

There are several resources (logging profiles, log publishers, iRules, etc.) that must be configured on the BIG-IP to enable remote logging. I utilized AS3 to deploy and manage these resources. I used Postman to apply a single REST API declaration to the AS3 endpoint.

Telemetry Streaming (TS)

F5's Telemetry Streaming (TS) service enables the BIG-IP to stream telemetry data to a variety of third-party analytics providers. Aside from the aforementioned resources, configuring TS to stream to a consumer (Logstash in this instance) is simply a REST call away. Just as I did for AS3, I utilized Postman to post a single declaration to the BIG-IP.

Elastic (ELK) Stack

Configuring the ELK stack to receive and ingest BIG-IP telemetry is a fairly simple process. Logstash (the "L" in ELK) is the data processor I used to ingest data into the stack. To accomplish this, I applied the sample Logstash configuration file. The configuration file specifies (among other items) the listener port, message format, and the Elasticsearch index naming format.

Dashboards

Getting telemetry data into Elasticsearch is great, but only if you can make use of it. If I'm going to utilize the data, I need to visualize the data (should probably trademark that). For visualization, I created a couple of sample dashboards. The dashboards (community-supported and perhaps not suitable for framing) report various relevant BIG-IP performance metrics and WAF incident information.

(Screenshots: F5 BIG-IP Advanced WAF Insights, F5 BIG-IP Performance Metrics)

Check it Out

Rather than walk you through the entire configuration, how about a movie? Click on the link (image) below for a brief walkthrough demo integrating F5's BIG-IP with Elastic's ELK stack using F5 Telemetry Streaming.

Try it Out

Liked what you saw? If that's the case (as I hope it was), try it out for yourself. Check out F5's CloudDocs for guidance on configuring your BIG-IP(s) with the F5 Automation Toolchain. The various configuration files (including the above sample dashboards) used in the demo are available on the GitHub solution repository.

Enjoy!
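Once Logstash is ingesting telemetry, a quick way to confirm documents are landing in Elasticsearch is to list the indices it has created and peek at a recent document. The f5* index pattern and the @timestamp sort field below are assumptions based on typical Logstash defaults; match them to whatever your Logstash configuration file actually defines.

```bash
# Sketch: verify Elasticsearch is receiving BIG-IP telemetry from Logstash.
# The "f5*" pattern is an assumption -- match it to the index naming format
# configured in your Logstash pipeline.
curl -s "http://localhost:9200/_cat/indices/f5*?v"

# Peek at the most recent document (assumes Logstash's default @timestamp field).
curl -s "http://localhost:9200/f5*/_search?size=1&sort=@timestamp:desc&pretty"
```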