F5 Insight for ADSP - A Closer Look
Introduction

F5 Insight for ADSP, a key component of the F5 Application Delivery and Security Platform (ADSP), helps teams monitor and secure apps that are spread across hybrid, multi-cloud, and AI environments. In this article, I’ll highlight some of the key features and use cases addressed by F5 Insight.

Demo Video

Demo Video: F5 Insight for ADSP - A Closer Look

The F5 Insight Home Screen

The F5 Insight Home Screen provides comprehensive monitoring for your F5 infrastructure, applications, and security posture. It features intelligent anomaly detection and performance optimization tools, giving administrators and users a centralized view of their BIG-IP fleet health and operational status.

System Report Cards

The System Report Cards display health indicators ranked Good, Warning, and Critical for the following:

- Anomaly Detection: Monitors connection count, pool availability, CPU utilization, and memory usage.
- Application Performance: Monitors application-level health based on response time and 4xx/5xx error codes.
- Security: Monitors the expiration of SSL/TLS certificates and BIG-IP WAF events.
- BIG-IP Metrics: Monitors for BIG-IP health issues with device resources and operational status.

Fleet Status

Fleet Status displays a summary of all BIG-IP devices and their operational status, showing each device as Up, Down, or Degraded.

Ask AI Assistant

The AI Assistant allows you to type queries in plain English to retrieve device statistics, configuration information, security events, device health, application performance, and much more. It connects to a configurable Large Language Model (LLM) backend; supported providers include OpenAI, Anthropic, or a customer-provided LLM.

An example query: "Have there been any outages in the past 24 hours for all devices in all data centers?" The AI Assistant understands the question, identifies all the data centers, and then checks the device statistics for any outages or issues.
The AI Assistant compiles a detailed summary report of the query.

Configuration of the Large Language Model (LLM)

LLM Insights bring natural language intelligence to F5 Insight, enabling you to query your BIG-IP configurations and logs conversationally. Instead of manually searching through configurations or parsing log files, you can ask questions like “Why is pool member X marked down?” or “Show me all virtual IPs (VIPs) with SSL offloading enabled” and receive immediate, contextualized, clear answers.

To configure the LLM backend:

- In the toolbar on the left, under Manage, select LLM Insights.
- Select your LLM Provider.
- Enter your API Token/Key.
- Enter your Enterprise API URL.
- Click Test Connection to verify it’s working.
- Click Save Configuration when the connection is validated.

Conclusion

F5 Insight for ADSP offers customizable visualizations and dashboards to help you surface metrics and KPIs tailored to your organization. It provides access to useful telemetry data for a deeper understanding of your environment, application behaviors, and complex BIG-IP deployments, all centralized in a single location. It helps you:

- Identify root causes during outages and tickets.
- Simplify Day 2 analysis of your BIG-IP fleet and the applications it serves.
- Gain detailed, visual information about your BIG-IP fleet.
- Build a foundation for using open-source tools and their benefits.

Related Content

- Introducing F5 Insight for ADSP
- F5 Insight for ADSP Documentation
- F5 Insight Product Page
Introducing F5 Insight for ADSP
Introduction

F5 Insight for ADSP, a key component of the F5 Application Delivery and Security Platform (ADSP), helps teams monitor and secure apps that are spread across hybrid, multi-cloud, and AI environments. In this article, I’ll highlight some of the key features and use cases addressed by F5 Insight.

F5 Insight: Actionable intelligence to foster operational excellence

Demo Videos

Demo Video: Introduction to F5 Insight for ADSP
Demo Video: F5 Insight - A Closer Look

What is F5 Insight for ADSP?

F5 Insight is a holistic solution that unifies every aspect of operating applications:

- It provides end-to-end visibility and operational narratives.
- It allows you to prioritize to-dos with health scores, anomaly detection, and report cards.
- It delivers clarity and value faster with views built by F5 experts.
- It provides expert guidance and optimization recommendations using natural language interactions.

F5 Insight is not intended to replace SIEM solutions like Splunk or Sentinel; it serves a different, complementary purpose. It’s an open-source tool designed specifically for monitoring and analyzing metrics from your BIG-IP devices. By leveraging open-source telemetry tools, it collects and presents data in a central, easy-to-read dashboard. This eliminates the need to log into individual interfaces like the CLI or GUI to sift through logs and metrics, offering streamlined visibility into your BIG-IP estate for simplified monitoring and analysis.

Why is F5 Insight important?

Gain out-of-the-box actionable intelligence to optimize application delivery and security:

- Get critical application and infrastructure performance data, operational analytics, security issues, and other telemetry in a unified tool.
- Surface important KPIs and data points fast by querying data using natural language with Model Context Protocol (MCP) support.
- Optimize application delivery and security, as well as underlying resources, with built-in F5 expertise and guidance.
- Share data with F5 and use F5 AI Data Fabric for application health scores, security grades, and automatic identification and categorization of apps by type and workload (in Limited Availability).
- Speed mean-time-to-innocence (MTTI) and mean-time-to-restore (MTTR) with actionable intelligence and proactive alerts.
- Streamline monitoring and analysis with a tool that can run on its own or integrate with your existing Grafana/VictoriaMetrics stacks.
- Leverage data to make the business case and prove ROI for more resources, application migrations, or system refreshes.

How does F5 Insight work?

F5 Insight is deployed as a virtual machine, which gives you full access to and control of your F5 BIG-IP telemetry data. Configuration is simple: log into the F5 Insight portal and add your BIG-IP devices. No configuration is needed on the BIG-IP itself.

Ready to get started?

Log into the F5 Insight portal; by default you will arrive at the Home screen. From the navigation menu, under Manage, click BIG-IP Settings to add your BIG-IP devices.

Before adding the BIG-IP devices, click the Data Centers tab and then Add Data Center. This allows you to specify a location for the BIG-IP devices. Give it a Name (San Jose, CA in this example) and click Add Data Center.

Go back to the Devices tab and click Add Device. Note that you can add a single device from here or add multiple devices using the Upload YAML Files option (more on this later). For now, let’s add a single device using the management address or Endpoint, Username, and Password.

Scroll down and specify the Certificate Authority if using custom TLS certificates on your BIG-IP devices. Under Data Center, select the Data Center created previously (San Jose, CA in this example). Note: if you didn’t create a Data Center earlier, you can still do it now. Under Modules, select the BIG-IP modules you are using; in this example I selected Policy Firewall (or AFM). Click Add Device. The BIG-IP from San Jose has been added.
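As mentioned above, you can also onboard multiple devices by uploading YAML files instead of entering each one in the UI. As a purely illustrative sketch - the field names below are hypothetical, so consult the F5 Insight documentation for the exact ast-defaults.yaml and ast-receivers.yaml schema - the same connection details might be expressed like this:

```yaml
# ast-defaults.yaml - shared settings applied to every device (illustrative field names)
bigip_receiver_defaults:
  username: "admin"                 # account used to log in to your BIG-IPs
  password: "${BIGIP_PASSWORD}"     # ideally injected from an environment variable or secrets store
  collection_interval: 60s
  tls:
    insecure_skip_verify: false     # set true only for lab systems with self-signed certs

# ast-receivers.yaml - one entry per BIG-IP device (illustrative field names)
bigip_receivers:
  bigip/sjc-01:
    endpoint: https://10.1.1.10     # management address of the device
  bigip/sjc-02:
    endpoint: https://10.1.1.11
```

Typically the receivers file supplies per-device values while the defaults file carries everything shared across the fleet - but again, verify the field names against the version you are running.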
From the navigation menu, select BIG-IP Device, then Device Overview to see more details. Note: you can select the specific device you want to view. Important details are shown on this screen, including the BIG-IP version, system model or VM, licenses, and enabled modules.

The Home screen displays System Report Cards and allows you to drill down into the individual widgets. System Report Cards provide at-a-glance health indicators for four critical monitoring categories. Each card displays a status badge (Good, Warning, or Critical) based on deviation thresholds. Note: you can filter the Home screen to display a specific Data Center.

Adding Multiple BIG-IPs using YAML File Upload

For bulk onboarding or infrastructure-as-code workflows, import devices using YAML configuration. Using YAML streamlines bulk onboarding, ensures consistency, improves scalability, simplifies automation, and increases accuracy. It also enables integration with IaC workflows and CI/CD pipelines through reusable, version-controlled configurations.

From the BIG-IP Settings screen, select Add Device. Upload your Defaults and Receiver YAML files here, or click Paste YAML to copy/paste them. Note: YAML import also supports configuring F5 Insight features such as high availability, LLM Insights, AIDF, and data retention policies alongside device definitions.

Both BIG-IPs are now connected to F5 Insight. When you return to the BIG-IP Settings screen, it should look like this:

A correctly configured ast-defaults.yaml file will look like the following. Note: enter the username and password used to log into your BIG-IPs.

A correctly configured ast-receivers.yaml file will look like the following. Note: enter a Device Name and Endpoint address.

Conclusion

F5 Insight for ADSP offers customizable visualizations and dashboards to help teams surface actionable metrics and KPIs tailored to your organization.
It provides access to useful telemetry data for a deeper understanding of your environment, application behaviors, and complex BIG-IP deployments, all centralized in a single location. It helps you:

- Identify root causes during outages and tickets.
- Simplify Day 2 analysis of your BIG-IP fleet and the applications it serves.
- Gain detailed, visual information about your BIG-IP fleet.
- Build a foundation for using open-source tools and their benefits.

Related Content

- F5 Insight for ADSP Blog
- F5 Insight Documentation
- F5 Insight Product Page

How I did it - "Remote Logging with the F5 XC Global Log Receiver and Elastic"
Welcome to configuring remote logging to Elastic. In this article we take a look at F5 Distributed Cloud’s Global Log Receiver service and how you can easily send event log data from the F5 Distributed Cloud Services platform to the Elastic Stack.

What is BIG-IQ?
tl;dr - BIG-IQ centralizes management, licensing, monitoring, and analytics for your dispersed BIG-IP infrastructure.

If you have more than a few F5 BIG-IPs within your organization, managing devices as separate entities will become an administrative bottleneck and slow application deployments. When deploying cloud applications, you're potentially managing thousands of systems, and having to deal with traditionally monolithic administrative functions is a simple no-go. Enter BIG-IQ.

BIG-IQ enables administrators to centrally manage BIG-IP infrastructure across the IT landscape. BIG-IQ discovers, tracks, manages, and monitors physical and virtual BIG-IP devices - in the cloud, on premises, or co-located at your preferred datacenter. BIG-IQ is a standalone product available from F5 partners, or through the AWS Marketplace. BIG-IQ consolidates common management requirements including, but not limited to:

- Device discovery and monitoring: Discover, track, and monitor BIG-IP devices - including key metrics such as CPU/memory, disk usage, and availability status.
- Centralized Software Upgrades: Centrally manage BIG-IP upgrades (TMOS v10.20 and up) by uploading the release images to BIG-IQ and orchestrating the process for managed BIG-IPs.
- License Management: Manage BIG-IP Virtual Edition licenses, granting and revoking them as you spin up/down resources. You can create license pools for applications or tenants for provisioning.
- BIG-IP Configuration Backup/Restore: Use BIG-IQ as a central repository of BIG-IP config files through ad-hoc or scheduled processes. Archive configs to long-term storage via automated SFTP/SCP.
- BIG-IP Device Cluster Support: Monitor high availability statuses and BIG-IP device clusters.
- Integration to F5 iHealth Support Features: Upload and read detailed health reports of your BIG-IPs under management.
- Change Management: Evaluate, stage, and deploy configuration changes to BIG-IP.
Create snapshots and config restore points, and audit historical changes so you know who to blame. 😉
- Certificate Management: Deploy, renew, or change SSL certs. Alerts allow you to plan ahead before certificates expire.
- Role-Based Access Control (RBAC): BIG-IQ controls access to its managed services with role-based access controls. You can create granular controls to view, edit, and deploy provisioned services. Prebuilt roles within BIG-IQ easily allow multiple IT disciplines access to the areas of expertise they need without over-provisioning permissions.

Fig. 1 BIG-IQ 5.2 - Device Health Management

BIG-IQ centralizes statistics and analytics visibility, extending BIG-IP's AVR engine. BIG-IQ collects and aggregates statistics from BIG-IP devices, locally and in the cloud. View metrics such as transactions per second, client latency, and response throughput. You can create RBAC roles so security teams have private access to view DDoS attack mitigations, firewall rules triggered, or WebSafe and MobileSafe management dashboards. The reporting extends across all modules BIG-IQ manages, drastically easing the pane-of-glass view we all appreciate from management applications.

For further reading on BIG-IQ please check out the following links:

- BIG-IQ Centralized Management @ F5.com
- Getting Started with BIG-IQ @ F5 University
- DevCentral BIG-IQ
- BIG-IQ @ Amazon Marketplace

How I did it - "Visualizing Metrics with F5 Telemetry Streaming and Datadog"
In some recent installments of the “How I Did it” series, we’ve taken a look at how F5 Telemetry Streaming (TS) integrates with third-party analytics providers like Splunk and Elastic. In this article we continue our analytics vendor journey with the latest supported vendor, Datadog.

Datadog is an analytics and monitoring platform offered in a Software-as-a-Service (SaaS) model. The platform provides centralized visibility and monitoring of applications and infrastructure assets. While Datadog typically relies upon its various agents to capture logs and transmit them back to the platform, there is an option for sending telemetry over HTTP. This is where F5’s TS comes into play. For the remainder of this article, I'll provide a brief overview of the services required to integrate the BIG-IP with Datadog. Rather than including step-by-step instructions, I've included a video walkthrough of the configuration process. After all, seeing is believing!

Application Services 3 Extension (AS3)

There are several resources (logging profiles, log publishers, iRules, etc.) that must be configured on the BIG-IP to enable remote logging. I utilized AS3 to deploy and manage these resources, using Postman to apply a single REST API declaration to the AS3 endpoint.

Telemetry Streaming (TS)

F5's Telemetry Streaming (TS) service enables the BIG-IP to stream telemetry data to a variety of third-party analytics providers. Aside from the aforementioned resources, configuring TS to stream to a consumer (Datadog in this instance) is simply a REST call away. Just as I did for AS3, I utilized Postman to post a single declaration to the BIG-IP.

Datadog

Preparing my Datadog environment to receive telemetry from BIG-IPs via Telemetry Streaming is extremely simple. I need only generate an API key, which will in turn be used by the Telemetry Streaming extension to authenticate and access the Datadog platform.
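For reference, a TS consumer declaration for Datadog follows the same pattern as other TS consumers. A sketch along these lines - the apiKey, region, and compression values are placeholders, and the exact property set depends on your TS version, so check the TS consumer reference - is POSTed to https://<big-ip>/mgmt/shared/telemetry/declare:

```json
{
    "class": "Telemetry",
    "My_System": {
        "class": "Telemetry_System",
        "systemPoller": {
            "interval": 60
        }
    },
    "My_Datadog_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "DataDog",
        "apiKey": "<your-datadog-api-key>",
        "compressionType": "gzip",
        "region": "US1"
    }
}
```

The apiKey value here is the key generated in the Datadog step described above.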
Additionally, the TS consumer provides additional options (Datadog region, compression settings, etc.) to be configured on the TS side.

Dashboard

Once my Datadog environment starts to ingest telemetry data from my BIG-IP, I’ll visualize the data using a custom dashboard. The dashboard (community-supported and may not be suitable for framing) reports various relevant BIG-IP performance metrics and information.

F5 BIG-IP Performance Metrics

Check it Out

Rather than walk you through the entire configuration, how about a movie? Click on the link (image) below for a brief walkthrough demo integrating F5's BIG-IP with Datadog using F5 Telemetry Streaming.

Try it Out

Liked what you saw? If that's the case (as I hope it was), try it out for yourself. Check out F5's CloudDocs for guidance on configuring your BIG-IP(s) with the F5 Automation Toolchain. The various configuration files (including the above sample dashboards) used in the demo are available on the GitHub solution repository. Enjoy!

Telemetry streaming - One click deploy using Ansible
In this article we will focus on using Ansible to enable and install Telemetry Streaming (TS) and its associated dependencies.

Telemetry streaming

The F5 BIG-IP is a full proxy architecture, which essentially means that the BIG-IP LTM completely understands the end-to-end connection, enabling it to be an endpoint and originator of client- and server-side connections. This empowers the BIG-IP to have traffic statistics from the client to the BIG-IP and from the BIG-IP to the server, giving the user the entire view of their network statistics. To gain meaningful insight, you must be able to gather your data and statistics (telemetry) into a useful place. Telemetry Streaming is an extension designed to declaratively aggregate, normalize, and forward statistics and events from the BIG-IP to a consumer application. You can learn more about Telemetry Streaming here, but let's get to Ansible.

Enable and Install using Ansible

The Ansible playbook below performs the following tasks:

- Grab the latest Application Services 3 (AS3) and Telemetry Streaming (TS) versions
- Download the AS3 and TS packages and install them on BIG-IP using a role
- Deploy AS3 and TS declarations on BIG-IP using a role from Ansible Galaxy
- If AVR logs are needed for TS, provision the BIG-IP AVR module and configure AVR to point to TS

Prerequisites

- Supported on BIG-IP 14.1+ versions
- If AVR is required to be configured, make sure there is enough memory for the module to be enabled along with all the other BIG-IP modules that are provisioned in your environment
- The TS data is being pushed to Azure Log Analytics (modify it to use your own consumer).
- If Azure logs are being used, update your TS JSON file with the correct workspace ID and shared key
- Ansible is installed on the host from which the scripts are run
- The following files are present in the directory:
  - Variable file (vars.yml)
  - TS poller and listener setup (ts_poller_and_listener_setup_declaration.json)
  - Declare logging profile (as3_ts_setup_declaration.json)
  - Ansible playbook (ts_workflow.yml)

Get started

Download the following roles from Ansible Galaxy:

ansible-galaxy install f5devcentral.f5app_services_package --force

This role performs a series of steps needed to download and install RPM packages on the BIG-IP that are a part of the F5 Automation Toolchain. Read through the prerequisites for the role before installing it.

ansible-galaxy install f5devcentral.atc_deploy --force

This role deploys the declaration using the RPM package installed above. Read through the prerequisites for the role before installing it.

By default, roles get installed into the /etc/ansible/roles directory.

Next, copy the below contents into a file named vars.yml. Change the variable file to reflect your environment.

# BIG-IP MGMT address and username/password
f5app_services_package_server: "xxx.xxx.xxx.xxx"
f5app_services_package_server_port: "443"
f5app_services_package_user: "*****"
f5app_services_package_password: "*****"
f5app_services_package_validate_certs: "false"
f5app_services_package_transport: "rest"

# URI from where latest RPM version and package will be downloaded
ts_uri: "https://github.com/F5Networks/f5-telemetry-streaming/releases"
as3_uri: "https://github.com/F5Networks/f5-appsvcs-extension/releases"

# If AVR module logs needed then set to 'yes' else leave it as 'no'
avr_needed: "no"

# Virtual servers in your environment to assign the logging profiles (if AVR set to 'yes')
virtual_servers:
  - "vs1"
  - "vs2"

Next, copy the below contents into a file named ts_poller_and_listener_setup_declaration.json.
{
    "class": "Telemetry",
    "controls": {
        "class": "Controls",
        "logLevel": "debug"
    },
    "My_Poller": {
        "class": "Telemetry_System_Poller",
        "interval": 60
    },
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Azure_Log_Analytics",
        "workspaceId": "<<workspace-id>>",
        "passphrase": {
            "cipherText": "<<sharedkey>>"
        },
        "useManagedIdentity": false,
        "region": "eastus"
    }
}

Next, copy the below contents into a file named as3_ts_setup_declaration.json.

{
    "class": "ADC",
    "schemaVersion": "3.10.0",
    "remark": "Example depicting creation of BIG-IP module log profiles",
    "Common": {
        "Shared": {
            "class": "Application",
            "template": "shared",
            "telemetry_local_rule": {
                "remark": "Only required when TS is a local listener",
                "class": "iRule",
                "iRule": "when CLIENT_ACCEPTED {\n node 127.0.0.1 6514\n}"
            },
            "telemetry_local": {
                "remark": "Only required when TS is a local listener",
                "class": "Service_TCP",
                "virtualAddresses": [ "255.255.255.254" ],
                "virtualPort": 6514,
                "iRules": [ "telemetry_local_rule" ]
            },
            "telemetry": {
                "class": "Pool",
                "members": [
                    {
                        "enable": true,
                        "serverAddresses": [ "255.255.255.254" ],
                        "servicePort": 6514
                    }
                ],
                "monitors": [ { "bigip": "/Common/tcp" } ]
            },
            "telemetry_hsl": {
                "class": "Log_Destination",
                "type": "remote-high-speed-log",
                "protocol": "tcp",
                "pool": { "use": "telemetry" }
            },
            "telemetry_formatted": {
                "class": "Log_Destination",
                "type": "splunk",
                "forwardTo": { "use": "telemetry_hsl" }
            },
            "telemetry_publisher": {
                "class": "Log_Publisher",
                "destinations": [ { "use": "telemetry_formatted" } ]
            },
            "telemetry_traffic_log_profile": {
                "class": "Traffic_Log_Profile",
                "requestSettings": {
                    "requestEnabled": true,
                    "requestProtocol": "mds-tcp",
                    "requestPool": { "use": "telemetry" },
                    "requestTemplate": "event_source=\"request_logging\",hostname=\"$BIGIP_HOSTNAME\",client_ip=\"$CLIENT_IP\",server_ip=\"$SERVER_IP\",http_method=\"$HTTP_METHOD\",http_uri=\"$HTTP_URI\",virtual_name=\"$VIRTUAL_NAME\",event_timestamp=\"$DATE_HTTP\""
                }
            }
        }
    }
}

NOTE: To better understand the above declarations check out our clouddocs page: https://clouddocs.f5.com/products/extensions/f5-telemetry-streaming/latest/telemetry-system.html

Next, copy the below contents into a file named ts_workflow.yml.

- name: Telemetry streaming setup
  hosts: localhost
  connection: local
  any_errors_fatal: true
  vars_files: vars.yml
  tasks:
    - name: Get latest AS3 RPM name
      action: shell wget -O - {{as3_uri}} | grep -E rpm | head -1 | cut -d "/" -f 7 | cut -d "=" -f 1 | cut -d "\"" -f 1
      register: as3_output
    - debug:
        var: as3_output.stdout_lines[0]
    - set_fact:
        as3_release: "{{as3_output.stdout_lines[0]}}"
    - name: Get latest AS3 RPM tag
      action: shell wget -O - {{as3_uri}} | grep -E rpm | head -1 | cut -d "/" -f 6
      register: as3_output
    - debug:
        var: as3_output.stdout_lines[0]
    - set_fact:
        as3_release_tag: "{{as3_output.stdout_lines[0]}}"
    - name: Get latest TS RPM name
      action: shell wget -O - {{ts_uri}} | grep -E rpm | head -1 | cut -d "/" -f 7 | cut -d "=" -f 1 | cut -d "\"" -f 1
      register: ts_output
    - debug:
        var: ts_output.stdout_lines[0]
    - set_fact:
        ts_release: "{{ts_output.stdout_lines[0]}}"
    - name: Get latest TS RPM tag
      action: shell wget -O - {{ts_uri}} | grep -E rpm | head -1 | cut -d "/" -f 6
      register: ts_output
    - debug:
        var: ts_output.stdout_lines[0]
    - set_fact:
        ts_release_tag: "{{ts_output.stdout_lines[0]}}"

    - name: Download and install AS3 and TS RPM packages to BIG-IP using role
      include_role:
        name: f5devcentral.f5app_services_package
      vars:
        f5app_services_package_url: "{{item.uri}}/download/{{item.release_tag}}/{{item.release}}?raw=true"
        f5app_services_package_path: "/tmp/{{item.release}}"
      loop:
        - { uri: "{{as3_uri}}", release_tag: "{{as3_release_tag}}", release: "{{as3_release}}" }
        - { uri: "{{ts_uri}}", release_tag: "{{ts_release_tag}}", release: "{{ts_release}}" }

    - name: Deploy AS3 and TS declaration on the BIG-IP using role
      include_role:
        name: f5devcentral.atc_deploy
      vars:
        atc_method: POST
        atc_declaration: "{{ lookup('template', item.file) }}"
        atc_delay: 10
        atc_retries: 15
        atc_service: "{{item.service}}"
        provider:
          server: "{{ f5app_services_package_server }}"
          server_port: "{{ f5app_services_package_server_port }}"
          user: "{{ f5app_services_package_user }}"
          password: "{{ f5app_services_package_password }}"
          validate_certs: "{{ f5app_services_package_validate_certs | default(no) }}"
          transport: "{{ f5app_services_package_transport }}"
      loop:
        - { service: "AS3", file: "as3_ts_setup_declaration.json" }
        - { service: "Telemetry", file: "ts_poller_and_listener_setup_declaration.json" }

    # If AVR logs need to be enabled
    - name: Provision BIG-IP with AVR
      bigip_provision:
        provider:
          server: "{{ f5app_services_package_server }}"
          server_port: "{{ f5app_services_package_server_port }}"
          user: "{{ f5app_services_package_user }}"
          password: "{{ f5app_services_package_password }}"
          validate_certs: "{{ f5app_services_package_validate_certs | default(no) }}"
          transport: "{{ f5app_services_package_transport }}"
        module: "avr"
        level: "nominal"
      when: avr_needed == "yes"

    - name: Enable AVR logs using tmsh commands
      bigip_command:
        commands:
          - modify analytics global-settings { offbox-protocol tcp offbox-tcp-addresses add { 127.0.0.1 } offbox-tcp-port 6514 use-offbox enabled }
          - create ltm profile analytics telemetry-http-analytics { collect-geo enabled collect-http-timing-metrics enabled collect-ip enabled collect-max-tps-and-throughput enabled collect-methods enabled collect-page-load-time enabled collect-response-codes enabled collect-subnets enabled collect-url enabled collect-user-agent enabled collect-user-sessions enabled publish-irule-statistics enabled }
          - create ltm profile tcp-analytics telemetry-tcp-analytics { collect-city enabled collect-continent enabled collect-country enabled collect-nexthop enabled collect-post-code enabled collect-region enabled collect-remote-host-ip enabled collect-remote-host-subnet enabled collected-by-server-side enabled }
        provider:
          server: "{{ f5app_services_package_server }}"
          server_port: "{{ f5app_services_package_server_port }}"
          user: "{{ f5app_services_package_user }}"
          password: "{{ f5app_services_package_password }}"
          validate_certs: "{{ f5app_services_package_validate_certs | default(no) }}"
          transport: "{{ f5app_services_package_transport }}"
      when: avr_needed == "yes"

    - name: Assign TCP and HTTP profiles to virtual servers
      bigip_virtual_server:
        provider:
          server: "{{ f5app_services_package_server }}"
          server_port: "{{ f5app_services_package_server_port }}"
          user: "{{ f5app_services_package_user }}"
          password: "{{ f5app_services_package_password }}"
          validate_certs: "{{ f5app_services_package_validate_certs | default(no) }}"
          transport: "{{ f5app_services_package_transport }}"
        name: "{{item}}"
        profiles:
          - http
          - telemetry-http-analytics
          - telemetry-tcp-analytics
      loop: "{{virtual_servers}}"
      when: avr_needed == "yes"

Now execute the playbook: ansible-playbook ts_workflow.yml

Verify

- Log in to the BIG-IP UI. Go to iApps -> Package Management LX. Both the f5-telemetry and f5-appsvcs RPMs should be present.
- Log in to the BIG-IP CLI and check the restjavad logs present at /var/log for any TS errors.
- Log in to the consumer where the logs are being sent and make sure it is receiving them.

Conclusion

The Telemetry Streaming (TS) extension is very powerful and is capable of sending much more information than described above. Take a look at the complete list of logs as well as consumer applications supported by TS over on CloudDocs: https://clouddocs.f5.com/products/extensions/f5-telemetry-streaming/latest/using-ts.html

A Catch from the Codeshare: F5 Analytics iApp
On the side of the road in northern Missouri just north of Mark Twain’s stomping grounds, there is a slice of hillside removed just to the side of the highway. In Arkansas, there’s a nondescript field tucked away in a state park. Short of word of mouth and this thing they call the internet, you wouldn’t be any the wiser that buried or surfaced in these two locations are an amazing variety of geodes and diamonds, respectively. In this article series I will explore recent and well-aged gems from the codeshare, highlighting inventive solutions contributed by you, the community. Please join me on this great adventure as we oscillate on the Mohs’ scale of codeshare geekery.

I’ve been careful to focus most entries on community contributed content, but this release 10 days ago is too cool for school, so I’ll make an exception for this F5 contributed Analytics iApp from Ken Bocchino. It is early access with some open items still under development, but this iApp is the outcome of a close partnership between F5 and Splunk to take analytics data from BIG-IP and display the various data points within the Splunk F5 Application. A couple screen shots from the app:

This iApp has been tested on BIG-IP versions 11.4 through 12.0. Take a peek at the installation video and then download the iApp! Thanks for checking out this latest catch from the codeshare!

Big issues for Big Data
It’s one of those technologies that seem to divide opinion, but Big Data is beginning to make a real impact on the enterprise. Despite some experts saying that it’s nothing more than hype, new figures from Gartner suggest that CIOs and IT decision makers are thinking very seriously about Big Data, how they can use it, and what advantages it could generate for their business.

The study from Gartner revealed that 64% of businesses around the world are investing in Big Data technology in 2013 or are planning to do so - a rise from 58% in 2012. The report also shows that 30% of businesses have already invested, while 19% say they plan to do so over the next 12 months and a further 15% say they plan to invest over the next two years. This shows that businesses are embracing Big Data to enhance their decision making. There is a huge amount of insight into a business, its workers, and its customers just waiting to be discovered by capturing, storing, and then analysing the huge volume of data that is being created these days.

But what we at F5 have noticed is that Big Data projects can sometimes fail because the underlying architecture isn’t able to cope with all the data and the different devices attached to the network that want to access that data. Businesses must also have infrastructure in place that can scale as data and demand increase.

The key to a successful Big Data project is enabling secure access to the data when it’s needed. By secure, I mean that workers are only allowed to access the applications they need to do their jobs. Controlling access to the data reduces potential security risks. To get the best out of Big Data, it is also vital that it is able to travel around your network freely whenever it’s needed.
That’s a key point: getting the best decisions out of your data requires immediacy; making instant decisions and putting them into action is what can give a business the edge in this increasingly competitive world we operate in. It’s no use if the relevant data cannot be accessed at the time it’s needed, or if the network is so saturated that it takes too long to get where it needs to be. If the moment passes, so does the opportunity. This is what I mean when I talk about having the right infrastructure in place to enable Big Data: access, security, and an intelligent network that can always guarantee that data can move around freely.

Visibility: Keystone to Rapid Recovery Everywhere
#caim Because knowing is half the battle in application performance management

One of the processes Six Sigma uses to determine the root cause of a problem is called The Five Whys. In a nutshell, it's a method of continually asking "why" something happened to eventually arrive at an answer that is actually the real root cause. This method is probably derived from philosophy and its study of causality, something most often associated with Aristotle and the desire to trace all of existence back to a "first cause". The Five Whys doesn't require that level of philosophical debate, but it does require digging deeper at every step. It's not enough to answer "Why did you run out of gas" with "Because I failed to fill the car up this morning" because there's a reason you failed to do so, and perhaps a reason for that reason, and so on. At some point you will find the root cause, a cause that if not addressed will only end up with the same result some time in the future.

Thus, questions like "Why did the web application become unavailable" can't simply be answered with "because the database stopped responding" because that isn't the root cause of the actual incident. The actual root cause might end up being a failed NIC on the database server, or a misconfiguration in a switch that caused responses from the database to be routed somewhere else. Simply knowing "the database stopped responding" isn't enough to definitively resolve the real problem. Thus, the ability to find the real problem, the root cause, is paramount to successfully not only resolving an incident but also preventing it from happening in the future, if possible. Finding it fast is essential to keeping the associated costs under control.

The study, completed in 2011, uncovered a number of key findings related to the cost of downtime. Based on cost estimates provided by survey respondents, the average cost of data center downtime was approximately $5,600 per minute.
Based on an average reported incident length of 90 minutes, the average cost of a single downtime event was approximately $505,500. -- Understanding the Cost of Data Center Downtime: An Analysis of the Financial Impact on Infrastructure Vulnerability Unfortunately, when an outage or incident occurs, you likely don't have time to get philosophical on the problem and sit around asking questions. You still have to ask the same questions, but generally speaking it's a lot more fast and furious than anything Aristotle was used to. One of the most helpful tools in the IT toolbox for rapidly walking through the five (or fifteen in some cases) whys is monitoring and the alerts and information it provides. VISIBILITY is NUMBER ONE There are any number of ways in which visibility is achieved. Agents are an age-old method of achieving visibility across tiers of architecture and are becoming just as commonplace for solving visibility challenges in cloud environments where use of other mechanisms might not be feasible. Monitoring up and down the full stack is vital to ensuring availability of all services and is used for a variety of purposes across the data center. You may recall that Axiom #1 for application delivery is: “Applications are not servers, hypervisors, or operating systems.” Monitoring of the OS tells you very little about the application's performance or availability, and it can't indicate any kind of status with respect to whether or not the application is actually working correctly. For example, load balancing and application delivery services can monitor applications for availability, for correctness, and for performance. Doing so enables the use of more robust load distribution algorithms as well as more immediate response to failure. This information is also critical for cloud delivery brokers, which must be able to decide which environment is best able to meet requirements for a given application.
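Monitoring an application for availability, correctness, and performance (rather than merely checking that the OS or port is alive) boils down to a few conditions. Here's a minimal sketch; the status text, response-time budget, and state names are illustrative assumptions, not any vendor's implementation:

```python
# A minimal application-level health check: a member counts as "up"
# only if it answers with the right status, the expected content, and
# within a response-time budget. Thresholds here are assumptions.

def evaluate_health(status, body, elapsed_ms,
                    expect_status=200, expect_text="OK", budget_ms=500):
    if status != expect_status:
        return "down"        # wrong status code: the app is failing
    if expect_text not in body:
        return "down"        # responding, but not responding correctly
    if elapsed_ms > budget_ms:
        return "degraded"    # correct, but too slow to count as healthy
    return "up"

# e.g. a pool member that answers correctly but slowly:
state = evaluate_health(200, "OK - orders service", 750)
```

A load balancer making decisions on this richer signal can pull a slow-but-alive member out of rotation before users notice, rather than waiting for an outright failure.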
Without performance and availability data, there's no way to compare two environments against what are very basic SLA metrics used by business and operations alike. And with ever more devices and always-connected applications, there's more and more traffic to be analyzed and more possible points of failure in the application delivery chain. Through 2015 at least 50% of hyperconverged solutions will suffer from poor performance, end-user dissatisfaction and failure to meet business needs. -- New Innovations – New Pain Points (CA + Gartner) And while some infrastructure makes use of monitoring data in real-time for operational decision making, other solutions use the information to manage performance and resolve problems. Most APM (Application Performance Management) solutions have matured beyond aggregating data and presenting aesthetically appealing dashboards to providing comprehensive tools to monitor, manage, and investigate performance-related issues. CA Technologies has been playing in this game for as long as I've been in technology and recently announced its latest initiative aimed at improving the ability to aggregate and make use of the massive volume of monitoring data for maximum impact. CA: Converged Infrastructure Management The latest version of CA Infrastructure Management (IM) adds some significant enhancements around scalability and support for cloud environments with multi-tenancy. Offering both agent (required for cloud environments) and agent-less options, CA IM is able to aggregate performance data from data center infrastructure – including non-IP devices (its past acquisitions such as that of Torokina have provided alternative technology to enable data collection from a broader set of devices). But what's important about a modern solution is not just the ability to collect from everything but to aggregate and allow rapid analysis across network and application domains.
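As a toy illustration of the kind of rapid analysis such aggregation enables, consider a sketch that learns a baseline from historical samples and flags values that deviate from it. The data and the three-sigma rule here are illustrative assumptions, not a description of CA's product:

```python
# Baseline-and-threshold analysis over aggregated samples: compute a
# baseline (mean and standard deviation) from history, then flag any
# sample deviating by more than n_sigma standard deviations.
import statistics

def build_baseline(samples):
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomaly(value, mean, stdev, n_sigma=3):
    return abs(value - mean) > n_sigma * stdev

history = [102, 98, 101, 99, 100, 103, 97, 100]  # e.g. response times (ms)
mean, stdev = build_baseline(history)

# 105 ms is within normal variation; 240 ms is well outside it:
flags = [is_anomaly(v, mean, stdev) for v in (105, 240)]
```

The point is that no human browses raw samples; the analysis reduces an overwhelming volume of collected data to the handful of deviations worth investigating.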
This is the kind of flow analysis that enables operations to very quickly cycle through the "Five Whys" and get to the root cause of a problem – and resolve it. This kind of analysis is as important if not more important than the data collected in the first place. The sheer volume of information collected by devices and systems across environments is simply overwhelming, and one could not hope to manually browse through it in a reasonable amount of time to find any given issue. CA IM is heavily invested in the notion of convergence; convergence of functional domains across environments, convergence of networks to include voice, video, and data, and convergence of the analytics required by operations to determine baselines, calculate trends, establish thresholds, recognize anomalies and trigger investigations. A primary focus for CA remains end-to-end application response time monitoring, enabling operations to understand application response time between infrastructure devices. That's increasingly important in a virtualized data center, where some 80% of traffic is estimated to be between servers (both physical and virtual) on the LAN. East-west traffic patterns are dominating the network, and being able to monitor them and understand the impact they have on end-user performance is critical. Being able to identify choke points between interdependent services, along with capacity and network planning activities, is an important part of ensuring not only that the network is optimally configured but that provisioned capacity is appropriate for the application and its performance requirements.

F5 Friday: Performance Analytics–More Than Eye-Candy Reports
#v11 Application-centric analytics provide better visibility into performance, capacity and infrastructure utilization Maintaining performance and capacity of web sites and critical applications – especially those of the revenue-generating ilk – can be particularly difficult in complex environments. The mix of infrastructure and integration can pose problems when trying to determine exactly where capacity may be constrained or from where performance troubles are originating. Visibility into the application delivery chain is critical if we are to determine where and at what points in the chain performance is being impaired or constraints on capacity imposed, perhaps artificially. The trouble is that the end-user view of application performance tells you only that a site is or isn’t performing up to expectations. It’s myopic in that sense, and provides little to no assistance in determining why an application may be performing poorly. Is it the Internet? An external component? An integration point? A piece of the infrastructure? Is it because of capacity (load on servers) or a lack thereof? It’s easy to point out that a site is performing poorly or has unacceptable downtime; it’s quite another thing to determine why, such that operations and developers can resolve the problem. Compounding the difficulties inherent in application performance monitoring is the way in which network infrastructure – including application delivery controllers – traditionally reports statistics related to performance. It’s very, well, network-focused – with reports that provide details on network-oriented objects such as Virtual IP addresses and network segments. This is problematic because when a specific application is having issues, you want to drill down into the application, not a shared IP address or network segment. And you wouldn’t be digging into performance data if you didn’t already know there was a problem.
So providing a simple “total response time” report isn’t very helpful at all in pinpointing the root cause. It’s important to be able to break out, by application, a more granular view of performance that provides insight into client, network, and server-side performance. iApp Analytics BIG-IP v11 introduced iApp, which addresses a real need to provision and manage application delivery services with an application-centric perspective. Along with iApp comes iApp Analytics, a more application-centric view of performance and capacity-related data. This data provides a more holistic view of performance across network, client and server-side components that allows operations to drill down into a specific application and dig around in the data to determine from where performance problems may be originating. This includes per-URI reporting, which is critical for understanding API usage and impact on overall capacity. For organizations considering more advanced architectural solutions to addressing performance and capacity, this information is critical. For example, architecting scalability domains as part of a partitioning or hyper-local scalability pattern requires a detailed understanding of per-URI utilization. Being able to tie metrics back to a specific business application enables IT to provide a more accurate operational cost to business stakeholders, which enables a more accurate ROI analysis and a proactive approach to addressing potential growth issues before they negatively impact performance or, worse, availability. Transactions can be captured for diagnosing customer or application issues. Filters make it easy to narrow down the results to specific user agents, geographies, pools or even a client IP. The ability to view the headers and the response data can shave valuable time off identifying problems.
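The per-URI reporting described above amounts to grouping request records by URI and summarizing each group, so that you can see which endpoints dominate load and how each one performs. A minimal sketch over a hypothetical request log:

```python
# Per-URI aggregation: group (uri, response_time_ms) records by URI
# and report request count and average response time per endpoint.
# The request log below is hypothetical, purely for illustration.
from collections import defaultdict

requests = [
    ("/api/orders", 120), ("/api/orders", 180),
    ("/api/catalog", 40), ("/api/orders", 150),
    ("/login", 90),
]

totals = defaultdict(lambda: {"count": 0, "total_ms": 0})
for uri, ms in requests:
    totals[uri]["count"] += 1
    totals[uri]["total_ms"] += ms

report = {
    uri: {"count": t["count"], "avg_ms": t["total_ms"] / t["count"]}
    for uri, t in totals.items()
}
```

Broken out this way, a single hot API endpoint stands out immediately instead of being averaged away inside a site-wide "total response time" number.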
Thresholds on a per-application, virtual server or pool member basis can be configured to identify whether transactions or throughput levels increase or decrease significantly. This can be an early warning sign that problems are occurring. An alert can be delivered via syslog, SNMP or email when these thresholds are exceeded. Viewing of analytics can be accomplished through the iApp application-specific view, which provides the context of the associated business application. Metrics can also be delivered to an off-box SIEM solution such as Splunk. While detailed, per-application performance and usage data does, in fact, make for very nice eye-candy reports, such data is critical to reducing the time associated with troubleshooting and enabling more advanced, integrated scalability-focused architectures to be designed and deployed. Because without the right data, it’s hard to make the right decisions for your applications and your infrastructure. Happy Performance Monitoring!

Infrastructure Scalability Pattern: Partition by Function or Type
Lots of Little Virtual Web Applications Scale Out Better than Scaling Up
Forget Hyper-Scale. Think Hyper-Local Scale.
F5 Friday: You Will Appsolutely Love v11
Introducing v11: The Next Generation of Infrastructure
BIG-IP v11 Information Page
F5 Monday? The Evolution To IT as a Service Continues … in the Network
F5 Friday: The Gap That Became a Chasm
All F5 Friday Posts on DevCentral
ABLE Infrastructure: The Next Generation – Introducing v11
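As a closing illustration, the threshold-and-alert behavior described above (watch a per-application transaction rate and alert when it crosses a floor or ceiling) can be sketched in a few lines. The application name, limits, and use of Python's logging module are assumptions; delivery via syslog, SNMP or email is left to the deployment:

```python
# Threshold-based alerting: compare a current transaction rate against
# configured per-application floor/ceiling thresholds and emit an alert
# when a limit is crossed. A sudden drop can be as telling as a spike.
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("app-analytics")

THRESHOLDS = {"orders-app": {"min_tps": 50, "max_tps": 500}}  # illustrative

def check_thresholds(app, tps):
    limits = THRESHOLDS.get(app)
    if limits is None:
        return None  # no thresholds configured for this application
    if tps < limits["min_tps"]:
        log.warning("%s: rate %d below floor %d", app, tps, limits["min_tps"])
        return "below"
    if tps > limits["max_tps"]:
        log.warning("%s: rate %d above ceiling %d", app, tps, limits["max_tps"])
        return "above"
    return "ok"

status = check_thresholds("orders-app", 12)  # sudden drop: early warning
```

The drop-detection case is the one called out in the article: a transaction rate falling well below normal often signals a problem upstream before users start reporting errors.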