2020 DevCentral MVP Announcement
Congratulations to the 2020 DevCentral MVPs! The DevCentral MVP Award is given annually to an exclusive group of expert users in the technical community who go out of their way to engage with the community by sharing their experience and knowledge with others. This is our way of recognizing their significant contributions, because while all of our users collectively make DevCentral one of the top community sites around and a valuable resource for everyone, MVPs regularly go above and beyond in assisting fellow F5 users both on- and offline.

MVPs get badges in their DevCentral profiles so everyone can see that they are recognized experts (you'll also see this if you hover over their name in a thread). This year's MVPs will receive a glass award, certificate, thank-you gift, and an invitation to attend the MVP Summit at Agility 2020 as guests of F5.

The 2020 DevCentral MVPs (by username) are:

· Andy McGrath
· Austin Geraci
· Boneyard
· Dario Garrido
· FrancisD
· Hamish Marson
· Iaine
· Jad Tabbara (JTI)
· jaikumar_f5
· JG
· Jinshu
· Joel Newton
· Kai Wilke
· Kees van den Bos
· Kevin Davies
· Kevin Worthington
· Lee Sutcliffe
· Leonardo Souza
· Manthey
· Michael Jenkins
· Nathan Britton
· Nicolas Destor
· Niels van Sluis
· Patrik Jonsson
· Philip Jönsson
· Piotr Lewandowski
· Rob_carr
· Samir Jha
· Tim Rupp
· TimRiker
· Vijay
· What Lies Beneath
· Yann Desmaret
· Youssef

Make sure to check out the MVP page for more info about the program and the MVPs themselves.

2021 DevCentral MVP Announcement
Congratulations to the 2021 DevCentral MVPs! The DevCentral MVP Award is given annually to an exclusive group of expert users in the technical community who go out of their way to engage with the community by sharing their experience and knowledge with others. This is our way of recognizing their significant contributions, because while all of our users collectively make DevCentral one of the top community sites around and a valuable resource for everyone, MVPs regularly go above and beyond in assisting fellow F5 users both on- and offline. We understand that 2020 was difficult for everyone, and we are extra-grateful to this year's MVPs for going out of their way to help others.

MVPs get badges in their DevCentral profiles so everyone can see that they are recognized experts (you'll also see this if you hover over their name in a thread). This year's MVPs will receive a glass award, certificate, exclusive thank-you gifts, and invitations to exclusive webinars and behind-the-scenes looks at things like roadmaps and new product sneak-previews.

The 2021 DevCentral MVPs (by username) are:

· Andy McGrath
· Austin Geraci
· Amine Kadimi
· Boneyard
· Dario Garrido
· EAA
· FrancisD
· Hamish Marson
· Iaine
· Jad Tabbara (JTI)
· jaikumar_f5
· JG
· JuniorC
· Kai Wilke
· Kees van den Bos
· Kevin Davies
· Leonardo Souza
· lidev
· Manthey
· Mayur Sutare
· Nathan Britton
· Niels van Sluis
· Patrik Jonsson
· Philip Jönsson
· Piotr Lewandowski
· Rob_carr
· Samir Jha
· Sebastian Maniak
· TimRiker
· Vijay
· What Lies Beneath
· Yann Desmaret
· Youssef

BIG-IP Logging and Reporting Toolkit - part one
Joe Malek, one of the many awesome engineers here at F5, took it upon himself to delve deeply into a very interesting but often unsung part of the BIG-IP advanced configuration world: logging and reporting. It's my great pleasure to share with you his study and the findings therein, along with (eventually) a toolkit to help you get started in the world of custom log manipulation. If you've ever questioned or been curious about your options when it comes to information gathering and reporting, this is definitely something you should read. There will be multiple parts, so stay tuned. This one is just the intro.

Logging & Reporting Toolkit - Part 1
Logging & Reporting Toolkit - Part 2
Logging & Reporting Toolkit - Part 3
Logging & Reporting Toolkit - Part 4

Description

F5 products occupy critical positions in application delivery infrastructure. They serve as gateways, proxies, accelerators and traffic flow arbiters. In these roles, customer expectations vary for the degree and amount of event information recorded. Several opportunities exist within our current product capabilities for our customers and partners to produce and consume log messages from and via F5 products. Efforts to date include generating W3C-style log messages on LTM via iRules, close integration with leading vendors and ASM (requires askf5 login), and creating relationships with leading vendors to best serve our customers. Significant capabilities exist for customers and partners to create their own logging and reporting solutions.

Problems and opportunity

In the many products offered by F5, there exists a variety of logging structures. The common log protocols used to emit messages by F5 products are Syslog (requires askf5 login) and SNMP (requires askf5 login), along with built-in iRules capabilities. Though syslog-ng is commonplace, software components tend to vary in transport, verbosity, message formatting and sometimes syslog facility.
This can result in a high degree of data density in our logs, and the messages our systems emit can vary from version to version.[i] The combination of these factors results in a challenge that requires a coordinated solution for customers who are compelled by regulation, industry practice, or business process to maintain log management infrastructure that consumes messages from F5 devices.[ii] By utilizing the unique product architecture TMOS employs, sharing its knowledge about networks and applications as well as capabilities built into iRules, TMOS can provide much of this information to log management infrastructure in a simple and knowledgeable manner. In effect, we can emit messages about appliance state and offload many message logging tasks from application servers. Based on our connection knowledge, we can also improve the utility and value of information obtained from vendor-provided log management infrastructure.[iii]

Objectives and success criteria

The success criteria for including an item in the toolkit are:

1. A capability to deliver reports on select items using the leading platforms without requiring core development work on an F5 product.
2. An identified extensibility capability for future customization and report building.

Assumptions and dependencies

· Vendors to include in the toolkit are Splunk, Q1 Labs and PresiNET.
· ASM logging and reporting is sufficient and does not need further explanation.
· Information to be included in sample reports should begin to assist in diagnostic activities, demonstrate the ROI of including F5 devices in an infrastructure, and advise on when F5 devices are nearing capacity.
· Vendor products must be able to accept event data emitted by F5 products. This means that some vendors might have more comprehensive support than others.
· Products currently supported but not in active development are not eligible for inclusion in the toolkit. Examples are older versions of BIG-IP and FirePass, and all WANJet releases.
Some vendor products will require code modifications on the vendor's side to understand the data F5 products send them.

[i] As a piece of customer evidence, Microsoft implemented several logging practices around version 9.1. When they upgraded to version 9.4, their log volume increased several-fold because F5 added log messages and changed existing messages. As a result, the existing message taxonomy needed to be deprecated, and we caused them to redesign filters and reports and create a new set of logging practices.

[ii] Regulations such as the Sarbanes-Oxley Act, Gramm-Leach-Bliley Act, Federal Information Security Management Act, PCI DSS, and HIPAA.

[iii] It is common for F5 products to manipulate connections via OneConnect, NATs and SNATs. These operations are unknown to external log collectors, and pose a challenge when assembling a complete view of the network connections between a client and a server via an F5 device for a single application transaction.

What's Next?

In the next installment we'll get into the details of the different vendors in question, their offerings, how they work and integrate with BIG-IP, and more.

Logging and Reporting Toolkit Series: Part Two | Part Three

BIG-IP Logging and Reporting Toolkit - part two
In this second installment of Joe Malek's delve into some advanced configuration concepts, and more specifically the logging and reporting world, we take a look at the vendors he investigated, what they offer, and how they integrate with F5 products. He discusses some of the capabilities of each, their strengths and weaknesses, and some of the things you might use each for. If you've been wondering what your options are for more in-depth log analysis and reporting, take a look to see what his thoughts are on a couple of the leading solutions.

Logging & Reporting Toolkit - Part 1
Logging & Reporting Toolkit - Part 2
Logging & Reporting Toolkit - Part 3
Logging & Reporting Toolkit - Part 4

Vendor descriptions:

Splunk - http://www.splunk.com/

"IT Search" is Splunk's self-identified core functionality. Splunk's software contains multiple ways to obtain data from IT systems; it indexes the data and reports on it using a web interface. Splunk has invested in creating a Splunk for F5 application containing dashboard-style views into log data for F5 products. Currently included in the application are LTM, GTM, ASM, APM and FirePass. The application is able to consume log messages sent to Splunk servers via syslog, and by extension iRules using High Speed Logging. Splunk is deployed as software to be installed on a customer-provided system. Windows, Mac OS, Linux, AIX, and BSD variants are all supported host operating systems. Splunk can receive messages via files, syslog, SNMP, SCP, SFTP, FTP, generic network ports, FIFO queues, directory crawling and scripting. Splunk has a very intuitive and "Google-like" interface allowing users to easily navigate and report on data in the system. Users are able to define reports, indices, dashboards and applications to present data as an organization requires. Upon receipt of data, Splunk can process the data according to built-in training or according to a user-constructed taxonomy.
Q1 Labs - http://www.q1labs.com/

Q1 Labs brings a product called QRadar to market. QRadar combines functionality commonly found in SIEM, log management and network behavior analysis products. Q1 products are able to consume event messages as well as record information on a network connection basis. QRadar is available as a pay-for appliance and as a no-charge edition in a virtual machine. The differences between the two editions are the SIEM and advanced correlation functionality; the no-charge edition is a log management tool only. QRadar can receive messages via syslog, SNMP, JDBC connectors, SFTP, FTP, SCP, and SDEE. Additionally, QRadar can obtain network flow information in a port mirror/span mode. Customizing data views and report building are based on regular expressions. Customers can create their own regular expressions and build upon pre-configured expressions for reporting. In the SIEM module, QRadar includes approximately 250 events that can be sequenced together into complex "Offenses" in a manner similar to building a rule in Microsoft Outlook. "Universal Device Support Modules" can be created and shared among Q1 Labs customers.

PresiNET - http://www.presinet.com/

Whereas tcpdump is like an x-ray for your network, Total View One is like an MRI. Total View One enables customers to maximize the use of infrastructure resources and network performance. Total View One sensors collect protocol state information by tracking connections through a network. This is commonly done out-of-line from traffic streams via port mirroring or network tap technologies. Currently PresiNET has implemented the NEDS specification, which enables Total View One to receive messages from BIG-IP products and process them as if they'd come from a PresiNET sensor. This integration started with the NEDS iRule and specification, and from this PresiNET created their own parser. PresiNET products are delivered as appliances in both a central unit and sensor unit mode.
Optionally, one may subscribe to PresiNET on a managed service basis. After you install a Total View One product in your network you get access to extensive views of available state information, with little or no additional work. If the included reporting capabilities aren't enough, you can export data from the system as a CSV file.

What's Next?

Now that you know who the players are and what they can do, be sure to check back next week to look at how the F5 products generate logs, how these technologies deal with them, and some testing results. To give you more of an idea of what's to come, I'll leave you with a look at the facts that will be delivered to the reporting systems from the F5 device(s) to see how they're handled:

Virtual server accessed, client IP address, client port, LB decision results, HTTP host, HTTP username, user-agent string, content encoding, requested URI, requested path, content type, content length, request time, server string, server port, status code, device identifier, referrer, host header, response time, VLAN ID, IP protocol, IP type of service, connection end time, packets, bytes, anything sent to a dashboard, firewall messages, client source geography, extended application log data, health information for back-end filers, audit logs, SNMP trap information, dedup efficacy, compression codec efficacy, WOM error counters, link characteristics as known, system state.

Logging and Reporting Toolkit Series: Part One | Part Three

BIG-IP Logging and Reporting Toolkit - part three
In the first couple installments of this series we've talked about what we're trying to accomplish, the vendor options we have available to us, their offerings, some strengths and weaknesses of each, etc. In this installment, we're actually going to roll up our sleeves and get to work. We'll look at how to get things working in a couple different use cases, including scripts, screenshots and config goodies. Before we get too far ahead of ourselves, though, first things first: get the BIG-IP to send messages to syslog.

Logging & Reporting Toolkit - Part 1
Logging & Reporting Toolkit - Part 2
Logging & Reporting Toolkit - Part 3
Logging & Reporting Toolkit - Part 4

BIG-IP v9:

syslog {
   remote server 10.10.200.30
}

BIG-IP v10:

syslog {
   remote server {
      splunk {
         host 10.11.100.30
      }
   }
}

This will send all syslog messages from the BIG-IP to the Splunk server; both BIG-IP system messages and any messages from iRules. If you're interested in having iRules log to the Splunk server directly, you can use HSL statements or log statements with a destination host defined. For example, RULE_INIT sets ::SplunkHost to "10.10.200.30", and then in the iRules event you're interested in you assemble $log_message and send it to the log with log $::SplunkHost $log_message. A good practice is to also record it locally on something like local0 in case the message doesn't make it to the Splunk host. For Splunk to receive the message you have to create a Data Input on udp:514 for the log statement. To cover both HSL and log statements, I'd recommend creating tcp:514 and udp:514 data inputs on Splunk. http://www.splunk.com/base/Documentation/4.0.2/Admin/Monitornetworkports covers this. We'll get to the scripts part in a bit; first...

W3C offload case

Now that BIG-IP is set up to send messages to Splunk and Splunk is set up to listen for them, let's see what it looks like when put together. Open the Splunk Search application and enter 'w3c-client-logging sourcetype=udp:514' into the search bar.
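If you want to sanity-check the udp:514 data input before wiring up the iRule, a quick stand-in for the iRule's log statement can come from anywhere that speaks UDP. This is a minimal sketch, not F5 code: the host, port, and field names mirror the example above, and build_message simply emits key-value pairs of the kind Splunk auto-extracts.

```python
import socket

SPLUNK_HOST = "10.10.200.30"   # same collector address as the syslog stanza above
SPLUNK_PORT = 514              # matches the udp:514 Data Input created in Splunk

def build_message(virtual_server, client_ip, detail):
    # Key-value pairs, which Splunk picks apart without extra configuration
    return f"vs={virtual_server} client={client_ip} detail={detail}"

def send_to_splunk(message, host=SPLUNK_HOST, port=SPLUNK_PORT):
    # Fire-and-forget UDP datagram, like the iRule's `log <host> <msg>` statement
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(message.encode("utf-8"), (host, port))
    finally:
        sock.close()

if __name__ == "__main__":
    send_to_splunk(build_message("vip_www", "192.0.2.10", "request-complete"))
```

If the message shows up in a `sourcetype=udp:514` search, the input is working and the iRule traffic will land in the same place.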
Here's one of the things that makes Splunk really easy to work with: it recognized the key-value pairings in the log message without any configuration needed on my part. Next, I opened the Pick fields box and selected user_agent and added it to the list of fields I'm interested in; it now shows up alongside the log message, and I can build a report on it by clicking on the arrow.

The engineer in us wants to use technical terms to accurately convey the precise information we want to distribute. Splunk makes it easy to bridge the gap from technical terms to terms that are meaningful to non-engineers. So, for example, a BIG-IP admin knows what this iRule is and what it's called (in this case w3c-client-logging), but those could be foreign concepts to folks in the Creative Services department who only want to know what browsers people are using to access a website. So, let's employ some natural language too. The w3c-client-logging rule records a message when an HTTP transaction completes; a request and a response. So, let's call it what it is. On your Splunk system, open the $SPLUNKHOME/etc/system/local/eventtypes.conf file and add this:

[httpTransaction]
search = "Rule w3c-client-logging"

You might need to restart Splunk for this change to take effect. Now, let's go back to the search console and try out our new event type. This is a basic usage of event types in Splunk; you can learn more here: http://www.splunk.com/base/Documentation/4.0.2/Admin/Eventtypesconf. With transforms.conf and props.conf you can also effectively rename the attributes, so lb_server could be called webServer instead. Now that we have a custom event based off our search string, all we have to do is click the dropdown arrow next to userAgent (in this case) and select the report option from the dropdown.
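As a hedged sketch of that lb_server-to-webServer rename, one way it might look is a search-time extraction that captures the old value under the new name. The stanza name and regex here are hypothetical, not from the article, so adjust them for your environment:

```ini
# $SPLUNKHOME/etc/system/local/transforms.conf
[rename_lb_server]
REGEX = lb_server=(\S+)
FORMAT = webServer::$1

# $SPLUNKHOME/etc/system/local/props.conf
[udp:514]
REPORT-rename = rename_lb_server
```

After a restart, searches against the udp:514 sourcetype should show a webServer field carrying the lb_server value.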
Here's the output we'd see:

Heh – lookit' that; nagios is the most frequent visitor…

Network Event Data Stream Case

Now that we've seen the W3C example, let's take a look at another example that's much richer: comma-delimited, with no keys, just values. This changes things considerably. Let's look at the Network Event Data Stream specification and see how it's been implemented as an iRule.

iRule - http://devcentral.f5.com/s/wiki/default.aspx/iRules/NEDSRule.html
Doc - http://devcentral.f5.com/s/downloads/techtips/NedsF5v1.doc

Since this is an information-rich data source, conveyed from the BIG-IP to the Splunk server using comma-separated values, it takes a few more simple steps for Splunk to be able to extract the fields just like it did for the key-value pairs. Open up $SPLUNKHOME/etc/system/local/transforms.conf and insert this:

[extract_neds.f5.conn.start.v1_csv]
DELIMS = ","
FIELDS = "EventID","Device","Flow","DateTimeSecs","IngressInterface","Protocol","DiffServ","TTL","PolicyName","Direction"

[extract_neds.f5.conn.end.v1_csv]
DELIMS = ","
FIELDS = "EventID","Device","Flow","DateTimeSecs","PktsIn","PktsOut","BytesIn","BytesOut"

[extract_neds.f5.http.req.v1_csv]
DELIMS = ","
FIELDS = "EventID","Device","Flow","DateTimeSecs","Request","Host","URI","UserName","UserAgent"

[extract_neds.f5.http.resp.v1_csv]
DELIMS = ","
FIELDS = "EventID","Device","Flow","DateTimeSecs","Reply","ResponseCode","ContentType","ContentLength","LoadBalanceTarget","ServerFlow"

This names each list of information we're interested in, indicates that the fields in the message are comma-delimited, and names the fields. You can name the fields as appropriate for your environment. Save the file.
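To see what DELIMS/FIELDS is doing under the hood, the same split-and-name step is easy to reproduce outside Splunk. This sketch uses the conn.end field list from the stanza above; the sample line is hypothetical (real NEDS messages come from the iRule linked earlier):

```python
import csv
import io

# Field names copied from the extract_neds.f5.conn.end.v1_csv stanza above
CONN_END_FIELDS = ["EventID", "Device", "Flow", "DateTimeSecs",
                   "PktsIn", "PktsOut", "BytesIn", "BytesOut"]

def parse_conn_end(line):
    # Same job Splunk's DELIMS/FIELDS settings do: split on commas, pair with names
    values = next(csv.reader(io.StringIO(line)))
    return dict(zip(CONN_END_FIELDS, values))

# Hypothetical sample message for illustration only
sample = "neds.f5.conn.end.v1,bigip1,192.0.2.10:49152-203.0.113.5:80,1268064000,12,10,2048,151042"
event = parse_conn_end(sample)
```

Each value lands under the name you chose in transforms.conf, which is exactly what makes the Pick fields list light up in the Search app.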
Next, open $SPLUNKHOME/etc/system/local/props.conf and insert this:

[eventtype::F5connectionStartEvent]
REPORT-extrac = extract_neds.f5.conn.start.v1_csv

[eventtype::F5connectionEndEvent]
REPORT-extrac = extract_neds.f5.conn.end.v1_csv

[eventtype::F5httpRequestEvent]
REPORT-extrac = extract_neds.f5.http.req.v1_csv

[eventtype::F5httpResponseEvent]
REPORT-extrac = extract_neds.f5.http.resp.v1_csv

This instructs the Splunk system to extract the information from the named fields. Save the file. Next, open $SPLUNKHOME/etc/system/local/eventtypes.conf and insert this (the 'sourcetype=udp:514' part is optional; set it up for your environment or omit the search term):

[F5connectionStartEvent]
search = neds.f5.conn.start.v1 sourcetype=udp:514

[F5connectionEndEvent]
search = neds.f5.conn.end.v1 sourcetype=udp:514

[F5httpRequestEvent]
search = neds.f5.http.req.v1 sourcetype=udp:514

[F5httpResponseEvent]
search = neds.f5.http.resp.v1 sourcetype=udp:514

Lastly, this defines the events to extract the data from. Save the file, and restart Splunkd. There are a few processes you can restart to avoid a complete Splunkd restart, but my environment is a lab so I just restarted the whole thing. While Splunkd is restarting, you should attach the NEDS iRule to a BIG-IP virtual server you want to receive data from and send some traffic through the VIP so your Splunk servers will get some data. Now let's navigate back to the Search app in the web UI. In the search bar, enter eventtype=F5connectionEndEvent. I opened the Pick fields box and selected BytesIn, BytesOut, Device, PktsIn and PktsOut. As another way to use Splunk search to report on traffic transiting a BIG-IP, enter eventtype=F5connectionEndEvent | timechart avg(PktsOut) avg(BytesOut) into the search bar. This will generate a table listing the average number of packets transmitted from the VIP and the average number of bytes transmitted by the VIP for the default 10s time period.
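The arithmetic behind that timechart is nothing exotic: per time bucket, average each requested field over the matching events. Here's a rough stand-in for a single bucket (not Splunk's implementation, just the same calculation, with made-up event values):

```python
def timechart_avg(events, *fields):
    # Rough analogue of `| timechart avg(PktsOut) avg(BytesOut)` over one bucket
    out = {}
    for f in fields:
        vals = [float(e[f]) for e in events if f in e]
        out[f"avg({f})"] = sum(vals) / len(vals) if vals else None
    return out

# Hypothetical F5connectionEndEvent values for two connections
events = [
    {"PktsOut": "10", "BytesOut": "1500"},
    {"PktsOut": "30", "BytesOut": "4500"},
]
```

Splunk repeats this per bucket across the search window, which is what turns raw conn.end events into the trend table.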
I mentioned at the top that there would be more to come about the script input. F5 recently added dashboards for WAN Optimization and Access Policy management. One thing that I wish the dashboards provided is a historic view of the data, so I can see how my infrastructure is changing over time as I upgrade applications and add more traffic to my network. Full disclosure: this BIG-IP interface isn't a supported interface for anything other than the dashboard. Using BIG-IP 10.1 with a full WAN Optimization license, Perl and Splunk, here's how I did it.

1) Place this script (http://devcentral.f5.com/s/downloads/techtips/text-dashboard-log.pl) somewhere on your Splunk system and mark it executable. I put mine in $SPLUNKHOME/bin/scripts.
2) Ensure you have the proper Perl modules installed for the script to work.
3) Add BIG-IP logon data to lines 58 and 59. The user must be able to access the dashboard.
4) Configure a data input for the Splunk server to get the dashboard data. My Splunk system retrieves a data set every 2 minutes. I've added in 2 collectors, one for each side of my WOM iSession tunnel.

After getting all this set up and letting it run for a while, if you navigate back to your Search console you should see a new sourcetype show up called BIG-IP_Acceleration_Dashboard. Clicking on the BIG-IP_Acceleration_Dashboard sourcetype displays the log entries sent to the Splunk system. Splunk recognizes the key-value pairings and has automatically extracted the data and created entries in the Pick fields list. That's a lot of data! Basically it's the contents of the endpoint_isession_stat table and the endpoint data; you can get this on the CLI via 'tmctl endpoint_isession_stat' and 'b endpoint remote'. Now I can easily see that from basically March 8 until now my WOM tunnels were only down for about 4 minutes.
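The core trick of a scripted input like that Perl collector is simply printing timestamped key=value lines to stdout, which Splunk reads and auto-extracts like any other event. This is a minimal sketch of that output shape, not the article's script; the counter names are made up to resemble the dashboard stats:

```python
import time

def emit_stats(stats):
    # Splunk scripted inputs just read stdout; key=value pairs get auto-extracted
    ts = time.strftime("%Y-%m-%d %H:%M:%S")
    pairs = " ".join(f"{k}={v}" for k, v in stats.items())
    return f"{ts} {pairs}"

if __name__ == "__main__":
    # Hypothetical counters standing in for the endpoint_isession_stat table
    print(emit_stats({"lzo_out_uses": 120, "deflate_out_uses": 45, "null_out_uses": 3}))
```

Point a scripted data input at something like this on a 2-minute interval and each run becomes one indexed event, which is what makes the historic view possible.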
Another interesting report I've built from here is the efficacy of adaptive compression for the data transiting my simulated WAN, charting lzo_out_uses, deflate_out_uses and null_out_uses over time. Last, but certainly not least, there's the Splunk for F5 Networks application available via http://www.splunk.com/wiki/Apps:Splunk_for_F5. You should definitely install it if you're an ASM or PSM user.

Logging and Reporting Toolkit Series: Part One | Part Two