2022 DevCentral MVP Announcement
Congratulations to the 2022 DevCentral MVPs! Without users who take time from their busy days to share their experience and knowledge with others, DevCentral would be more of a corporate news site than an actual user community. To that end, the DevCentral MVP Award is given annually to an outstanding group of individuals: the experts in the technical F5 user community who go out of their way to engage with the user community. The award is our way of recognizing their significant contributions, because while all of our users collectively make DevCentral one of the top community sites around and a valuable resource for everyone, MVPs regularly go above and beyond in assisting fellow F5 users. We understand that 2021 was difficult for everyone, and we are extra-grateful to this year's MVPs for going out of their way to help others. MVPs get badges in their DevCentral profiles so everyone can see that they are recognized experts. This year’s MVPs will receive a glass award, certificate, exclusive thank-you gifts, and invitations to exclusive webinars and behind-the-scenes looks at things like roadmaps, new product sneak-previews, and innovative concepts in development. The 2022 DevCentral MVPs are: Aditya K Vlogs AlexBCT Amine_Kadimi Austin_Geraci Boneyard Daniel_Wolf Dario_Garrido David.burgoyne Donamato 01 Enes_Afsin_Al FrancisD iaine jaikumar_f5 Jim_Schwartzme1 JoshBecigneul JTLampe Kai Wilke Kees van den Bos Kevin_Davies Lionel Deval (Lidev) LouisK Mayur_Sutare Neeeewbie Niels_van_Sluis Nikoolayy1 P K Patrik_Jonsson Philip Jönsson Rob_Carr Rodolfo_Nützmann Rodrigo_Albuquerque Samstep SanjayP ScottE Sebastian Maniak Stefan_Klotz StephanManthey Tyler.Hatton
2021 DevCentral MVP Announcement
Congratulations to the 2021 DevCentral MVPs! The DevCentral MVP Award is given annually to an exclusive group of expert users in the technical community who go out of their way to engage with the community by sharing their experience and knowledge with others. This is our way of recognizing their significant contributions, because while all of our users collectively make DevCentral one of the top community sites around and a valuable resource for everyone, MVPs regularly go above and beyond in assisting fellow F5 users both on- and offline. We understand that 2020 was difficult for everyone, and we are extra-grateful to this year's MVPs for going out of their way to help others. MVPs get badges in their DevCentral profiles so everyone can see that they are recognized experts (you'll also see this if you hover over their name in a thread). This year’s MVPs will receive a glass award, certificate, exclusive thank-you gifts, and invitations to exclusive webinars and behind-the-scenes looks at things like roadmaps and new product sneak-previews. The 2021 DevCentral MVPs (by username) are: · Andy McGrath · Austin Geraci · Amine Kadimi · Boneyard · Dario Garrido · EAA · FrancisD · Hamish Marson · Iaine · Jad Tabbara (JTI) · jaikumar_f5 · JG · JuniorC · Kai Wilke · Kees van den Bos · Kevin Davies · Leonardo Souza · lidev · Manthey · Mayur Sutare · Nathan Britton · Niels van Sluis · Patrik Jonsson · Philip Jönsson · Piotr Lewandowski · Rob_carr · Samir Jha · Sebastian Maniak · TimRiker · Vijay · What Lies Beneath · Yann Desmaret · Youssef
BIG-IP Logging and Reporting Toolkit - part one
Joe Malek, one of the many awesome engineers here at F5, took it upon himself to delve deeply into a very interesting but often unsung part of the BIG-IP advanced configuration world: logging and reporting. It’s my great pleasure to get to share with you his awesome study and the findings therein, along with (eventually) a toolkit to help you get started in the world of custom log manipulation. If you’ve ever questioned or been curious about your options when it comes to information gathering and reporting, this is definitely something you should read. There will be multiple parts, so stay tuned. This one is just the intro.
Logging & Reporting Toolkit - Part 1
Logging & Reporting Toolkit - Part 2
Logging & Reporting Toolkit - Part 3
Logging & Reporting Toolkit - Part 4
Description
F5 products occupy critical positions in application delivery infrastructure. They serve as gateways, proxies, accelerators and traffic flow arbiters. In these roles customer expectations vary for the degree and amount of event information recorded. Several opportunities exist within our current product capabilities for our customers and partners to produce and consume log messages from and via F5 products. Efforts to date include generating W3C-style log messages on LTM via iRules, close ASM integration with leading vendors (requires askf5 login), and creating relationships with those vendors to best serve our customers. Significant capabilities exist for customers and partners to create their own logging and reporting solutions.
Problems and opportunity
In the many products offered by F5, there exists a variety of logging structures. The common log protocols used to emit messages by F5 products are Syslog (requires askf5 login) and SNMP (requires askf5 login), along with built-in iRules capabilities. Though syslog-ng is commonplace, software components tend to vary in transport, verbosity, message formatting and sometimes syslog facility.
This can result in a high degree of data density in our logs, and messages our systems emit can vary from version to version.[i] The combination of these factors results in a challenge that requires a coordinated solution for customers who are compelled by regulation, industry practice, or business process to maintain log management infrastructure that consumes messages from F5 devices.[ii] By leveraging the unique product architecture of TMOS, which shares its knowledge about networks and applications, along with capabilities built into iRules, BIG-IP can provide much of this information to log management infrastructure in a simple and knowledgeable manner. In effect, we can emit messages about appliance state and offload many message logging tasks from application servers. Based on our connection knowledge we can also improve the utility and value of information obtained from vendor-provided log management infrastructure.[iii]
Objectives and success criteria
The success criteria for including an item in the toolkit are: 1. A capability to deliver reports on select items using the leading platforms without requiring core development work on an F5 product. 2. An identified extensibility capability for future customization and report building.
Assumptions and dependencies
Vendors to include in the toolkit are Splunk, Q1Labs and PresiNET. ASM logging and reporting is sufficient and does not need further explanation. Information to be included in sample reports should begin to assist in diagnostic activities, demonstrate the ROI of including F5 devices in an infrastructure, and advise on when F5 devices are nearing capacity. Vendor products must be able to accept event data emitted by F5 products; this means that some vendors might have more comprehensive support than others. Products currently supported but not in active development are not eligible for inclusion in the toolkit; examples are older versions of BIG-IP and FirePass, and all WANJet releases.
Some vendor products will require code modifications on the vendor’s side to understand the data F5 products send them. [i] As a piece of customer evidence, Microsoft implemented several logging practices around version 9.1. When they upgraded to version 9.4 their log volume increased several-fold because F5 added log messages and changed existing messages. As a result, the existing message taxonomy had to be deprecated, and we forced them to redesign filters and reports and to create a new set of logging practices. [ii] Regulations such as the Sarbanes-Oxley Act, Gramm-Leach-Bliley Act, Federal Information Security Management Act, PCI DSS, and HIPAA. [iii] It is common for F5 products to manipulate connections via OneConnect, NATs and SNATs. These operations are unknown to external log collectors, and pose a challenge when assembling a complete view of the network connections between a client and a server via an F5 device for a single application transaction. What’s Next? In the next installment we’ll get into the details of the different vendors in question, their offerings, how they work and integrate with BIG-IP, and more. Logging and Reporting Toolkit Series: Part Two | Part Three
2020 DevCentral MVP Announcement
Congratulations to the 2020 DevCentral MVPs! The DevCentral MVP Award is given annually to an exclusive group of expert users in the technical community who go out of their way to engage with the community by sharing their experience and knowledge with others. This is our way of recognizing their significant contributions, because while all of our users collectively make DevCentral one of the top community sites around and a valuable resource for everyone, MVPs regularly go above and beyond in assisting fellow F5 users both on- and offline. MVPs get badges in their DevCentral profiles so everyone can see that they are recognized experts (you'll also see this if you hover over their name in a thread). This year’s MVPs will receive a glass award, certificate, thank-you gift, and an invitation to attend the MVP Summit at Agility 2020 as guests of F5. The 2020 DevCentral MVPs (by username) are: · Andy McGrath · Austin Geraci · Boneyard · Dario Garrido · FrancisD · Hamish Marson · Iaine · Jad Tabbara (JTI) · jaikumar_f5 · JG · Jinshu · Joel Newton · Kai Wilke · Kees van den Bos · Kevin Davies · Kevin Worthington · Lee Sutcliffe · Leonardo Souza · Manthey · Michael Jenkins · Nathan Britton · Nicolas Destor · Niels van Sluis · Patrik Jonsson · Philip Jönsson · Piotr Lewandowski · Rob_carr · Samir Jha · Tim Rupp · TimRiker · Vijay · What Lies Beneath · Yann Desmaret · Youssef Make sure to check out the MVP page for more info about the program and the MVPs themselves.
2019 DevCentral MVP Announcement
Congratulations to the 2019 DevCentral MVPs! The DevCentral MVP Award is given to a select group of exemplary people in the technical community who actively engage and share their experience and knowledge with others. We recognize their significant contributions to our community and the larger technical industry, and we want to say thank you. While all of our users collectively make DevCentral one of the top community sites around and a valuable resource for everyone, MVPs regularly go above and beyond in assisting fellow F5 users both on- and offline. It all starts with a single post… MVPs all get badges in their DevCentral profiles so everyone can see that they’re in the presence of greatness (you'll also see it if you hover over their name in a thread). This year’s MVPs will receive a certificate, award, and thank-you gift, access to select Beta programs, and the devout gratitude of the users they've helped as well as the DevCentral team here at F5. The 2019 DevCentral MVPs (by username) are: Andy McGrath Austin Geraci Boneyard De coug Fulmetal Hamish Marson Iaine jaikumar_f5 Jie Gao Jinshu Joel King Joel Newton JTI Kai Wilke Kees van den Bos Kevin Davies Lee Sutcliffe Leonardo Souza Manthey Mark Wall Nathan Britton Nicolas Destor Niels van Sluis Patrik Jonsson Philip Jönsson Piotr Lewandowski Rhazi Youssef Rob_carr Samir Jha Stanislas Piron Tim Rupp Vijay What Lies Beneath Yann Desmaret Make sure to check out the MVP page for more info about the program and the MVPs themselves. DevCentral MVPs – thank you for all your contributions!
BIGdiff - A Little Help For Software Upgrades
Published on behalf of DevCentral MVP Leonardo Souza
If you have been to F5 Agility in Boston and attended my presentation, you should already have an idea of what I will talk about in this article, but you will learn more here, so continue reading. If you haven’t heard of BIGdiff yet, have you been living on Mars? Don’t worry, I will explain what it is and how it can help you with software upgrades, as well as anything else you find it useful for. It is not an AI that will do the upgrade for you, but it will help you with the upgrade.
Challenges
These are the challenges BIGdiff addresses: You are upgrading an F5 device with 1,000 virtual servers and 1,000 wide IPs. How do you know if you have the same number of virtual servers and wide IPs after the upgrade? How do you know if you have the same number of available virtual servers and wide IPs after the upgrade? If the number of available virtual servers or wide IPs changed after the upgrade, how can you find what changed?
Existing Solutions
First Challenge: There are multiple solutions already for this challenge. For both LTM and GTM, you can take a screenshot of the statistics before the upgrade and compare after the upgrade. For LTM, Statistics > Module Statistics > Local Traffic. For GTM, Statistics > Module Statistics > DNS > GSLB. This is from 13.1.0.1, but I think this has existed since v9, and it will be in a similar place in all versions. Another option is the qkview and iHealth combination; iHealth will show you configuration totals, but it is mainly LTM and does not show you GTM objects. Network Map is yet another option. However, the network map is only for LTM. Also, it is a map that starts from a virtual server, so a pool that is not linked to a virtual server will not count in the totals.
Second Challenge: The statistics also tell you the status of the objects, so that solution works for both challenges.
Third Challenge: There is no automated way to get this.
You could run multiple tmsh commands to get the status before the upgrade, or just generate a qkview that will run those commands for you. However, you will still need to compare the objects one by one. If the only slot you got for the software upgrade was 3am on a Sunday, I am sure you will miss some objects or fall asleep.
Solution
I hope you are thinking the same: computers don’t need to sleep, and they are better and faster than humans at comparing two strings or numbers (which are basically 0s and 1s, so they are not that smart). So, the conclusion is simple: let the computer do the work of comparing objects while you drink another coffee to keep yourself awake to complete the software upgrade. The idea is simple: get the list of objects, and their respective status, before and after the software upgrade, then compare them and report the result. In this context, an object is any entity that has a status on a BIG-IP device that may be affected by the software upgrade. Looking at the BIG-IP modules, that translates to LTM and GTM objects, for example virtual servers and wide IPs. That is where the BIGdiff script comes in to help you and automate that process. You run BIGdiff before the upgrade, upgrade the device, and run it again after the upgrade. The script will then generate an HTML file with the results.
Technical Bits
BIGdiff is a bash script and uses the dialog program to generate the graphical menus. Dialog is a common program for CLI menus and is what F5 uses for the config command, for example. The script uses snmpwalk to query the device locally for the object status, because so far that has been the fastest option I have tested. That generates essentially the same text file before and after the upgrade; those text files are then used to compare the objects. The script generates the results in HTML format, with tables.
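The before-and-after comparison at the heart of this approach can be sketched in a few lines of Python. This is a simplified illustration, not the actual BIGdiff code (which is bash working on snmpwalk output); the two-column "object_name status" file format used here is an assumption for the sketch:

```python
def load_status(path):
    """Parse lines of 'object_name status' into a dict,
    e.g. '/Common/vs_www available'."""
    status = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2:
                status[parts[0]] = parts[1]
    return status

def compare(before_file, after_file):
    """Report objects that disappeared, appeared, or changed status."""
    before = load_status(before_file)
    after = load_status(after_file)
    missing = sorted(set(before) - set(after))
    added = sorted(set(after) - set(before))
    changed = sorted(o for o in before if o in after and before[o] != after[o])
    return missing, added, changed
```

Run against snapshots taken before and after the change window, this reports exactly the objects a human would otherwise have to eyeball one by one at 3am.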
If something already exists and does the job well, there is no reason not to use it, so the script uses the TableFilter JavaScript library, which provides filter functionality for HTML tables. You just need to have the JavaScript library file in the same folder as the HTML file, and the magic will happen. If you don’t need the filter functionality, you don’t need the JavaScript library, and static tables will be presented. The script is optimized to use mainly bash functionality, to be as fast as possible. I tested the script comparing 13 thousand objects, and it completed the task in a couple of minutes. 13K objects is a really big configuration, so even if the device you plan to run the script on has a large configuration, that will be just a couple of minutes in your change window to run the script.
Support
The script only supports BIG-IP software; there is no support for EM or BIG-IQ. The reason is simple: there is no use case for that software. Versions 11.x.x/12.x.x/13.x.x/14.x.x were tested and are supported. As new versions are released, I will be testing to see if any change is needed to support them. LTM objects are supported and will be listed even if LTM is not provisioned, as the majority of the other modules use LTM internally. GTM objects and partitions are also supported.
Using BIGdiff
1. Go to the code share link: BIGdiff
2. Download the tablefilter.js file, if you want to use the table filter functionality as described above.
3. Download bigdiff.sh, which is the script file.
4. On the F5 device, create a folder in /shared/tmp, as /shared is shared between all volumes.
5. Upload the file bigdiff.sh to the F5 device.
6. Change the file permission to make it executable: chmod +x bigdiff.sh
7. Run the script: ./bigdiff.sh
8. Run the script before the upgrade.
9. Upgrade the F5 device.
10. Run the script after the upgrade.
11. Download the file ending in .html from the F5 device.
12. Open the HTML file with your favourite browser.
Make sure you have tablefilter.js in the same folder as the HTML file, if you want the filter functionality.
Other Use Cases
The reason I wrote the script was to help with software upgrades, but you are not limited to software upgrades. You can use the script to compare the objects after you have done something; that can be an upgrade or something else. You can use the script for consolidations, for example when two devices will be replaced by a single device. You run the script on the old devices and merge the txt files that are created with the list of objects. Import the configuration into the new device, then upload the script and the merged txt file you created. Run the script on the new device, and it will report whether the objects have the same status as on the old devices. Another use case is for major changes. You can run the script, make the changes, and run the script again. The script will then tell you if you broke something.
Silent Mode
Silent mode is mainly to be used to integrate with other tools. The image above explains how to use it.
Conclusion
Read the information on the code share page about known issues. I hope you find the script useful.
Congratulations to the 2018 DevCentral MVPs!
We’re excited to announce the 2018 DevCentral MVPs - our largest group of MVPs to date! The DevCentral MVP Award is given to a select group of exemplary people in the technical community who actively engage and share their experience and knowledge with others. We recognize their significant contributions to our community and the larger technical industry, and we want to say thank you. While all of our users collectively make DevCentral one of the top community sites around and a valuable resource for everyone, MVPs regularly go above and beyond in assisting others with their independent expertise. Besides rocking the MVP badge in their DevCentral profiles, this year’s MVPs will receive a certificate, award, and thank-you gift, access to select Beta programs, and are invited to attend and participate in Agility Boston as honored guests. The 2018 DevCentral MVPs (by username) are: Boneyard Hamish Marson Hannes Rapp JTI Jinshu Joel Newton Kai Wilke Kevin Davies Leonardo Souza MrPlastic Nathan Britton Niels Van Sluis Patrik Jonsson Piotr Lewandowski Rob_carr Stanislas Piron Steven Iveson Vijay Yann Desmarest Make sure to check out the MVP page for more info about the program and the MVPs themselves. 2018 MVPs – we salute and thank you, and we know the community at large thanks you as well!
BIG-IP Logging and Reporting Toolkit – part three
In the first couple installments of this series we’ve talked about what we’re trying to accomplish, the vendor options we have available to us, their offerings, some strengths and weaknesses of each, etc. In this installment, we’re actually going to roll up our sleeves and get to getting. We’ll look at how to get things working in a couple different use cases including scripts, screenshots and config goodies. Before we get too far ahead of ourselves though, first things first: get the BIG-IP to send messages to syslog.
Logging & Reporting Toolkit - Part 1
Logging & Reporting Toolkit - Part 2
Logging & Reporting Toolkit - Part 3
Logging & Reporting Toolkit - Part 4
BIG-IP v9:
syslog { remote server 10.10.200.30 }
BIG-IP v10:
syslog { remote server { splunk { host 10.11.100.30 } } }
This will send all syslog messages from the BIG-IP to the Splunk server; both BIG-IP system messages and any messages from iRules. If you’re interested in having iRules log to the Splunk server directly you can use HSL statements or log statements with a destination host defined. For example, RULE_INIT runs set ::SplunkHost "10.10.200.30", and then in the iRules event you’re interested in you assemble $log_message and send it to the log with log $::SplunkHost $log_message. A good practice would be to also record it locally on something like local0 in case the message doesn’t make it to the Splunk host. For Splunk to receive the message you have to create a Data Input on udp:514 for the log statement. To cover both HSL and log statements I’d recommend creating tcp:514 and udp:514 data inputs on Splunk. http://www.splunk.com/base/Documentation/4.0.2/Admin/Monitornetworkports covers this. We’ll get to the scripts part in a bit, first…
W3C offload case
Now that BIG-IP is set up to send messages to Splunk and Splunk is set up to listen to them, let’s see what it looks like when put together. Open the Splunk Search application and enter ‘w3c-client-logging sourcetype=udp:514’ into the search bar.
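Before a change window it can be handy to confirm that the udp:514 data input is actually listening by emitting a test message from any host. A minimal Python sketch follows; the target address and the message text here are examples for illustration, not taken from the article:

```python
import socket

def send_syslog(message, host="127.0.0.1", port=514, facility=16, severity=6):
    """Send one RFC 3164-style syslog message over UDP.
    facility 16 is local0; severity 6 is informational."""
    pri = facility * 8 + severity  # syslog priority value
    datagram = "<%d>%s" % (pri, message)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(datagram.encode("utf-8"), (host, port))
    sock.close()
    return datagram

# A w3c-style key-value test message, similar to what the iRule would emit
send_syslog('Rule w3c-client-logging: client_ip=10.0.0.1 user_agent="curl"')
```

If the test message shows up in a ‘sourcetype=udp:514’ search, the data input is wired up correctly.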
Here’s one of the things that makes Splunk really easy to work with: it recognized the key-value pairings in the log message without any configuration needed on my part. Next, I opened the Pick fields box and selected user_agent and added it to the list of fields I’m interested in; it now shows up alongside the log message and I can build a report on it by clicking on the arrow. The engineer in us wants to use technical terms to accurately convey the precise information we want to distribute. Splunk makes it easy to bridge the gap from technical terms to terms that are meaningful to non-engineers. So, for example, a BIG-IP admin knows what this iRule is and what it’s called (in this case w3c-client-logging), but those could be foreign concepts to folks in the Creative Services department who only want to know what browsers people are using to access a website. So, let’s employ some natural language too. The w3c-client-logging rule records a message when an HTTP transaction completes; a request and a response. So, let’s call it what it is. On your Splunk system open up the $SPLUNKHOME/etc/system/local/eventtypes.conf file and add this:
[httpTransaction]
search = "Rule w3c-client-logging"
You might need to restart Splunk for this change to take effect. Now, let’s go back to the search console and try out our new event type. This is a basic usage of event types in Splunk; you can learn more here: http://www.splunk.com/base/Documentation/4.0.2/Admin/Eventtypesconf . With transforms.conf and props.conf you can also effectively rename the attributes, so lb_server could be called webServer instead. Now that we have a custom event based off our search string, all we have to do is click the dropdown arrow next to userAgent (in this case) and select the report option from the dropdown.
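Splunk's automatic key-value extraction can be approximated offline when you want to sanity-check what fields a given message will yield before shipping it. A rough Python sketch (the sample log line and field names are invented for illustration, and this is far simpler than Splunk's real extractor):

```python
import re

# key=value pairs; values may be bare tokens or double-quoted strings
KV_PATTERN = re.compile(r'(\w+)=("([^"]*)"|\S+)')

def extract_fields(line):
    """Return a dict of key=value pairs found in a log line."""
    fields = {}
    for key, raw, quoted in KV_PATTERN.findall(line):
        # use the unquoted capture when the value was double-quoted
        fields[key] = quoted if raw.startswith('"') else raw
    return fields

line = 'Rule w3c-client-logging: lb_server=10.1.1.10:80 status=200 user_agent="Mozilla/5.0"'
print(extract_fields(line))
```

Anything this sketch misses is a hint that Splunk may also need help, via transforms.conf, to pull the field out cleanly.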
Here's the output we'd see: Heh – lookit’ that; nagios is the most frequent visitor…
Network Event Data Stream Case
Now that we've seen the W3C example, let's take a look at another example whose data is much richer, and comma-delimited. With no keys, just values, this changes things considerably. Let’s look at the Network Event Data Stream specification and see how it’s been implemented as an iRule.
iRule - http://devcentral.f5.com/s/wiki/default.aspx/iRules/NEDSRule.html
Doc – http://devcentral.f5.com/s/downloads/techtips/NedsF5v1.doc
Since this is an information-rich data source, conveyed from the BIG-IP to the Splunk server using comma-separated values, it takes a few more simple steps for Splunk to be able to extract the fields just like it did for the key-value pairs. Open up $SPLUNKHOME/etc/system/local/transforms.conf and insert this:
[extract_neds.f5.conn.start.v1_csv]
DELIMS = ","
FIELDS = "EventID","Device","Flow","DateTimeSecs","IngressInterface","Protocol","DiffServ","TTL","PolicyName","Direction"
[extract_neds.f5.conn.end.v1_csv]
DELIMS = ","
FIELDS = "EventID","Device","Flow","DateTimeSecs","PktsIn","PktsOut","BytesIn","BytesOut"
[extract_neds.f5.http.req.v1_csv]
DELIMS = ","
FIELDS = "EventID","Device","Flow","DateTimeSecs","Request","Host","URI","UserName","UserAgent"
[extract_neds.f5.http.resp.v1_csv]
DELIMS = ","
FIELDS = "EventID","Device","Flow","DateTimeSecs","Reply","ResponseCode","ContentType","ContentLength","LoadBalanceTarget","ServerFlow"
This names each list of information we’re interested in, indicates that the fields in the message are comma-delimited, and names the fields. You can name the fields for whatever is appropriate for your environment. Save the file.
Next, open $SPLUNKHOME/etc/system/local/props.conf and insert this:
[eventtype::F5connectionStartEvent]
REPORT-extrac = extract_neds.f5.conn.start.v1_csv
[eventtype::F5connectionEndEvent]
REPORT-extrac = extract_neds.f5.conn.end.v1_csv
[eventtype::F5httpRequestEvent]
REPORT-extrac = extract_neds.f5.http.req.v1_csv
[eventtype::F5httpResponseEvent]
REPORT-extrac = extract_neds.f5.http.resp.v1_csv
This instructs the Splunk system to extract the information from the named fields. Save the file. Next, open $SPLUNKHOME/etc/system/local/eventtypes.conf and insert this (the ‘sourcetype=udp:514’ part is optional – set it up for your environment or omit the search term):
[F5connectionStartEvent]
search = neds.f5.conn.start.v1 sourcetype=udp:514
[F5connectionEndEvent]
search = neds.f5.conn.end.v1 sourcetype=udp:514
[F5httpRequestEvent]
search = neds.f5.http.req.v1 sourcetype=udp:514
[F5httpResponseEvent]
search = neds.f5.http.resp.v1 sourcetype=udp:514
Lastly, this defines the events to extract the data from. Save the file, and restart Splunkd. There are a few processes you can restart to avoid a complete Splunkd restart, but my environment is a lab so I just restarted the whole thing. While Splunkd is restarting you should attach the NEDS iRule to a BIG-IP virtual server you want to receive data from and send some traffic through the VIP so your Splunk servers will get some data. Now let’s navigate back to the Search app in the web UI. In the search bar, enter eventtype=F5connectionEndEvent . I opened the Pick fields box and selected BytesIn, BytesOut, Device, PktsIn and PktsOut . As another way to use the Splunk search to report on traffic transiting a BIG-IP, enter eventtype=F5connectionEndEvent |timechart avg(PktsOut) avg(BytesOut) into the search bar. This will generate a table for you listing the average number of packets transmitted from the VIP and the average number of bytes transmitted by the VIP for the default 10s time period.
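Outside Splunk, the same comma-separated NEDS records can be split into named fields with a few lines of Python, reusing the field order from the transforms.conf stanzas. The sample record below is invented for illustration; only the field names come from the configuration above:

```python
import csv
from io import StringIO

# Field order mirrors the extract_neds.f5.conn.end.v1_csv stanza
CONN_END_FIELDS = ["EventID", "Device", "Flow", "DateTimeSecs",
                   "PktsIn", "PktsOut", "BytesIn", "BytesOut"]

def parse_conn_end(record):
    """Split one comma-separated connection-end record into named fields."""
    values = next(csv.reader(StringIO(record)))
    return dict(zip(CONN_END_FIELDS, values))

# An invented sample record, for illustration only
sample = "neds.f5.conn.end.v1,bigip1,10.0.0.1:49152-10.0.0.2:80,1270000000,12,10,2048,8192"
print(parse_conn_end(sample)["BytesOut"])
```

This is handy for spot-checking what a collector will see before the events ever reach the indexer.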
I mentioned more to come about the script input at the top of the message. F5 recently added dashboards for WAN optimization and Access Policy management. One thing that I wish the dashboards provided is a historic view of the data so I can see how my infrastructure is changing over time as I upgrade applications and add more traffic to my network. Full disclosure: this BIG-IP interface isn’t a supported interface for anything other than the dashboard. Using BIG-IP 10.1 with a full WAN Optimization license, Perl and Splunk, here’s how I did it.
1) Place this script (http://devcentral.f5.com/s/downloads/techtips/text-dashboard-log.pl) somewhere on your Splunk system and mark it executable – I put mine in $SPLUNKHOME/bin/scripts
2) Ensure you have the proper Perl modules installed for the script to work
3) Add BIG-IP logon data to lines 58 and 59 – the user must be able to access the dashboard
4) Configure a data input for the Splunk server to get the dashboard data.
My Splunk system retrieves a data set every 2 minutes. I’ve added in 2 collectors, one for each side of my WOM iSession tunnel. After getting all this set up and letting it run for a while, if you navigate back to your Search console you should see a new Sourcetype show up called BIG-IP_Acceleration_Dashboard. Clicking on the BIG-IP_Acceleration_Dashboard sourcetype displays the log entries sent to the Splunk system. Splunk recognizes the key-value pairings and has automatically extracted the data and created entries in the Pick fields list. That’s a lot of data! Basically it’s the contents of the endpoint_isession_stat table and the endpoint data – you can get this on the CLI via ‘tmctl endpoint_isession_stat’ and ‘b endpoint remote’ . Now I can easily see that from basically March 8 until now my WOM tunnels were only down for about 4 minutes.
Another interesting report I’ve built from here is the efficacy of adaptive compression for the data transiting my simulated WAN, by charting lzo_out_uses, deflate_out_uses and null_out_uses over time. Last, but certainly not least – there’s the Splunk for F5 Networks application available via http://www.splunk.com/wiki/Apps:Splunk_for_F5. You should definitely install it if you’re an ASM or PSM user. Logging and Reporting Toolkit Series: Part One | Part Two
BIG-IP Logging and Reporting Toolkit - part two
In this second installment of Joe Malek’s delve into some advanced configuration concepts, and more specifically the logging and reporting world, we take a look at the vendors that he investigated, what they offer, and how they integrate with F5 products. He discusses some of the capabilities of each, their strengths and weaknesses, and some of the things you might use each for. If you’ve been wondering what your options are for more in-depth log analysis and reporting, take a look to see what his thoughts are on a couple of the leading solutions.
Logging & Reporting Toolkit - Part 1
Logging & Reporting Toolkit - Part 2
Logging & Reporting Toolkit - Part 3
Logging & Reporting Toolkit - Part 4
Vendor descriptions:
Splunk - http://www.splunk.com/
“IT Search” is Splunk’s self-identified core functionality. Splunk’s software contains multiple ways to obtain data from IT systems, indexes the data and reports on the data using a web interface. Splunk has invested in creating a Splunk for F5 application containing dashboard-style views into log data for F5 products. Currently included in the application are LTM, GTM, ASM, APM and FirePass. The application is able to consume log messages sent to Splunk servers via syslog – and by extension iRules using High Speed Logging. Splunk is deployed as software to be installed on a customer-provided system. Windows, Mac OS, Linux, AIX, and BSD variants are all supported host operating systems. Splunk can receive messages via files, syslog, SNMP, SCP, SFTP, FTP, generic network ports, FIFO queues, directory crawling and scripting. Splunk has a very intuitive and “Google-like” interface allowing users to easily navigate and report on data in the system. Users are able to define reports, indices, dashboards and applications to present data as an organization requires. Upon receipt of data, Splunk can process the data according to built-in training or a user-constructed taxonomy.
Q1 Labs - http://www.q1labs.com/
Q1 Labs brings a product called QRadar to market. QRadar combines functionality commonly found in SIEM, log management and network behavior analysis products. Q1 products are able to consume event messages as well as record information on a network connection basis. QRadar is available as a pay-for appliance and a no-charge edition in a virtual machine. The differences between the two editions are the SIEM and advanced correlation functionality; the no-charge edition is a log management tool only. QRadar can receive messages via syslog, SNMP, JDBC connectors, SFTP, FTP, SCP, and SDEE. Additionally, QRadar can obtain network flow information in a port mirror/span mode. Customizing data views and report building are based on regular expressions. Customers can create their own regular expressions and build upon pre-configured expressions for reporting. In the SIEM module, QRadar includes approximately 250 events that can be sequenced together into complex “Offenses” in a manner similar to building a rule in Microsoft Outlook. “Universal Device Support Modules” can be created and shared among Q1 Labs customers.
PresiNET – http://www.presinet.com/
Whereas tcpdump is like an x-ray for your network, Total View One is like an MRI. Total View One enables customers to maximize the use of infrastructure resources and network performance. Total View One sensors collect protocol state information by tracking connections through a network. This is commonly done out-of-line from traffic streams via port mirroring or network tap technologies. Currently PresiNET has implemented the NEDS specification, which enables Total View One to receive messages from BIG-IP products and process them as if they’d come from a PresiNET sensor. This integration started with the NEDS iRule and specification, and from this PresiNET created their own parser. PresiNET products are delivered as appliances in both a central unit and sensor unit mode.
Optionally, one may subscribe to PresiNET on a managed-service basis. After you install a Total View One product in your network, you get access to extensive views of the available state information, with little or no additional work. If the included reporting capabilities aren't enough, you can export data from the system as a CSV file.

What's Next?

Now that you know who the players are and what they can do, be sure to check back next week for a look at how the F5 products generate logs, how these technologies deal with them, and some testing results. To give you more of an idea of what's to come, I'll leave you with a look at the facts that will be delivered to the reporting systems from the F5 device(s) to see how they're handled: virtual server accessed, client IP address, client port, LB decision results, HTTP host, HTTP username, user-agent string, content encoding, requested URI, requested path, content type, content length, request time, server string, server port, status code, device identifier, referrer, host header, response time, VLAN ID, IP protocol, IP type of service, connection end time, packets, bytes, anything sent to a dashboard, firewall messages, client source geography, extended application log data, health information for back-end filers, audit logs, SNMP trap information, dedup efficacy, compression codec efficacy, WOM error counters, link characteristics as known, and system state.

Logging and Reporting Toolkit Series: Part One | Part Three

DevCentral Announces Inaugural MVP Class
DevCentral as a community relies upon the talents and contributions of its users to help peers and those who are new to F5 products and technologies. Without users willing to take a moment from their busy day to help resolve the problems of complete strangers, DevCentral would be far less a community, resembling more of a corporate news site. Due in large part to the contributions of a select few, the community continues to flourish. They are in the trenches facing challenges daily, and it is their expertise the community craves. Without their help, some of our members might still be struggling to get the most out of their F5 gear, or more likely, the core DevCentral members would be working much longer hours as we attempted to assist our ever-growing user base. We recognize the time and effort put into the DevCentral community. To that end, we have created the DevCentral MVP program to honor those who, without incentive, contribute to the greater good of our community.

The 2010 DevCentral MVP Class (by username):

hoolio - I have to quote Drago from Rocky 4 here: "He is not human, he is a piece of iron." Mr. Forums has more posts than Joe, Colin, and me--combined.
bhattman - 2009 iRules contest winner and ever-present in the forums and wiki.
hamish - Contributor in the iControl and monitoring/management forums. Contributed several slick templates for the F5 host template.
hwidjaja - Perl nut, which excites Colin. Active in several forums.
smp - He's gotta change his username. I type snmp every time. Really--every time. Also an active contributor in several of the forums.
naladar - Not only a member of our community, but carries the F5 love out to the world with his own TheF5Guy blog. Interview guest on podcast 107.
mikejo - Unashamed FirePass specialist. Active contributor in said forum.

If you want to hear more about the MVPs, podcast 117 was a dedicated highlight show. Also, make sure to check out the MVP profile pages.
MVPs – we salute and thank you, and we know the community at large thanks you as well!