Security Automation with F5 BIG-IP and Event Driven Ansible
Updated (September 19th 2023)

INTRODUCTION TO EVENT DRIVEN SECURITY:

Event Driven Security is one of the projects I have been working on for the last year or so. The idea of creating automated security that can react the way I would react in a given situation is fascinating to me, and then comes the BIG question: "Can I code it?"

Originally, our solution utilized the ELK stack (Elasticsearch, Logstash, Kibana): Elasticsearch was my logging and monitoring tool, Kibana was the front-end GUI that helped me visualize the data and set up the watchers for my webhook triggers, and Logstash acted as an intermediary that received my webhooks and executed the Ansible-related code. While using Logstash, simple Ansible code ran with no issues; however, when things got more complex (for example, taking payloads from Elastic and feeding them through Logstash to my playbooks), I would sometimes get intermittent results. Some of this could be my lack of knowledge of the software, but for me it needed to be simple! As my Event Driven Security grew more complex, I needed a product that could keep up with those needs. Luckily, in October 2022 that product was announced: Event Driven Ansible. It meant I no longer needed Logstash, since I could call Ansible-related code directly, and it even takes in JSON-based webhooks to trigger the code, so I was already halfway there!

CODE FOR EVENT DRIVEN SECURITY:

Now that I have set up the preface, let's get down to the good stuff! I have set up a GitHub repository for the code I have been testing with, https://github.com/f5devcentral/f5-bd-ansible-eda-demo, which is free for all to use; please feel free to take, fork, and expand it!

There are some cool things worth noting in the code, specifically the transformation of the watcher output into something usable in playbooks. Each time the watcher finds matches for its filter, the transform copies the source IP from every hit into a comma-separated list, then sends that list as a variable within the webhook along with the message that triggers the code. Here is the code I am referring to, which transforms and sends the payload in an Elastic watcher; see the full code in the GitHub repo (GitHub repo --> elastic --> watch_blocked_ips.json):

```json
"actions": {
  "logstash_exec": {
    "transform": {
      "script": {
        "source": """
          def hits = ctx.payload.hits.hits;
          def transform = '';
          for (hit in hits) {
            transform += hit._source.src_ip;
            transform += ', '
          }
          return transform;
        """,
        "lang": "painless"
      }
    },
    "webhook": {
      "scheme": "http",
      "host": "10.1.1.12",
      "port": 5000,
      "method": "post",
      "path": "/endpoint",
      "params": {},
      "headers": {},
      "body": """{
        "message": "Ansible Please Block Some IPs",
        "payload": "{{ctx.payload._value}}"
      }"""
    }
  }
}
```
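Based on the webhook body above, the event that Event Driven Ansible receives would look roughly like the following sketch (the IP addresses are made-up examples). Note the trailing ", " left by the transform; it is the reason the playbook later removes the last, empty element of the array.

```yaml
# Hypothetical view of the event as seen by the EDA webhook source (values are examples only)
event:
  payload:
    message: "Ansible Please Block Some IPs"
    payload: "203.0.113.10, 203.0.113.45, "
```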
In the Ansible rulebook, the big thing to note is that between the pre-GA code (which was all CLI ansible-rulebook based) and the GA version (EDA GUI), rulebooks are now set up to call Ansible Automation Platform (AAP) templates. In the code below you can see that it looks for an existing template "Block IPs" in the organization "Default" in order to run correctly (GitHub repo --> rulebooks --> webhook-block-ips.yaml):

```yaml
---
- name: Listen for events on a webhook
  hosts: all

  ## Define our source for events
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000

  ## Define the conditions we are looking for
  rules:
    - name: Block IPs
      condition: event.payload.message == "Ansible Please Block Some IPs"
      action:
        run_job_template:
          name: "Block IPs"
          organization: "Default"
```

This shows my template setup in Ansible Automation Platform 2.4.x. There is one CRITICAL piece of information I want to share about using EDA GA with AAP 2.4: within the template you MUST tick the "Prompt on launch" checkbox in the Variables section. This allows the payload from EDA (given to it by Elastic) to pass on to the playbook.

In the playbook you can see how we extract the payload from the event using the ansible_eda variable. This lets us pull in the event that was sent from Elastic to Event Driven Ansible and then on to the Ansible Automation Platform template, and narrow it down to the specific fields we need (message and payload). From there we create an array from that payload so we can pass it along to our F5 code to start adding blocked IPs to the WAF policy (GitHub repo --> playbooks --> block-ips.yaml):

```yaml
---
- name: ASM Policy Update with Blocked IPs
  hosts: lb
  connection: local
  gather_facts: false

  vars:
    Blocked_IPs_Events: "{{ ansible_eda.event.payload }}"
    F5_VIP_Name: VS_WEB
    F5_VIP_Port: "80"
    F5_Admin_Port: "443"
    ASM_Policy_Name: "WAF-POLICY"
    ASM_Policy_Directory: "/tmp/f5/"
    ASM_Policy_File: "WAF-POLICY.xml"

  tasks:
    - name: Setup provider
      ansible.builtin.set_fact:
        provider:
          server: "{{ ansible_host }}"
          user: "{{ ansible_user }}"
          password: "{{ ansible_password }}"
          server_port: "{{ F5_Admin_Port }}"
          validate_certs: "no"

    - name: Blocked IP Events From EDA
      debug:
        msg: "{{ Blocked_IPs_Events.payload }}"

    - name: Create Array from BlockedIPs
      ansible.builtin.set_fact:
        Blocked_IPs: "{{ Blocked_IPs_Events.payload.split(', ') }}"
      when: Blocked_IPs_Events is defined

    - name: Remove Last Object from Array which is empty array object
      ansible.builtin.set_fact:
        Blocked_IPs: "{{ Blocked_IPs[:-1] }}"
      when: Blocked_IPs_Events is defined
...
```
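The remaining tasks in the repository's playbook take this Blocked_IPs list and apply it to the WAF policy on the BIG-IP. Purely as a hedged illustration of consuming that list (this is not the repository's code, and the data group name is hypothetical), follow-on tasks using the f5networks.f5_modules collection might look something like this:

```yaml
# Illustrative sketch only -- the repo playbook updates the WAF/ASM policy itself.
- name: (Illustrative) Build data group records from the blocked IP list
  ansible.builtin.set_fact:
    blocked_ip_records: "{{ blocked_ip_records | default([]) + [{'key': item}] }}"
  loop: "{{ Blocked_IPs }}"
  when: Blocked_IPs is defined

- name: (Illustrative) Publish blocked IPs to an internal address data group
  f5networks.f5_modules.bigip_data_group:
    name: blocked_ips            # hypothetical data group name
    internal: true
    type: address
    records: "{{ blocked_ip_records }}"
    provider: "{{ provider }}"
  when: Blocked_IPs is defined
```

A sketch like this keeps the block list in a data group that an iRule or LTM policy could reference; the repository's playbook instead reaches the same goal by updating the WAF policy directly.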
All of this combined creates a well-oiled setup, and with the code and the flows in place we can now create proactive, event-based security! Here is the flow of the code in the GitHub repo when it is executed:

The F5 BIG-IP pushes all of its monitoring logs to Elastic. Elastic takes all of that data and stores it while utilizing a watcher with its filters and criteria. The watcher finds something that matches its criteria and sends the webhook with the payload to Event Driven Ansible. Event Driven Ansible's rulebook triggers and calls a template within Ansible Automation Platform, passing along the payload given to it by Elastic. Ansible Automation Platform's template executes a playbook that secures the F5 BIG-IP using the payload given to it by EDA (originally from Elastic). In the end we go full circle, starting at the F5 BIG-IP and ending at the F5 BIG-IP!

Full Demonstration Video:

Our full demonstration video, posted September 13th 2023, is available on demand via https://www.f5.com/company/events/webinars/f5-and-red-hat-3-part-demo-series. The page does require registration, and there you can check out our three-part series. The one related to this lab is "Event-Driven Automation and Security with F5 and Red Hat Ansible" (Proactive Security with F5 & Event Driven Ansible video demo).

LINKS TO CODE: https://github.com/f5devcentral/f5-bd-ansible-eda-demo

Trying to Fill Some Giant Shoes...
Tuesday, May 24th marks the first DCC for me as an "official" cast member! I'm taking over the vacant chair left by Mr. John Wagnon as our DCC security specialist. Having seen John in the community for years and years, it seems like a daunting task, but I've got some great people to learn from, for sure.

Back in 2008 or so, I was introduced to F5 as a customer and, at the time, was very much an Open Source zealot. I shied away from purchasing anything at all besides servers to run Linux on. I was truly moved by F5's community website and began my transition to becoming an F5 zealot when I found this gem of an article that completely changed the health and performance of my massive-scale SaaS implementation.

It's awesome to have my first livestream be a Top5, as the crew had me test out my chops with a Top5 for March. It was the first thing I did in my new home studio, though, so I had to come to grips with moving on from my old Blue Yeti mic and get down to business with a spare Shure SM-57 I had from my snare drum.

I'm truly honored to join this cast of characters, buulam, JRahm & PSilva, and will work to deliver the best community-focused security content possible.

DevCentral's Featured Member for January - Daniel Wolf
Our Featured Member series is a way for us to show appreciation and highlight active contributors in our community. Communities thrive on interaction, and our Featured Series gives you some insight into some of our most engaged folks. DevCentral member and newly minted MVP Daniel Wolf is our Featured Member to kick off 2022! Let's catch up with Daniel!

DevCentral: First, please explain to the DC community a little about yourself, what you do and why it is important.

Daniel: I'm an enthusiast. When I was younger, I was a passionate handball player. Later I also became a passionate handball coach for children. Recently I became an avid cook. Almost ten years ago I moved to the Balkans, to the city of Skopje. I fell in love with the region, the people, and the Balkan way of life. I even found my wife there. Almost three years ago my family and I moved back to my hometown, a small city close to Frankfurt in Germany. And I have always been a tech enthusiast.

DevCentral: You've continued to be an active contributor in the DevCentral community. What keeps you involved?

Daniel: I find it interesting to read what challenges others from the community are facing. In case I know an answer to their question, I will reply. In case I don't know the answer, but I think I can figure it out with a reasonable effort, I will try to. It helps me to broaden my knowledge, but even more importantly, to share the answers with others.

DevCentral: Tell us a little about the technical expertise you have.

Daniel: The first time I touched a computer was an Intel 286 with DOS 5.0. After a couple of weeks, I deleted a couple of seemingly useless files to install Monkey Island. Since then, I became pretty good at solving computer problems. Nowadays they are called projects, and the problems are often much more complex. The last technology I was responsible for, before I decided to become an F5 consultant, was Microsoft SharePoint and other .NET web apps. Roughly 7 years ago, there was a project to protect an online banking application with a WAF. So, unlike many other F5 specialists, I am not a network specialist but a web server dude.

DevCentral: You are a Senior Network Professional at Controlware GmbH. Can you describe your typical workday, how you manage work/life balance and the strong support of F5 solutions? How has the pandemic impacted your work?

Daniel: I appreciate that there is not a typical workday. I enjoy a challenging mix of projects, presales activities and occasional L3 support. The most fun for me are projects where I can help my customers protect their apps and APIs. In the past two years we also had a lot of projects building, improving, or scaling out identity-aware access solutions. So, on a typical day, I'd say I am still solving computer problems. The pandemic has improved my work/life balance: I don't have to drive to the office anymore, and I can have a walk in the field during lunchtime or enjoy a coffee with my wife (she's also working from home).

DevCentral: Do you have any F5 Certifications? If so, why are these important to you and how have they helped with your career?

Daniel: I have had the 401 since last year. The 401 was a very good exam; passing it required an understanding of many F5 solutions but also of broader security concepts. My employer promotes getting certified and allowed me to prepare during working hours.

DevCentral: Describe one of your biggest customer challenges and how the community helped in that situation.

Daniel: I'd say that this is one of my current projects.
We are migrating from an end-of-life proxy platform to BIG-IP, and we are building a lot of the content switching and rewrite features with iRules. DevCentral is a goldmine if you are looking for iRule documentation and code snippets.

DevCentral: Lastly, if you weren't doing what you're doing, what would be your dream career? Like, when you were a kid, what did you want to be when you grew up?

Daniel: I always wanted to be some sort of IT guy. I think I am fine where I am now; I enjoy my work. If I were granted a wish, carpenter would be an alternative. I like the idea that, at the end of each day, you can see what you have built with your own hands. The things I build are meaningful as long as there is a browser available.

---Thanks Dan! We really appreciate your willingness to share with the DevCentral Community.

Stay connected with Daniel and Controlware on social media:
Controlware GmbH on LinkedIn
Daniel on LinkedIn
Controlware GmbH on the Web

ICYMI on DevCentral - April 2022
DevCentral publishes a ton of content each month, and it's easy for articles, videos, and forum posts to get lost on the timeline. Here's a snapshot of the top posts and videos from April 2022, in case you missed it!

Solved on the Technical Forum
- /var full, with accepted solutions from Lidev and Sebastiansierra
- LTM Local traffic policy to many options!!! - Help needed!, with an accepted solution from Mayur_Sutare
- HTTP Header insert, with an accepted solution from Enes_Afsin_Al
- Request client cert auth based on URL, with an accepted solution from spalande
- I am adding a custom http header but how to print it out using irules?, by David_M, who ended up solving this problem for himself. 🙂

Huge thanks to everyone who offered solutions on the Technical Forum! We love to see folks helping each other solve problems and answer questions.

F5 Technical Articles:
- Cisco ACI Endpoint Learning with a BIG-IP HA Failover by Eric_Ji, who shared lessons learned to guide the design and troubleshooting of BIG-IP HA and failover with Cisco Application Centric Infrastructure (ACI).
- Protect Applications from Spring4Shell (CVE-2022-22965) by warburtr0n, on how to protect against the Spring4Shell vulnerabilities.
- High Availability in a Bare Metal World by Greg_Coward provides guidance for deploying a highly available and scalable application delivery infrastructure on top of Equinix Metal utilizing either VMware or KVM hypervisors.
- AFM Protocol Custom Signatures for Spring4Shell and Spring_Cloud (CVE-2022-22963 and -22965) by James_Affeld, on using AFM Protocol Inspection to detect exploits.
- Configuring BIG-IP AFM firewall policies and rules with Ansible by Leon_Seng, who demonstrates a sample workflow for automating AFM firewall policy configuration with Ansible.
- How to Use F5 Distributed Cloud to Obfuscate Ingress and Egress Traffic by MichaelatF5 navigates government compliance with Cyber Liability Insurance and how to obfuscate ingress and egress traffic.

If you've got suggestions for Technical Articles, drop them below in the comments. The DevCentral team can try to track down an SME to write on that topic.

Demos and Livestreams
- RomanJ explains how to use Ansible and BIG-IQ to automate the steps to clean up a BIG-IP and remove unused or expired certificates and keys.
- JRahm walks through a proof of concept for automating captures, and a road map to take that groundwork and expand your capture horizons.
- Jim MacLeod covers the evolution of cloud services towards distributed, and what benefits of MCN can be realized through this architecture.

See you next month! Huge thanks to everyone who contributed to the community in April. If you have any ideas or suggestions, don't hesitate to pop over and write a comment in our Suggestions box. See you out there in the community!

Making WAF Simple: Introducing the OWASP Compliance Dashboard
Whether you are a beginner or an expert, there is a truth that I want to let you in on: building and maintaining Web Application Firewall (WAF) security policies can be challenging. How much security do you really need? Is your configuration too much or too little? Have you created an operational nightmare? Many well-intentioned administrators will initially enable every available feature, thinking that it provides additional security to the application, when in truth, it is hindering it. How, you may ask? False positives and noise. The more noise and false positives, the harder it becomes to find the real attacks, and the more likely it becomes that you begin disabling features that ARE providing essential security for your applications. So... less is better then? That isn't the answer either; what good are our security solutions if they aren't protecting against anything? The key to success, and what we will look at further in this article, is implementing best-practice controls that are both measurable and manageable for your organization.

The OWASP Application Security Top 10 is a well-respected list of the ten most prevalent and dangerous application-layer attacks that you almost certainly should protect your applications from. By first focusing your security controls on the items in the OWASP Top 10, you are improving the manageability of your security solution and getting the most "bang for your buck". Now, the challenge is: how do you take such a list and build real security protections for your applications?

Introducing the OWASP Compliance Dashboard

Protecting your applications against the OWASP Top 10 is not a new thing; in fact, many organizations have been taking this approach for quite some time. The challenge is that most implementations that claim to "protect" against the OWASP Top 10 rely solely on signature-based protections for only a small subset of the list and provide zero insight into your compliance status. The OWASP Compliance Dashboard, introduced in version 15.0 of BIG-IP Advanced WAF, reinvents this idea by providing a holistic and interactive dashboard that clearly measures your compliance against the OWASP Application Security Top 10. The Top 10 is then broken down into specific security protections, including both positive and negative security controls, that can be enabled, disabled, or ignored directly on the dashboard. We realize that a WAF policy alone may not provide complete protection across the OWASP Top 10; this is why the dashboard also includes the ability to review and track the compliance of best practices outside the scope of a WAF alone, such as whether the application is subject to routine patching or vulnerability scanning.

To illustrate this, let's assume I have created a brand new WAF policy using the Rapid Deployment policy template and accepted all default settings. What compliance score do you think this policy might have? Let's take a look.

Interesting. The policy is 0/10 compliant, and only A2 Broken Authentication and A3 Sensitive Data Exposure have partial compliance. Why is that? The Rapid Deployment template should include some protections by default, shouldn't it? Expanding A1 Injection, we see a list of protections required in order to be marked as compliant. Hovering over the list of attack signatures, we see that each category of signature is in "Staging" mode. Aha! Signatures in staging mode are not enforced and therefore cannot block traffic. Until the signature set is enforced, we do not mark that protection as compliant.
For those of you who have mistakenly left entities such as signatures in staging for longer than desired, this is also a GREAT way to quickly find them. I also told you we could interact with the dashboard to influence the compliance score, so let's demonstrate that. Each item can be enforced DIRECTLY on the dashboard by selecting the "Enforce" checkmark on the right. There is no need to go into multiple menus; you can enforce all of these security settings on a single page and preview the compliance status immediately. If you are happy with your selection, click on "Review & Update" to perform a final review of what the dashboard will be configuring on your behalf before you click on "Save & Apply Policy".

Note: Enforcing signatures before a period of staging may not be a good idea depending on your environment. Staging provides a period to assess signature matches in order to eliminate false positives. Enforcing these signatures too quickly could result in denying legitimate traffic.

Let's review the compliance of our policy now with these changes applied. As you can see, A1 Injection is now 100% compliant, and other categories have also had their scores updated as a result of enforcing these signatures. The reason for this is that there is overlap in the security controls applied across these other categories. Not all security controls can be fully implemented directly via the dashboard, and as mentioned previously, not all security controls are signature-based. A7 Cross-Site Scripting was recalculated as 50% compliant with the signatures we enforced previously, so let's take a look at what else it requires for full compliance. The options available to us are to IGNORE the requirement, meaning we will be granted full compliance for that item without implementing any protection, or to manually configure the protection referenced. We may want to ignore a protection if it is not applicable to the application or if it is not in scope for your deployment. Be mindful that ignoring an item means you are potentially misrepresenting the score of your policy; be very certain that the protection you are ignoring is in fact not applicable before doing so. I've selected to ignore the requirement for "Disallowed Meta Characters in Parameters", and my policy is now 100% compliant for A7 Cross-Site Scripting (XSS).

Lastly, we will look at items within the dashboard that fall outside the scope of WAF protections. Under A9 Using Components with Known Vulnerabilities, we are presented with a series of best practices such as "Application and system hardening", "Application and system patching" and "Vulnerability scanner integration". Using the dashboard, you can click on the checkmark to the right for "Requirement fulfilled" to indicate that your organization implements these best practices. By doing so, the OWASP Compliance score updates, providing you with real-time visibility into the compliance of your application.

Conclusion

The OWASP Compliance Dashboard on BIG-IP Advanced WAF is a perfect fit for the security administrator looking to fine-tune and measure either existing or new WAF policies against the OWASP Application Security Top 10. The OWASP Compliance Dashboard not only tracks WAF-specific security protections but also includes general best practices, allowing you to use the dashboard as your one-stop shop to measure the compliance of ALL your applications.
For many applications, protection against the OWASP Top 10 may be enough, as it provides you with best practices to follow without having to worry about which features to implement and where. Note: keep in mind that some applications may require additional controls beyond the protections included in the OWASP Top 10 list. For teams heavily embracing automation and CI/CD pipelines, logging into a GUI to perform changes likely does not sound appealing. In that case, I suggest reading more about our declarative Advanced WAF policy framework, which can be used to represent WAF policies in any CI/CD pipeline. Combine this with the OWASP Compliance Dashboard for an at-a-glance assessment of your policy and you have the best of both worlds. If you're not already using the OWASP Compliance Dashboard, what are you waiting for?

Look out for Bill Brazill, Victor Granic and myself (Kyle McKay) on June 10th at F5 Agility 2020, where we will be presenting and facilitating a class called "Protecting against the OWASP Top 10". In this class, we will be showcasing the OWASP Compliance Dashboard on BIG-IP Advanced WAF further and providing ample hands-on time fine-tuning and measuring WAF policies for OWASP compliance. Hope to see you there!

To learn more, visit the links below.

Links
OWASP Compliance Dashboard: https://support.f5.com/csp/article/K52596282
OWASP Application Security Top 10: https://owasp.org/www-project-top-ten/
Agility 2020: https://www.f5.com/agility/attend

Regional Edge Resiliency Zones and Virtual Sites
Introduction:

This article is a follow-up to my earlier article, F5 Distributed Cloud: Virtual Sites – Regional Edge (RE). In the last article, I talked about how to build custom topologies using Virtual Sites on our SaaS data plane, aka Regional Edges. In this article, we're going to review an update to our Regional Edge architecture. With this new update to Regional Edges, there are some best practices regarding Virtual Sites that I'd like to review.

As F5 has seen continuous growth and utilization of the Distributed Cloud platform, we've needed to expand our capacity, and we have added capacity through many different methods over the years. One strategic approach to expanding capacity is building new POPs. However, even with new POPs, certain regions of the world have a high density of connectivity and will always see higher utilization than other regions. A perfect example of that is Ashburn, Virginia in the United States. Within a POP like Ashburn, with its high density of connectivity and utilization, we could simply "throw compute at it" within common software stacks. That is not what we've decided to do; instead, F5 has decided to provide additional benefits with these capacity expansions by introducing what we're calling "Resiliency Zones".

Introduction to Resiliency Zones:

What is a Resiliency Zone? A Resiliency Zone is simply another Regional Edge cluster within the same metropolitan (metro) area. These Resiliency Zones may be within the same POP, or within a common campus of POPs. The Resiliency Zones are made up of dedicated compute structures and have network hardware for the different networks that make up our Regional Edge infrastructure.

So why not follow in AWS's footsteps and call these Availability Zones? Well, while in some cases we may split Resiliency Zones across a campus of data centers in separate physical buildings, that may not always be the design. It is possible that the Resiliency Zones are within the same facility and split between racks. We didn't feel this level of separation provided a full Availability Zone-like infrastructure as AWS has built out. Remember, F5's services are globally significant, while most cloud providers' services are locally significant to a region and a set of Availability Zones (in AWS's case). While we strive to ensure our services are protected from catastrophic failures, the global availability of F5 Distributed Cloud services allows us to be more condensed in our data center footprint within a single region or metro.

I spoke of "additional benefits" above; let's look at those. With Resiliency Zones, we've created the ability to scale our infrastructure both horizontally and vertically within our POPs. We've also created isolated fault and operational domains. I personally believe the operational domain is most critical. Today, when we do maintenance on a Regional Edge, all traffic to that Regional Edge is rerouted to another POP for service. With Resiliency Zones, while one Regional Edge zone is under maintenance, the other Regional Edge zone(s) can handle the traffic, keeping the traffic local to the same POP. In some regions of the world, this is critical to keeping traffic within the same region and country.

What to Expect with Resiliency Zones

Resiliency Zone Visibility: Now that we have a little background on what Resiliency Zones are, what should you expect and look out for? You will begin to see Regional Edges within Console that have a letter associated with them.
For example, "dc12-ash" is the original Regional Edge; you'll also see another Regional Edge, "b-dc12-ash". We will not be appending an "a" to the original Regional Edge. As I write this article, the Resiliency Zones have not been released for routing traffic; they will be soon (June 2025). You can, however, see the first Resiliency Zone today if you use all Regional Edges by default: navigate to a Performance Dashboard for a Load Balancer, look at the Origin Servers tab, and sort/filter for dc12-ash, and you'll see both dc12-ash and b-dc12-ash.

Customer Edge Tunnels: Customer Edge (CE) sites will not terminate their tunnels onto a Resiliency Zone yet. We're working to make sure we have the right rules for tunnel terminations in different POPs. We can also give customers the option to choose whether they want tunnels in the same POP across Resiliency Zones. Once the logic and capabilities are in place, we'll allow CE tunnels to terminate on Resiliency Zone Regional Edges.

Site Selection and Virtual Sites: Resiliency Zones should not be chosen as the only site or virtual site available for an origin. We've built some safeguards into the UI that will give you an error if you try to assign Resiliency Zone RE sites without the original RE site in the same association. For example, you cannot apply b-dc12-ash without including dc12-ash in an origin configuration. If you're unfamiliar with Virtual Sites on F5's Regional Edge data planes, please refer to the link at the top of this article. When setting up a Virtual Site, we use a site selector label. In my earlier article, I highlight the labels that are associated with each site; the ones we see used most often are Country, Region, and SiteName. If you choose to use SiteName, your Virtual Site will not automatically add the new Resiliency Zone. For example, say your site selector uses SiteName in dc12-ash: when b-dc12-ash comes online, it will not be matched and automatically used for additional capacity. Whereas if you used "country in USA" or "region in Ashburn", then dc12-ash and b-dc12-ash would both be available to your services right away.

Best Practices for Virtual Sites: What is the best practice when it comes to Virtual Sites? I wouldn't be in tech if I didn't say "it depends". It is ultimately up to you how much control you want versus how much operational overhead you're willing to carry. Some people may say they don't want to manage their virtual sites every time F5 changes capacity, whether that means adding new Regional Edges in new POPs or adding Resiliency Zones into existing POPs. Others may say they want to control when traffic starts routing through new capacity and infrastructure to their origins; often this control is to ensure customer-controlled security (firewall rules, network security groups, geo-IP databases, etc.) is approved and in place. The more control you want, the more operations you will maintain.

What would I recommend? I would go less granular in how I set up Regional Edge Virtual Sites. I would want as much compute capacity as close as possible to the clients of my applications behind F5 services, and I would want attackers, bots, and traffic that isn't an actual client to have security applied as close as possible to the source. Lastly, as we see L7 DDoS attacks continue to rise, the more points of presence at which I can provide and scale L7 security, the better my chance of mitigating an attack.
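As a rough illustration of that "less granular" approach, a Regional Edge virtual site that selects on region rather than on site name might look something like the sketch below. The object layout, label keys, and values here are assumptions for illustration only, based on the selector names mentioned above, and are not verified Distributed Cloud syntax.

```yaml
# Hypothetical virtual site sketch -- field names, label keys, and values are illustrative
metadata:
  name: ashburn-regional-edges      # made-up name
  namespace: shared
spec:
  site_type: REGIONAL_EDGE
  site_selector:
    expressions:
      - "region in (Ashburn)"       # matches dc12-ash today and would pick up b-dc12-ash automatically
```

Selecting on SiteName instead (for example, a selector pinned to dc12-ash) would tie the virtual site to that single Regional Edge and would not pick up new Resiliency Zones until you edited the selector.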
To achieve a less granular approach to virtual sites, it is critical to:

- Pay attention to our maintenance notices. If we're adding IP prefixes to our allowed firewall/proxy list of IPs, we will send notice well in advance of the new prefixes becoming active.
- Update your firewall security groups, and verify the prefixes with your geo-IP database provider.
- Understand your client-side/downstream/VIP strategy versus your server-side/upstream/origin strategy, and what the different virtual site models might impact.
- When in doubt, ask. Ask your F5 account team for help or open a support ticket. We're here to help.

Summary:

F5's Distributed Cloud platform needed an additional mechanism for scaling the infrastructure that offers services to its customers. To meet that need, we decided to add capacity through more Regional Edges within a common POP. This strategy offers both F5 and customer operations teams enhanced flexibility. Remember, Resiliency Zones are just another Regional Edge. I hope this article is helpful, and please let me know what you think in the comments below.

Agility sessions announced
Good news, everyone! This year's virtual Agility will have over 100 sessions for you to choose from, aligned to 3 pillars. There will be:

- Breakouts (pre-recorded, 25 minutes, unlimited audience)
- Discussion Forums (live content up to 45 minutes, interactive for up to 75 attendees)
- Quick Hits (pre-recorded, 10 minutes, unlimited audience)

So, what kind of content are we talking about?

If you'd like to learn more about how to Simplify Delivery of Legacy Apps, you might be interested in:
- Making Sense of Zero Trust: what's required today and what we'll need for the future (Discussion Forum)
- Are you ready for a service mesh? (Breakout)
- BIG-IP APM + Microsoft Azure Active Directory for stronger cybersecurity defense (Quick Hits)

If you'd like to learn more about how to Secure Digital Experiences, you might be interested in:
- The State of Application Strategy 2022: A Sneak Peak (Discussion Forum)
- Security Stack Change at the Speed of Business (Breakout)
- Deploy App Protect based WAF Solution to AWS in minutes (Quick Hits)

If you'd like to learn more about how to Enable Modern App Delivery at Scale, you might be interested in:
- Proactively Understanding Your Application's Vulnerabilities (Discussion Forum)
- Is That Project Ready for you? Open Source Maturity Models (Breakout)
- How to balance privacy and security handling DNS over HTTPS (Quick Hits)

The DevCentral team will be hosting livestreams and the DevCentral lounge, where we can hang out and connect, and you can interact directly with session presenters and other technical SMEs. Please go to https://agility2022.f5agility.com/sessions.html to see the comprehensive list, and check back with us for more information as we get closer to the conference.

How I did it - "High-Performance S3 Load Balancing of Dell ObjectScale with F5 BIG-IP"
As AI and data-driven workloads grow, enterprises need scalable, high-performance, and resilient storage. Dell ObjectScale delivers with its cloud-native, S3-compatible design, ideal for AI/ML and analytics. F5 BIG-IP LTM and DNS enhance ObjectScale by providing intelligent traffic management and global load balancing, ensuring consistent performance and availability across distributed environments. This article introduces Dell ObjectScale and its integration with F5 solutions for advanced use cases.

DevCentral's Featured Member for August - Tim Riker
Our Featured Member series is a way for us to show appreciation and highlight active contributors in our community. Communities thrive on interaction, and our Featured Series gives you some insight into some of our most engaged folks. DevCentral MVP Tim Riker is our Featured Member for August! Let's catch up with Tim!

DevCentral: First, please explain to the DC community a little about yourself, what you do and why it is important.

Tim: I have been doing software development and systems administration for many years. I've spent many years doing Linux development, from embedded systems using BusyBox and ucLinux to large computing clusters.

DevCentral: You've continued to be an active contributor in the DevCentral community. What keeps you involved?

Tim: When I started in my current position, there were 34 different BIG-IP nodes to administer and no central tool to view all of them. BigIPReport was a great solution for this: one report view that includes all the virtual servers, pools, nodes, data groups, etc., all searchable in one interface. DevCentral has helped me find solid answers to real-world issues. This has saved time and increased the flexibility of our solutions.

DevCentral: Tell us a little about the technical expertise you have.

Tim: I have a deep understanding of networking, systems, and software development, with development experience in everything from Linux kernel work in C and Android apps in Java to web apps in PHP. I have also been a Linux user and proponent for almost 30 years.

DevCentral: How is the DevCentral MVP experience?

Tim: The contact with peers has been great. I've enjoyed working with Patrik and others in the DevCentral community. The wealth of information from the community on DevCentral has made working with F5 systems easier and more productive.

DevCentral: You are a Sr. Software Engineer and Linux Technologist. Can you describe your typical workday, how you manage work/life balance, and how the recent pandemic has impacted your work?

Tim: As most of the systems I administer are remote anyway, I am blessed in that my work has not been drastically impacted by the pandemic. I work remotely from home now instead of remotely from the office. Video conferencing tools have been very helpful for team communication. Agility being all online was somewhat of a disappointment last year, but the online sessions have been great.

DevCentral: Do you have any F5 Certifications? If so, why are these important to you and how have they helped with your career?

Tim: I don't have any F5 Certifications, but I have attended F5 technical training in person and online.

DevCentral: Describe one of your biggest customer challenges and how the community helped in that situation.

Tim: We have many different teams utilizing the F5s in our organization. Each of them has their own customers and needs. Granting such a large group read/write/admin access was difficult to manage. Using BigIPReport to expose the current configuration and status to a large group has worked well. This has allowed us to limit read/write access and make change tickets more specific to the current configuration.

*Tim has written a central HTTP logging iRule, in use across the organization, that tracks usage as well as performance data. It is available at https://www.devcentral.f5.com/s/articles/logging-irule-1180

DevCentral: Lastly, if you weren't doing what you're doing, what would be your dream career? Or better, when you were a kid, what did you want to be when you grew up?

Tim: I love solving problems and fixing things.
There are so many opportunities to build solutions, and so little time. However, if money were not an issue, I'd be hiking, camping, boating, and enjoying other outdoor activities with family and friends, instead of spending as much time working with technology.

---Thanks Tim! We really appreciate your willingness to share with the DevCentral Community.

Stay connected with Tim on social media:
Tim on LinkedIn
Tim on Twitter
Tim's Website