Introducing AI Assistant for F5 Distributed Cloud, F5 NGINX One and BIG-IP
This article introduces AI Assistant and shows how it improves SecOps and NetOps speed across all F5 platforms (Distributed Cloud, NGINX One, and BIG-IP) by solving the complexities around configuration, analytics, log interpretation, and scripting.

Modern Applications-Demystifying Ingress solutions flavors
In this article, we explore the different ingress services provided by F5 and how those solutions fit within your environment. With the different ingress service flavors, you gain the ability to interact with your microservices at different points, allowing for flexible, secure deployment. The ingress services can be summarized into two main categories:

Management plane:
- NGINX One
- BIG-IP CIS

Traffic plane:
- NGINX Ingress Controller / Plus / App Protect / Service Mesh
- BIG-IP Next for Kubernetes
- Cloud Native Functions (CNFs)
- F5 Distributed Cloud Kubernetes deployment mode

Ingress solutions definitions

In this section we go quickly through the ingress services to understand the concept behind each one, and then move on to the use case comparison.

BIG-IP Next for Kubernetes

Kubernetes' native networking architecture does not inherently support multi-network integration or non-HTTP/HTTPS protocols, creating operational and security challenges for complex deployments. BIG-IP Next for Kubernetes addresses these limitations by centralizing ingress and egress traffic control, aligning with Kubernetes design principles to integrate with existing security frameworks and the broader network infrastructure. This reduces operational overhead by consolidating cross-network traffic management into a unified ingress/egress point, eliminating the need for multiple external firewalls that traditionally require isolated configuration. The solution enables zero-trust security models through granular policy enforcement and provides robust threat mitigation, including DDoS protection, by replacing fragmented security measures with a centralized architecture. Additionally, BIG-IP Next supports 5G Core deployments by managing North/South traffic flows in containerized environments, facilitating use cases such as network slicing and multi-access edge computing (MEC). These capabilities enable dynamic resource allocation aligned with application-specific or customer-driven requirements, ensuring scalable, secure connectivity for next-generation 5G consumer and enterprise solutions while maintaining compatibility with existing network and security ecosystems.

Cloud Native Functions (CNFs)

While BIG-IP Next for Kubernetes enables advanced networking, traffic management, and security functionality, CNFs enable additional advanced services. VNFs and CNFs can be consolidated in the S/Gi-LAN or the N6 LAN in 5G networks. A consolidated approach results in simpler management and operation, reduced operational costs (with TCO reduced by up to 60%), and more opportunities to monetize functions and services. Functions can include DNS, Edge Firewall, DDoS, Policy Enforcer, and more. BIG-IP Next CNFs provide scalable, automated, resilient, manageable, and observable cloud-native functions and applications. They support dynamic elasticity, occupy a smaller footprint with fast restart, and use continuous deployment and automation principles.

NGINX for Kubernetes / NGINX One

NGINX for Kubernetes is a versatile and cloud-native application delivery platform that aligns closely with DevOps and microservices principles. It is built around two primary models:

- NGINX Ingress Controller (OSS and Plus): Deployed directly inside Kubernetes clusters, it acts as the primary ingress gateway for HTTP/S, TCP, and UDP traffic. It supports Kubernetes-native CRDs, and integrates easily with GitOps pipelines, service meshes (e.g., Istio, Linkerd), and modern observability stacks like Prometheus and OpenTelemetry.
- NGINX One/NGINXaaS: This SaaS-delivered, managed service extends the NGINX experience by offloading the operational overhead, providing scalability, resilience, and simplified security configurations for Kubernetes environments across hybrid and multi-cloud platforms.

NGINX solutions prioritize lightweight deployment, fast performance, and API-driven automation. NGINX Plus variants offer extended features like advanced WAF (NGINX App Protect), JWT authentication, mTLS, session persistence, and detailed application-layer observability.

There are some under-the-hood differences as well: BIG-IP Next for Kubernetes and CNFs use F5's own TMM to perform application delivery and security, while NGINX relies on the kernel to perform some network-level functions like NAT, IP tables, and routing. So it's a matter of the architecture of your environment whether you go with one or both options to enhance your application delivery and security experience.

BIG-IP Container Ingress Services (CIS)

BIG-IP CIS works on the management flow. The CIS service is deployed in the Kubernetes cluster, sending information about created Pods to an integrated BIG-IP external to the Kubernetes environment. This allows it to automatically create LTM pools and forward traffic based on pool member health. This service lets application teams focus on microservice development while BIG-IP is updated automatically, allowing for easier configuration management.

Use cases categorization

Let's talk in use-case terms to make this more relatable to the field and to our day-to-day work.

NGINX One
- Access to NGINX commercial products, support for open source, and the option to add WAF.
- Unified dashboard and APIs to discover and manage your NGINX instances.
- Identify and fix configuration errors quickly and easily with the NGINX One configuration recommendation engine.
- Quickly diagnose bottlenecks and act immediately with real-time performance monitoring across all NGINX instances.
- Enforce global security policies across diverse environments.
- Real-time vulnerability management identifies and addresses CVEs in NGINX instances.
- Visibility into compliance issues across diverse app ecosystems.
- Update groups of NGINX systems simultaneously with a single configuration file change.
- Unified view of your NGINX fleet for collaboration, performance tuning, and troubleshooting.
- Automate manual configuration and updating tasks for security and platform teams.

BIG-IP CIS
- Enable self-service ingress HTTP routing and app services selection by subscribing to events to automatically configure performance, routing, and security services on BIG-IP.
- Integrate with the BIG-IP platform to scale apps for availability and enable app services insertion.
- Integrate with the BIG-IP system and NGINX for ingress load balancing.

BIG-IP Next for Kubernetes
- Supports ingress and egress traffic management and routing for seamless integration with multiple networks.
- Enables support for 4G and 5G protocols that are not natively supported by Kubernetes, such as Diameter, SIP, GTP, SCTP, and more.
- Enables security services applied at ingress and egress, such as firewalling and DDoS protection.
- Topology hiding at ingress obscures the internal structure of the cluster.
- As a central point of control, per-subscriber traffic visibility at ingress and egress allows traceability for compliance tracking and billing.
- Support for multi-tenancy and network isolation for AI applications, enabling efficient deployment of multiple users and workloads on a single AI infrastructure.
- Optimize AI factory implementations with BIG-IP Next for Kubernetes on NVIDIA DPUs.

F5 Cloud Native Functions (CNFs)
- Add containerized services, for example firewall, DDoS, and Intrusion Prevention System (IPS) technology based on F5 BIG-IP AFM.
- Ease IPv6 migration and improve network scalability and security with IPv4 address management.
- Deploy as part of a security strategy.
- Support DNS caching and DNS over HTTPS (DoH).
- Support advanced policy and traffic management use cases.
- Improve QoE and ARPU with tools like traffic classification, video management, and subscriber awareness.

NGINX Ingress Controller
- Provide L4-L7 NGINX services within the Kubernetes cluster.
- Manage user and service identities and authorize access and actions with HTTP Basic authentication, JSON Web Tokens (JWTs), OpenID Connect (OIDC), and role-based access control (RBAC).
- Secure incoming and outgoing communications through end-to-end encryption (SSL/TLS passthrough, TLS termination).
- Collect, monitor, and analyze data through prebuilt integrations with leading ecosystem tools, including OpenTelemetry, Grafana, Prometheus, and Jaeger.
- Easy integration with the Kubernetes Ingress API, Gateway API (experimental support), and Red Hat OpenShift Routes.

F5 Distributed Cloud Kubernetes deployment mode

The F5 XC K8s deployment is supported only for Sites running Managed Kubernetes, also known as Physical K8s (PK8s). Deployment of the ingress controller is supported only using Helm. The Ingress Controller manages external access to HTTP services in a Kubernetes cluster using the F5 Distributed Cloud Services Platform. The ingress controller is a K8s deployment that configures the HTTP Load Balancer using the K8s ingress manifest file. The Ingress Controller automates the creation of the load balancer and other required objects, such as the VIP, Layer 7 routes (path-based routing), the advertise policy, and certificates (K8s secrets or automatic custom certificates).

Conclusion

As you can see, the diverse ingress controller tools give you the flexibility to tailor your architecture to organizational requirements and to maintain application delivery and security practices across your application ecosystem.

Related Content and Technical demos
- BIG-IP Next SPK: a Kubernetes native ingress and egress gateway for Telco workloads
- F5 BIG-IP Next CNF solutions suite of Kubernetes native 5G Network Functions
- Deploy WAF on any Edge with F5 Distributed Cloud
- Announcing F5 NGINX Ingress Controller v4.0.0 | DevCentral
- JWT authorization with NGINX Ingress Controller
- My first CRD deployment with CIS | DevCentral
- BIG-IP Next for Kubernetes
- BIG-IP Next for Kubernetes (LA)
- BIG-IP Next Cloud-Native Network Functions (CNFs)
- CNF Home
- F5 NGINX Ingress Controller
- Overview of F5 BIG-IP Container Ingress Services
- NGINX One
Mitigating OWASP API Security Risk: Excessive Data Exposure using F5 XC Platform

This is part of the OWASP API Security Top 10 mitigation series; refer here for an overview of these categories and of F5 Distributed Cloud Platform (F5 XC) Web Application and API Protection (WAAP).

Introduction to Excessive Data Exposure

Application Programming Interfaces (APIs) are the foundation stone of the modern, evolving web applications driving the digital world. They are part of every phase of the product development life cycle, from design and testing to end customers using them in their day-to-day tasks. Without restrictions in place, APIs sometimes expose sensitive data such as Personally Identifiable Information (PII), Credit Card Numbers (CCN), and Social Security Numbers (SSN). Because of this, they are among the most exploited components in cybercrime for gaining access to customer information, which can be sold or used in further exploits like credential stuffing. Most of the time the design stage doesn't include this security perspective and relies on third-party tools to sanitize the data before displaying results to customers. Identifying sensitive information in these huge chunks of API response data is difficult, and most security tools on the market don't support this capability. So instead of relying on third-party tools, it's recommended to follow shift-left strategies and add security as part of the development phase. During this phase, developers must review and ensure that the API returns only required details instead of providing unnecessary properties, to avoid sensitive data exposure.

Excessive data exposure attack scenario 1

To showcase this category, we expose sensitive details like CCN and SSN in one of the product reviews of the Juice Shop application (see the links below for more info).

Overview of Data Guard

Data Guard is an F5 XC load balancer feature which shields responses from exposing sensitive information like CCN/SSN by masking these fields with a string of asterisks (*). Depending on their requirements, customers can configure multiple rules to apply or skip processing for certain paths and routes.

Preventing excessive data exposure using F5 Distributed Cloud

Step 1: Create an origin pool - refer here for more information.
Step 2: Create a Web Application Firewall (WAF) policy - refer here for details.
Step 3: Create an HTTPS load balancer (LB) with the above pool and WAF policy - refer here for more information.
Step 4: Upload your application swagger file and add it to the load balancer - refer here for more details.
Step 5: Configure Data Guard on the load balancer with the desired action and path.
Step 6: Validate that the sensitive data is masked:
- Open Postman or a browser, check the product reviews section/API, and validate that these details are now hidden rather than exposed as in the original application.
- In the Distributed Cloud Console, expand the security event and check the WAF section to understand why these details were masked.

Excessive data exposure attack scenario 2

In this demonstration we use an API-based vulnerable application, VAmPI (a vulnerable API made with Flask that includes vulnerabilities from the OWASP API Security Top 10; for more info follow the repo link). Follow the steps below to bring up the setup:

Step 1: Host the VAmPI application inside a virtual machine.
Step 2: Log in to the XC console, create an HTTP LB, and add the hosted application as an origin server.
Step 3: Access the application to check its availability.
Step 4: Enable API Discovery and configure a sensitive data discovery policy by adding all the compliance frameworks in your HTTP LB config.
Step 5: Hit the vulnerable API endpoint '/users/v1/_debug', which exposes sensitive data like username, password, etc.
Step 6: Navigate to the security overview dashboard in the XC console and select the API Endpoints tab. Check the vulnerable endpoint details.
Step 7: In the Sensitive Data section, click the ellipsis on the right side to get options for action.
Step 8: Clicking the option 'Add Sensitive Data Exposure Rule' automatically adds the entries for the sensitive data exposure rule to your existing LB config. Apply the configuration.
Step 9: Again hit the vulnerable API endpoint '/users/v1/_debug'. The response now shows masked values: all letters changed to 'a' and numbers converted to '1'.
Step 10: Optionally, you can manually configure a sensitive data exposure rule by adding details about the vulnerable API endpoint:
- Log back in to the XC console.
- Start configuring an API Protection rule in the created HTTP LB.
- Click Configure in the Sensitive Data Exposure Rules section.
- Click Add Item to create the first rule.
- In the Target section, enter the path that will respond to the request. Also enter one or more methods with responses containing sensitive information.
- In the Values field in the Pattern section, enter the JSON field value you want to mask. For example, to mask all emails in the array users, enter "users[_].email". Note that an underscore between the square brackets indicates the array's elements.
- Once the above rule is applied, values in the response will be masked as follows: all letters change to a or A (matching case) and all numbers convert to 1.
- Click Apply to save the rule to the list of Sensitive Data Exposure Rules.
- Optionally, click Add Item to add more rules.
- Click Apply to save the list of rules to your load balancer.
Step 11: After completing Step 10, hit the vulnerable API endpoint again. The response again shows masked values, per the configuration done in Step 10.

Conclusion

As seen in the above use cases, sensitive data exposure occurs when an application does not protect sensitive data like PII, CCN, SSN, and auth credentials. Leaking such information may lead to serious consequences, so it is critical for organizations to reduce the risk of sensitive data exposure. As demonstrated above, the F5 Distributed Cloud Platform can help protect against the exposure of such sensitive data with its easy-to-use API security offerings.

For further information check the links below:
- OWASP API Security - Excessive Data Exposure
- OWASP API Security - Overview article
- F5 XC Data Guard Overview
- OWASP Juice Shop
- VAmPI
How To Secure Multi-Cloud Networking with Routing & Web Application and API Protection

Introduction

With the proliferation of intra-cloud networking requirements continuing, organizations are increasingly leveraging multi-cloud solutions to optimize performance, cost, and resilience. However, with this approach comes the challenge of ensuring robust security across diverse cloud and on-prem environments. Secure multi-cloud networking, with advanced web application and API protection services, is essential to safeguard digital assets, maintain compliance, and uphold operational integrity.

Understanding Multi-Cloud Networking

Multi-cloud networking involves orchestrating connectivity between multiple cloud platforms such as AWS, Microsoft Azure, Google Cloud Platform, and others, including on-prem and private data centers. This approach allows organizations to avoid vendor lock-in, enhance redundancy, and tailor services to specific workloads. However, managing networking and web application security across these platforms can be complex due to differing platform security models, configurations, and interfaces.

Key Components of Multi-Cloud Networking
- Inter-cloud Connectivity: Establishing secure connections between multiple cloud providers to ensure seamless data flow and application interoperability.
- Unified Management: Implementing centralized management tools to oversee network configurations, policies, and security protocols across all cloud environments.
- Automated Orchestration: Utilizing automation to provision, configure, and manage network resources dynamically, reducing manual intervention and potential errors.
- Compliance and Governance: Ensuring adherence to regulatory requirements and best practices for data protection and privacy across all cloud platforms.

Securing Multi-Cloud Environments

Security is paramount in multi-cloud networking. With multiple entry points and varying security measures across different cloud providers, organizations must adopt a comprehensive strategy to protect their assets.

Strategies for Secure Multi-Cloud Networking
- Zero Trust Architecture: Implementing a zero-trust model that continuously verifies and validates every request, irrespective of its source, to mitigate risks.
- Encryption: Utilizing advanced encryption methods for data in transit and at rest to protect against unauthorized access.
- Continuous Monitoring: Deploying monitoring tools to detect, analyze, and respond to threats in real time.
- Application Security: Using a common framework for web application and API security reduces the number of steps needed to identify and remediate security risks and misconfigurations across disparate infrastructures.

Web Application and API Protection Services

Web applications and APIs are critical components of modern digital ecosystems. Protecting these assets from cyber threats is crucial, especially in a multi-cloud environment where they may be distributed across various platforms.

Comprehensive Web Application Protection

Web Application Firewalls (WAFs) play a vital role in safeguarding web applications. They filter and monitor HTTP traffic between a web application and the internet, blocking malicious requests and safeguarding against common threats such as SQL injection, cross-site scripting (XSS), and DDoS attacks.

- Advanced Threat Detection: Employing machine learning and artificial intelligence to identify and block sophisticated attacks.
- Application Layer Defense: Providing protection at the application layer, where traditional network security measures may fall short.
- Scalability and Performance: Ensuring WAF solutions can scale and perform adequately in response to varying traffic loads and attack volumes.

Securing APIs in Multi-Cloud Environments

APIs are pivotal for integration and communication between services. Securing APIs involves protecting them from unauthorized access, misuse, and exploitation.

- Authentication and Authorization: Implementing strong authentication mechanisms such as OAuth and JWT to ensure only authorized users and applications can access APIs.
- Rate Limiting: Controlling the number of API calls to prevent abuse and ensure fair usage across consumers.
- Input Validation: Validating input data to prevent injection attacks and ensure data integrity.
- Threat Detection: Monitoring API traffic for anomalies and potential threats, and responding swiftly to mitigate risks.

Best Practices for Secure Multi-Cloud Networking

To effectively manage and secure multi-cloud networks, organizations should adhere to best practices that align with their operational and security objectives.

Adopt a Holistic Security Framework

A holistic security framework encompasses the entire multi-cloud environment, focusing on integration and coordination between different security measures across cloud platforms.

- Unified Policy Enforcement: Implementing consistent security policies across all cloud environments to ensure uniform protection.
- Regular Audits: Conducting frequent security audits to identify vulnerabilities, assess compliance, and improve security posture.
- Incident Response Planning: Developing and regularly updating incident response plans to handle potential breaches and disruptions efficiently.

Leverage Security Automation

Automation can significantly enhance security in multi-cloud environments by reducing human error and ensuring timely responses to threats.

- Automated Compliance Checks: Using automation to continuously monitor and enforce compliance with security standards and regulations.
- Real-time Threat Mitigation: Implementing automated remediation processes to address security threats as they are detected.

Demo: Bringing it Together with F5 Distributed Cloud

Using services in Distributed Cloud, F5 brings everything together. Deploying and orchestrating connectivity between hybrid and multi-cloud environments, Distributed Cloud not only connects these environments but also secures them with universal Web App and API Protection policies. The following solution uses Distributed Cloud Network Connect, App Connect, and Web App & API Protection to connect, deliver, and secure an application with services that exist in different cloud and on-prem environments. The companion video shows how the features in this solution come together.

Bringing it together with Automation

Orchestrating this end to end, including the Distributed Cloud components themselves, is trivial using the combination of GitHub workflow actions and Terraform. The following automation workflow guide and companion article at DevCentral provide all the steps and modular code necessary to build a complete multicloud environment, manage Kubernetes clusters, and deploy the sample functional multi-site application, Arcadia Finance.

GitHub Repository: https://github.com/f5devcentral/f5-xc-terraform-examples/tree/main/workflow-guides/smcn/mcn-distributed-apps-l3

Conclusion

Secure multi-cloud networking, combined with robust web application and API protection services, is vital for organizations seeking to leverage the benefits of a multi-cloud strategy without compromising security.
By adopting comprehensive security measures, enforcing best practices, and leveraging advanced technologies, organizations can safeguard their digital assets, ensure compliance, and maintain operational integrity in a dynamic and ever-evolving cloud landscape.

Additional Resources
- Introducing Secure MCN features on F5 Distributed Cloud
- Driving Down Cost & Complexity: App Migration in the Cloud
- The App Delivery Fabric with Secure Multicloud Networking
- Scale Your DMZ with F5 Distributed Cloud Services
- Seamless Application Migration to OpenShift Virtualization with F5 Distributed Cloud
- Automate Multicloud Networking w/ Terraform: routing and app connect on F5 Distributed Cloud
The table Command: Examples

With version 10.1, we've given the session table some long-sought functionality, and revamped its iRules interface completely to give users a new, cleaner, full-featured way to keep track of global data. We are very proud to introduce the table command... In this last part of the series on the usage of the new table command, we discuss various examples of the use of the table command.

Examples

Now that we've covered all of the pieces of the table command, let's see some larger real-world uses of making them work together, not only simplifying existing iRules, but making new things possible.

Limiting Connections To A VIP

In the counting article, we gave one powerful example iRule that would have been too bulky and/or unreliable to do before: limit a VIP to a certain number of connections. That example can be easily modified to limit those connections by IP address as well. If you compare it to this example:

    when CLIENT_ACCEPTED {
        set tbl "connlimit:[IP::client_addr]"
        set key "[TCP::client_port]"
        if { [table keys -subtable $tbl -count] > 1000 } {
            event CLIENT_CLOSED disable
            reject
        } else {
            table set -subtable $tbl $key "ignored" 180
            set timer [after 60000 -periodic {
                table lookup -subtable $tbl $key
            }]
        }
    }
    when CLIENT_CLOSED {
        after cancel $timer
        table delete -subtable $tbl $key
    }

you can see that only the first two lines changed! We've gone from one global subtable to a subtable for each IP address. This is an example of the power of subtables and using them for scoping.

Blocking DNS Flood Attacks

This example, which blacklists IPs if they make too many DNS queries per second, uses global arrays to keep track of the number of requests made in the current second, as well as the blacklist of IPs:

    when RULE_INIT {
        set ::maxquery 100
        set ::holdtime 600
    }
    when CLIENT_DATA {
        set srcip [IP::remote_addr]
        set curtime [clock second]
        if { [info exists ::blacklist($srcip)] } {
            if { $::holdtime > [expr ${curtime} - $::blacklist($srcip)] } {
                drop
                return
            }
            unset ::blacklist($srcip)
        }
        if { [info exists ::usertable(time,$srcip)] and
             $curtime == $::usertable(time,$srcip) } {
            incr ::usertable(freq,$srcip)
            if { $::usertable(freq,$srcip) > $::maxquery } {
                set ::blacklist($srcip) $curtime
                unset ::usertable(freq,$srcip)
                unset ::usertable(time,$srcip)
                drop
                return
            }
        } else {
            set ::usertable(freq,$srcip) 1
            set ::usertable(time,$srcip) $curtime
        }
    }

It has to do all of the work to remove old data from those arrays manually, so not only can that cause all those array entries to just accumulate in memory, but it demotes the VIP it gets attached to (because it uses global variables). If we use the table command instead, as in the following sketch, a lot of the complexity is handled by the system, so it's now cleaner, simpler, and CMP-compatible. A win all around!

    when RULE_INIT {
        set static::maxquery 100
        set static::holdtime 600
    }
    when CLIENT_DATA {
        set srcip [IP::remote_addr]
        # Drop traffic from IPs that are still blacklisted; the entry's
        # lifetime expires the blacklisting automatically.
        if { [table lookup "blacklist:$srcip"] ne "" } {
            drop
            return
        }
        # Count queries in the current one-second window. The counter
        # entry simply expires on its own, so there is no manual cleanup.
        if { [table incr "qps:$srcip:[clock second]"] > $static::maxquery } {
            table set "blacklist:$srcip" "blocked" indef $static::holdtime
            drop
        }
    }

Limiting POST Requests Per User

Another example, which limits POSTs per user, uses even trickier global array manipulation:
    when RULE_INIT {
        set ::windowSecs 10
    }
    when HTTP_REQUEST {
        if { [HTTP::method] eq "POST" } {
            if { ![HTTP::header exists Authorization] } {
                HTTP::respond 401
                return
            }
            set myUserID [getfield [b64decode [substr [HTTP::header "Authorization"] 6 end]] ":" 1]
            set myMaxRate [findclass $myUserID $::MaxPOSTRates3 " "]
            if { $myMaxRate ne "" } {
                set currentTime [clock seconds]
                if { [info exists ::timestamp($myUserID)] } {
                    set i [expr $currentTime - $::timestamp($myUserID)]
                    if { $i > $::windowSecs } {
                        set i $::windowSecs
                    }
                    while { $i > 0 } {
                        for {set j $::windowSecs} {$j > 0} {set j [expr $j - 1]} {
                            set k [expr $j - 1]
                            set ::count($myUserID.$j) $::count($myUserID.$k)
                        }
                        set ::count($myUserID.0) 0
                        incr i -1
                    }
                    set k 0
                    for {set i 0} {$i < $::windowSecs} {incr i} {
                        incr k $::count($myUserID.$i)
                    }
                    if { $k < $myMaxRate } {
                        incr ::count($myUserID.0)
                        set ::timestamp($myUserID) $currentTime
                    } else {
                        HTTP::respond 302 Location http://asdf:44444/
                    }
                } else {
                    for {set i 0} {$i < $::windowSecs} {incr i} {
                        set ::count($myUserID.$i) 0
                    }
                    set ::timestamp($myUserID) $currentTime
                    set ::count($myUserID.0) 1
                }
            }
        }
    }

It can be very difficult to understand, to say nothing of trying to maintain or alter it. With the table command to do all of the heavy lifting for us, it becomes much simpler, lighter-weight, and CMP-compatible:

    when RULE_INIT {
        set static::windowSecs 10
    }
    when HTTP_REQUEST {
        if { [HTTP::method] eq "POST" } {
            if { ![HTTP::header exists Authorization] } {
                HTTP::respond 401
                return
            }
            set myUserID [getfield [b64decode [substr [HTTP::header "Authorization"] 6 end]] ":" 1]
            set myMaxRate [findclass $myUserID $::MaxPOSTRates3 " "]
            if { $myMaxRate ne "" } {
                set reqnum [table incr "req:$myUserID"]
                set tbl "countpost:$myUserID"
                table set -subtable $tbl $reqnum "ignored" indef $static::windowSecs
                if { [table keys -subtable $tbl -count] > $myMaxRate } {
                    HTTP::respond 302 Location http://asdf:44444/
                    return
                }
            }
        }
    }

That's pretty impressive, if I do say so myself. And once again, because the table command will expire the entries automatically, there's no need to worry about old data taking up memory. We cannot wait to see all of the new (or old!) problems you solve with the help of the table command. Don't forget to show off your work in the iRules CodeShare!
The table Command: The Fine Print

With version 10.1, we've given the session table some long-sought functionality, and revamped its iRules interface completely to give users a new, cleaner, full-featured way to keep track of global data. We are very proud to introduce the table command... In this eighth part of the series on the usage of the new table command, we discuss the limitations, a few advanced details, and plans for the future.

Limitations

As fantastic as the new session table is, it does have its limitations:

- You can't use the table command in RULE_INIT or any other global event.
- There's no way to access the session table outside of tmm.
- The ability to count and list keys is limited to subtables.
- Timestamps are limited to 1-second resolution, so there's no way to do counts for sub-second events.

As you saw, you can use it for accurate counting, but that's not nearly as easy to use as we'd like it to be. We tried very hard to come up with a command, or set of commands, that would hide all the table manipulation and just do the counting for you, but everything we came up with was either very complicated or would always use the most computationally expensive method for counting, and that seemed like a bad tradeoff.

Advanced details

Because there is only one global session table, the session command and the table command both act on the same data. For example, if you insert an entry into the session table with the session command, you could then later look up that data (perhaps in another iRule) using the table command. However, the session command cannot access any data in a subtable in any way.

The session table is mirrored to the standby, and this cannot be disabled. While most of the table commands are fairly low weight, getting the list of keys in a subtable is very heavy weight.

Because of the new features that subtables have, you might be tempted to use one giant subtable for all your data. This is probably a bad idea, because of the way that the session table is arranged across all the CPUs in a CMP system. Entries in the session table are distributed across all processors. Subtables themselves are also distributed across all processors. However, all of the entries in a given subtable are on the same processor. So if you put all of your entries (or the vast majority of them) into the same subtable, then one CPU will take a disproportionate amount of memory and load. Which you probably don't want. So, in general, more, smaller subtables are a better arrangement than fewer, larger subtables.

Plans for the future

We think that we've provided a great set of features here, and that they will help you a great deal, but we recognize that it's probably not perfect. Also, we have lots of other plans for future enhancements to the session table and the table command. Ideally, this would completely take the place of global variables in iRules. We would love to get your feedback on what is most important to fix or add. The best way to do that is not to merely suggest features, but to supply us with actual use cases. What can't you do with the existing commands? What can you do, but only in an ugly way?
The table Command: Counting

With version 10.1, we've given the session table some long-sought functionality, and revamped its iRules interface completely to give users a new, cleaner, full-featured way to keep track of global data. We are very proud to introduce the table command... In this seventh part of the series on the usage of the new table command, we discuss counting techniques.

Counting

It turns out that one popular use of iRules is to count things and then report or act on those counts. It also turns out that there are a lot of different ways of counting things, and they require slightly different techniques. The four major ways of counting that we've found so far are (in order of increasing complexity):

- Events that happened ever (or recently)
- Events that happened in the past 1 second (or other fixed window)
- Events that happened in the past N seconds (or other sliding window)
- Events that are happening right now

We'll cover each of these in turn.

Events that happened ever

How many requests have been made on this VIP? How many POSTs has user X made? This is easy: just use table incr and table lookup. Most of the time you'll only care about things that happened recently, so you can specify a different timeout for the entry, or just use the default. If you truly want to keep track of things forever, then you can also specify an indefinite timeout. This does mean that that memory will never go away on its own, though, so you should be careful with that option. :)

Events that happened in the last fixed window

How many DB queries were made in the last second? How many requests has IP X made in the last 10-second window? This is also pretty easy. Each window can have a name (for example, each 1-second window can be named by the result of [clock second]), and you just need to somehow combine that name with the name of the event you're counting. For example:

    table incr "dbquery:[clock second]"

    set tensecwin "[string range [clock second] 0 end-1]"
    table incr "ipreq:[IP::client_addr]:$tensecwin"

Events that happened in the last sliding window

How many hits on the login page in the last hour? How many bytes sent to IP X in the last minute? How many connections from user Y in the last 15 seconds? What users logged in in the last day? While a fixed window is useful, more often a sliding window is what is desired. This is because a fixed window doesn't move with the clock. For example, if you're using a fixed 60-second window, and it's now 12:30:14, you can't know how many things happened "in the last 60 seconds". You can only know how many things happened from 12:30:00 until now, and maybe from 12:29:00-12:29:59, if you kept the result from the previous window. If you had a sliding window instead, it would always contain "the last 60 seconds", meaning that if it's 12:30:14 then the window is from 12:29:15-12:30:14, and when it's 12:30:17, then the window is from 12:29:18-12:30:17. For something like rate limiting, this makes a huge difference in user experience. Unfortunately, the things that make it more desirable are the very things that make it more complicated. I'll explain this by considering each window as a bucket. With a fixed window, you only ever add things to the bucket ("user X made a request that gets counted in bucket #12:29"). With a sliding window, sometimes things need to come out of the bucket ("the bucket is now counting from 12:29:18 to 12:30:17, so the request that was made at 12:29:17 now needs to be removed").
This tells us that getting a sliding window means managing specific items (so we need to have a handle or name for each item) and expiring them as they move out of the window. Fortunately, a subtable is exactly up for the job. To implement a sliding window, you would do something like:

    set reqno [table incr "reqs:$username"]
    table set -subtable "reqrate:$username" $reqno "ignored" indefinite 60
    log local0. "User $username made [table keys -count -subtable "reqrate:$username"] requests in the past 60 seconds"

This keeps track of the rate of requests for each user. The first line gives each request a serial number (it just increments a variable), and the second line adds an entry to that user's subtable. The entry will expire on its own when it is supposed to leave the window, and counting the keys in the subtable will tell us the number of requests that user made in the window.

The more observant among you will note that the table entry we're using for the serial number is going to expire too, so if this user is idle for a while, then the next request will have the serial number go back to 0. This is perfectly OK, though, since as long as the window size (in this example, 60 seconds) is less than the timeout of the serial-number entry (in this example, the default of 180 seconds), then by the time the serial-number entry expires, all of the entries in the subtable will have expired also. So it doesn't matter that the serial numbers start at 0 again. If you want to use a window longer than 180 seconds, then you'll need to set a longer timeout after the incr line.

Now, there's no rule that says you have to have a separate variable to keep track of the identity of each thing you're counting, or that they have to be consecutive. If you already have a serial number (for example, if you're counting new JSESSIONIDs), then you can just use that.

Events that are happening right now

How many client connections to this VIP? What users are currently logged in? This is slightly tricky, but doable. It builds on the concepts of the last example, with some additions. One simple example is:

    when HTTP_REQUEST {
        switch [HTTP::uri] {
            "/login.html" {
                table set -subtable userlist $username "ignored" $static::useridletimeout
            }
            "/logout.html" {
                table delete -subtable userlist $username
            }
            default {
                table lookup -subtable userlist $username
            }
        }
        log local0. "[table keys -count -subtable userlist] users currently logged in"
    }

Once again, we're adding an entry to a subtable to start counting something, but in this case not only are we removing the entry when we want to stop counting it, but we also do a lookup on that entry when the user accesses any page. The reason we do that is to keep the entry alive as long as the user is busy, so if the user doesn't log out via our logout page, we still stop counting that user (because that user's entry in our subtable will expire) when our application does (after the number of seconds in $static::useridletimeout).
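Since the current user count is now just a key count, acting on it is a one-liner. Here's a minimal hypothetical sketch that also caps concurrent logins; the limit and the 503 response are illustrative, not from the original example:

    when HTTP_REQUEST {
        # Turn new logins away once the active-user subtable is full.
        if { [HTTP::uri] eq "/login.html" &&
             [table keys -count -subtable userlist] >= 500 } {
            HTTP::respond 503 content "Too many active users, try again later."
            return
        }
    }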
With a similar method, we can do something like limit a VIP to 1000 connections:

    when CLIENT_ACCEPTED {
        set tbl "connlimit"
        set key "[IP::client_addr]:[TCP::client_port]"
        if { [table keys -subtable $tbl -count] > 1000 } {
            event CLIENT_CLOSED disable
            reject
        } else {
            table set -subtable $tbl $key "ignored" 180
            set timer [after 60000 -periodic {
                table lookup -subtable $tbl $key
            }]
        }
    }
    when CLIENT_CLOSED {
        after cancel $timer
        table delete -subtable $tbl $key
    }

The basics are the same as the previous examples, but now we've added a timer to keep the table entry alive, rather than relying on any particular iRule event. This works because when the connection closes we kill the timer and delete the entry. Now, I know what you're asking: "Why all the extra complexity? Heck, if I had table decr, I could just do something like:"

    when CLIENT_ACCEPTED {
        if { [table lookup "conns"] > 1000 } {
            event CLIENT_CLOSED disable
            reject
        } else {
            table incr "conns"
        }
    }
    when CLIENT_CLOSED {
        table decr "conns"
    }

The answer is: reliability. What happens if CLIENT_CLOSED never fires? Maybe your box runs low on memory and the connection sweeper hits. Maybe you're running on a VIPRION with no HA and a blade goes down. Your user count never gets to be decremented, and so now you start rejecting connections because you think your VIP is busier than it is. And that's why there is no table decr: to discourage exactly this sort of code. If you think you need it, you probably shouldn't be using increment and decrement to solve your problem.

In the real example above, if CLIENT_CLOSED never fires, there's no problem. No matter how the connection dies, the timer will be removed, so it will stop touching the entry, so the entry will expire on its own. In this way, your connection count may only be wrong temporarily; it will settle on the right answer all on its own.
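Before moving on, note that the simplest mode from the start of this article (events that happened ever, or recently) needs no subtable at all. A minimal sketch, with key naming and log text that are purely illustrative:

    when HTTP_REQUEST {
        # Count this client's requests; with the default 180-second
        # timeout, an idle client's counter eventually expires on its own.
        set total [table incr "reqs:[IP::client_addr]"]
        log local0. "[IP::client_addr] has made $total recent requests"
    }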
The table Command: Subtables

With version 10.1, we've given the session table some long-sought functionality, and revamped its iRules interface completely to give users a new, cleaner, full-featured way to keep track of global data. We are very proud to introduce the table command... In this sixth part of the series on the usage of the new table command, we discuss how you can create and manage subtables within the session table.

Subtables

Whew! Still with me? Good, because we're just now getting to some of the best stuff. One of the most powerful changes to the session table in v10.1 is the concept of a subtable, which is simply a named set of entries. It lets you organize your entries in groups, and lets you act on those groups. A subtable is basically a table in and of itself, with some special abilities. Subtables only contain entries, though; you can't nest subtables.

They are trivial to use: every current table subcommand simply takes a -subtable <name> parameter. So you can do things like:

    table set -subtable $tbl $key $data
    table lookup -subtable $othertbl $key
    table delete -subtable $thirdtbl $key

That's all there is to it! Doesn't look like much, does it? But this is very powerful stuff! Maybe you want a different set of data for each user, or for each client IP address, or for each virtual server? No problem!

    table set -subtable "$username" $key $data
    table add -subtable "[IP::client_addr]" $key $data
    table replace -subtable "[virtual]" $key $data

Since you can name your subtables however you want, you can now have shared variables with any arbitrary scope! And there's an easy way to clean up:

    table delete -all -subtable "$username"

will remove every entry in a subtable in one fell swoop.

But wait! There's more! With subtables comes a new and long-desired ability: the ability to list or count the number of keys:

    table keys -subtable <name> [-count|-notouch]

You'll note that unlike all the other table subcommands, the subtable name is not optional here: there's no way to use it on entries that aren't in a subtable. Also note that by default, retrieving the list of keys will touch each one, but getting the count will never touch any entries. Isn't this AWESOME!?! This feature is going to simplify a lot of your iRules, and give you the ability to do things that simply weren't feasible before. I can't wait to see all the creative uses you come up with!
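Putting the scoping and cleanup pieces together, here's a minimal hypothetical sketch of per-user shared variables with one-line cleanup ($username is assumed to have been set earlier, e.g. at login; the URI and key names are purely illustrative):

    when HTTP_REQUEST {
        # Per-user scope: every entry for this user lives in one subtable.
        table set -subtable $username "last_uri" [HTTP::uri]
        table set -subtable $username "last_seen" [clock seconds]

        # On logout, clean up the user's entire scope in one call.
        if { [HTTP::uri] eq "/logout.html" } {
            table delete -all -subtable $username
        }
    }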
That covers all of the new session table features. Now we'll cover some ways of putting everything together...

The table Command: Advanced Data Expiration

With version 10.1, we've given the session table some long-sought functionality, and revamped its iRules interface completely to give users a new, cleaner, full-featured way to keep track of global data. We are very proud to introduce the table command... In this fifth part of the series on the usage of the new table command, we discuss advanced concepts around data lifetime and expiration from the session table.

Advanced Data Expiration

On the off chance that your iRule does need to keep close track of varying timeout or lifetime values, you should know some important ways that they might or might not be changed. As you might expect, all of the other subcommands that set values take the timeout and lifetime parameters as well:

    table set [-excl|-mustexist] <key> <value> [<timeout> [<lifetime>]]
    table add <key> <value> [<timeout> [<lifetime>]]
    table replace <key> <value> [<timeout> [<lifetime>]]

However, the add and replace subcommands are designed to sometimes not update the data. For example, if you call table add with a key that already exists, then the data will not get updated. In all cases, if the table command doesn't update the data, then it doesn't update the timeout and lifetime either. If it does update the data, then it does update the timeout and lifetime.

So, suppose you have a table containing entries for the keys 10.1.2.3 and 10.2.9.12, and you run:

    table set 10.1.2.3 UserQuux indefinite 3600

The entry for 10.1.2.3 now holds UserQuux with an indefinite timeout and a 3600-second lifetime: the timeout and lifetime both got updated because the data was updated. If you then ran:

    table add 10.1.2.3 UserBar 120 120

then the table would not change at all. Because the data did not get updated, the timeout and lifetime also didn't.

Sometimes you might want to update the value and not touch the existing timeout or lifetime values, even though you don't know what they are. By specifying a 0 for either the timeout or the lifetime, or by omitting the lifetime, that parameter is not changed even if the value is updated. So if you took the same table and ran:

    table replace 10.2.9.12 UserZap 180 0

the entry's value and timeout would be changed, but the lifetime would not.
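One handy consequence, sketched below: you can refresh a cached value without disturbing its expiration schedule. This is only a minimal illustration; the key naming and the header used as the new value are hypothetical:

    when HTTP_REQUEST {
        # Pass 0 for both timeout and lifetime: the value is replaced,
        # but the entry keeps its existing expiration schedule.
        # (table replace only acts if the entry already exists.)
        table replace "cache:[HTTP::uri]" [HTTP::header "X-App-Version"] 0 0
    }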
The table Command: Data Expiration

With version 10.1, we've given the session table some long-sought functionality, and revamped its iRules interface completely to give users a new, cleaner, full-featured way to keep track of global data. We are very proud to introduce the table command... In this fourth part of the series on the usage of the new table command, we discuss data lifetime and expiration from the session table.

Data expiration

Remember that I mentioned that an entry also has metadata? Prior to v10.1, the only metadata that an entry had was a timestamp, which keeps track of when that entry was last touched (changed or queried), and a timeout. The touch timestamp can't be directly set by iRules, of course; it's used internally by the session table to keep track of when to expire entries. So a small session table might have held a number of different types of keys and values, each one with a timeout and a timestamp.

One common request to enhance the session table has been some way of looking up an entry without affecting the touch timestamp, so we added the -notouch flag to the table command:

    table set [-notouch] $key $data
    table lookup [-notouch] $key

With this flag, you can query the session table without changing when the entries will expire. But we felt that that didn't go far enough for some uses. So while the session table looks much the same in v10.1, it has two additions: just as before, an entry has a touch timestamp and can have a timeout, but now it also has a create timestamp (to record when the entry was created), and it can also have a lifetime. By setting a lifetime on an entry, you can have it expire after a certain period of time no matter how many changes or lookups are performed on it.

An entry can have a lifetime and a timeout at the same time. It will expire (be removed from the table) whenever the timeout OR the lifetime expires, whichever comes first. You can just specify the lifetime right after the timeout value:

    table set <key> <value> [<timeout> [<lifetime>]]

If you want an entry to only have a lifetime, and not a timeout, you can specify an indefinite timeout. For example:

    table set $key $data indefinite 3600

will set an entry that will be removed after 3600 seconds. By default, an entry has a timeout of 180 seconds (just like previous versions), and an indefinite lifetime.

For most uses of the session table, the user will specify timeout and lifetime values when an entry gets set, and never explicitly worry about them afterwards. If you do need to query or change those values on an entry, then you can do so directly:

    table timeout [-remaining] $key
    table timeout $key <value>
    table lifetime [-remaining] $key
    table lifetime $key <value>

These commands operate exactly as they read: you can query or set the lifetime or timeout parameter which the entry has, or you can query the remaining time that an entry has left. Note that these commands never affect the expiration of a record. In other words, they always implicitly have the -notouch flag set.
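Putting the pieces together, here's a minimal hypothetical sketch of a session entry with both clocks running: a 600-second idle timeout and a 3600-second absolute lifetime. The key naming and log text are purely illustrative:

    when HTTP_REQUEST {
        set key "session:[IP::client_addr]"
        # Create the entry only if it doesn't already exist, so repeat
        # requests don't reset the lifetime.
        table add $key "active" 600 3600
        # Touching the entry on each request resets the idle timeout;
        # the lifetime keeps counting down regardless.
        table lookup $key
        # The -remaining queries never touch the entry.
        log local0. "$key: [table timeout -remaining $key]s idle left, [table lifetime -remaining $key]s of life left"
    }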