devops
Getting Started With n8n For AI Automation
First, what is n8n? If you're not familiar with n8n yet, it's a workflow automation utility that lets you connect services easily by wiring nodes together. It's been the subject of quite a bit of Artificial Intelligence hype because it helps you construct AI agents. I'm going to be diving more into n8n and what it can do with AI. My hope is that you can use this in your own labs to work through some of these AI networking and security challenges in your environment. As one example, someone could use Ollama to control multiple Twitter accounts.

How do you install it? Well... it's all Node.js, so the best way to install it in any environment is to make sure you have Node version 22 on your machine (on a Mac, brew install node@22), along with nvm (again, on a Mac, brew install nvm), and then run npm install -g n8n. Done! Really... that simple. (A consolidated sketch of these steps appears at the end of this post.)

How much does it cost? While there is support and expanded functionality for paid subscribers, there is also a community edition, which is what I have used here, and it's free.

How to license:
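Going back to the installation steps above, here's a consolidated sketch of getting n8n running on a Mac. It assumes Homebrew is already installed and that n8n start is how you launch the editor after a global install; check the n8n docs for your platform if anything differs:

# Install Node 22 and nvm via Homebrew (macOS)
brew install node@22
brew install nvm

# Install n8n globally, then start it (assumption: "n8n start" launches the local editor)
npm install -g n8n
n8n start
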
How to log HTTP/2 reset_stream
Hello, We are currently in a meeting to prepare for HTTP/2 DDoS attacks. What we would like to do is log the client’s IP address (either local or remote) whenever an HTTP/2 RESET_STREAM is received. Is there any way to achieve this? Would it be possible to implement using an iRule? Thank you.

How can k8s CIS CRD VirtualServer reference existing APM Access profile?
Hey Everyone, How can k8s Container Ingress Services (CIS) CRD VirtualServer reference an existing APM Access profile? I know that this is in AS3 ( https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/3.32/declarations/access-related.html ) but I don't see such options in the virtualserver ( https://clouddocs.f5.com/containers/latest/userguide/crd/virtualserver.html ) or policy ( https://clouddocs.f5.com/containers/latest/userguide/crd/virtualserver.html ) CRD, and I don't want to use the old way with config maps.

Edit: A not-great workaround I found is attaching an access profile by using an iRule (an APM access-profile can be assigned from an iRule only), since the F5 CRD supports attaching configured existing iRules.

apiVersion: "cis.f5.com/v1"
kind: VirtualServer
metadata:
  name: vs-test
  namespace: xxxx
  labels:
    f5cr: "true"
spec:
  virtualServerAddress: "xxxx"
  virtualServerHTTPPort: xxx
  snat: auto
  iRules:
    - "/Common/test-irule"
  pools:
    - monitor:
        interval: 10
        recv: ""
        send: "GET /"
        timeout: 31
        type: http
      path: /
      service: XXX
      servicePort: 80

Simplifying Application Health Monitoring with F5 BIG-IP
A simple agreement between BIG-IP administrators and application owners can foster smooth collaboration between teams. Application owners define their own simple or complex health monitors and agree to expose a conventional /health endpoint. When the /health endpoint responds with an HTTP 200, BIG-IP assumes the application is healthy based on the application owners' own criteria.

The Challenge of Health Monitoring in Modern Environments

F5 BIG-IP administrators in Network Operations (NetOps) teams often work with application teams because the BIG-IP acts as a full proxy, providing services like:

TLS termination
Load balancing
Health monitoring

Health checks are crucial for effective load balancing. The BIG-IP uses them to determine where to send traffic among back-end application servers. However, health monitoring frequently causes friction between teams.

Problems with the Traditional Approach

Traditionally, BIG-IP administrators create and maintain health monitors ranging from simple ICMP pings to complex monitors that:

Simulate user transactions
Verify HTTP response codes
Validate payload contents
Track application dependencies

This leads to several issues:

Knowledge Gap: NetOps may not fully grasp each application's intricacies.
Change Management Overhead: Application updates require retesting monitors, causing delays.
Production Risk: Monitors can break after application changes, incorrectly marking services as up/down.
Team Friction: Troubleshooting failed health checks involves tedious back-and-forth between teams.

A Cloud-Native Solution

The cloud-native and microservices communities have patterns that elegantly solve these problems. One widely used pattern is the health endpoint, which adapts well to BIG-IP environments.

The /health Endpoint Convention

Cloud-native applications commonly expose dedicated health endpoints like /health, /healthy, or /ready. These return standard status codes reflecting the application's state. The /health endpoint provides a clear contract between NetOps and application teams for BIG-IP integration.

Implementing the Contract

This approach establishes a simple agreement:

Application Team Responsibilities:
Implement /health to return HTTP 200 when the application is ready for traffic
Define "healthy" based on application needs (database connectivity, dependencies, etc.)
Maintain the health check logic as the application changes

BIG-IP Team Responsibilities:
Configure an HTTP monitor targeting the /health endpoint
Treat 200 as "healthy", anything else as "unhealthy"

Benefits of This Approach

Aligned Expertise: Application teams define health based on their knowledge.
Less Friction: BIG-IP configuration stays stable as applications evolve.
Better Reliability: Health checks reflect true application health, including dependencies.
Easier Troubleshooting: The /health endpoint can return detailed diagnostic info; the BIG-IP ignores it, but it is available strictly for troubleshooting.
Implementation Examples

F5 BIG-IP Health Monitor Configuration

ltm monitor http /Common/app-health-monitor {
    defaults-from /Common/http
    destination *:*
    interval 5
    recv 200
    recv-disable none
    send "GET /health HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
    time-until-up 0
    timeout 16
}

Node.js Health Endpoint Implementation

const express = require('express');
const app = express();
const port = 3000;

app.get('/', (req, res) => {
  res.send('Application is running');
});

app.get('/health', async (req, res) => {
  try {
    const dbStatus = await checkDatabaseConnection();
    const serviceStatus = await checkDependentServices();
    if (dbStatus && serviceStatus) {
      return res.status(200).json({
        status: 'healthy',
        database: 'connected',
        services: 'available',
        timestamp: new Date().toISOString()
      });
    }
    res.status(503).json({
      status: 'unhealthy',
      database: dbStatus ? 'connected' : 'disconnected',
      services: serviceStatus ? 'available' : 'unavailable',
      timestamp: new Date().toISOString()
    });
  } catch (error) {
    res.status(500).json({
      status: 'error',
      message: error.message,
      timestamp: new Date().toISOString()
    });
  }
});

async function checkDatabaseConnection() {
  // Check real database connection
  return true;
}

async function checkDependentServices() {
  // Check required service connections
  return true;
}

app.listen(port, () => {
  console.log(`Application listening at http://localhost:${port}`);
});

Adopting this health check pattern can greatly reduce friction between NetOps and application teams while improving reliability. The simple contract of HTTP 200 for healthy provides the needed integration while letting each team focus on their expertise.

For apps that can't implement a custom /health endpoint, BIG-IP admins can still use traditional ICMP or TCP port monitoring. However, these basic checks can't accurately reflect an app's true health and complex dependencies.

This approach fosters collaboration and leverages the specialized knowledge of both network and application teams. The result is more reliable services and smoother operations.
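To close the loop, here is a quick way to exercise the /health endpoint from the Node.js example above and see exactly what the BIG-IP monitor will see. This is just a sketch, assuming the app is running locally on port 3000 as configured:

# Hit the health endpoint directly; a healthy app returns HTTP 200 with a JSON status body
curl -i http://localhost:3000/health

Anything other than a 200 (the 503 or 500 paths in the example) would cause the monitor to mark the pool member down.
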
VIPTest: Rapid Application Testing for F5 Environments

VIPTest is a Python-based tool for efficiently testing multiple URLs in F5 environments, allowing quick assessment of application behavior before and after configuration changes. It supports concurrent processing, handles various URL formats, and provides detailed reports on HTTP responses, TLS versions, and connectivity status, making it useful for migrations and routine maintenance.

Less than 600 seconds lab
In my previous post I shared with you how you can deploy a lab environment in less than 60 seconds with AS3. This time let's take a look at another lab that you can set up in less than 10 minutes.

Purpose of this lab

This lab requires a web server and some minimal knowledge of Linux (Debian) and git. In my example, I use NGINX. The web application consists of four pages in four colours (red, blue, yellow and green) that are designed to demonstrate the load balancing functionality of the F5 Local Traffic Manager (LTM). You can use the app to familiarise yourself with load balancing functionalities such as:

different load balancing methods and priority groups
different types of persistence
caching
HTTP, SSL and other profiles
SNAT

The web application has a couple of nice features:

real-time server information display, including
    Server hostname
    Request timestamp (ISO 8601 format)
    Request URI
    Source IP address
    X-Forwarded-For (XFF) header
    User-Agent information
modern, responsive UI
picture gallery

Prerequisites

First you need to set up and configure the web server. Add multiple IPs to the web server (Debian 11+). Edit /etc/network/interfaces:

sudo nano /etc/network/interfaces

Add the following:

allow-hotplug eth0
iface eth0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1

auto eth0:1
allow-hotplug eth0:1
iface eth0:1 inet static
    address 192.168.1.11/24

auto eth0:2
allow-hotplug eth0:2
iface eth0:2 inet static
    address 192.168.1.12/24

auto eth0:3
allow-hotplug eth0:3
iface eth0:3 inet static
    address 192.168.1.13/24

Restart networking:

sudo systemctl restart networking

Note: Replace eth0 with your actual interface name.

Generate SSL Certificate

Create a self-signed SSL certificate with RSA 2048-bit key (no password):

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout nginx-selfsigned.key -out nginx-selfsigned.crt \
    -subj "/C=US/ST=State/L=City/O=Organization/CN=example.com"

Installing the web application

Example for NGINX:

1. Clone the repository

git clone https://github.com/webserverdude/ltm-demo-html.git
cd webpages

2. Deploy to your web server

sudo cp -r * /var/www/ltm-demo-html

3. Configure your web server (see below)
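Before moving on to the NGINX configuration, it can help to confirm that the additional addresses are actually up. This is only a quick sketch, assuming the interface name and addresses from the example above:

# Show all addresses bound to the interface (expect .10 through .13)
ip -br addr show eth0

# Optionally ping each address to confirm it answers
for ip in 192.168.1.10 192.168.1.11 192.168.1.12 192.168.1.13; do ping -c1 -W1 $ip; done
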
NGINX Configuration

The configuration includes HTTP as well as HTTPS listeners. Add this configuration to your NGINX server block:

server {
    listen 192.168.1.10:8000 default_server;

    root /var/www/ltm-demo-html;
    index index_red.html;
    server_name _;

    add_header X-Backend-Server 1;
    add_header Set-Cookie "X-Backend-Server=1; Max-Age=10";

    location / {
        try_files $uri $uri/ =404;
    }

    # Enable the substitution filter
    sub_filter_once off; # Allow multiple substitutions

    # Replace template variables with actual NGINX variables
    sub_filter '{{server_name}}' '$hostname';
    sub_filter '{{time_iso8601}}' '$time_iso8601';
    sub_filter '{{request_uri}}' '$request_uri';
    sub_filter '{{remote_addr}}' '$remote_addr';
    sub_filter '{{http_x_forwarded_for}}' '$http_x_forwarded_for';
    sub_filter '{{http_user_agent}}' '$http_user_agent';
}

server {
    listen 10.0.2.71:443 ssl default_server;

    ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;

    # SSL configuration
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384;
    ssl_prefer_server_ciphers off;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    root /var/www/ltm-demo-html;
    index index_red.html;
    server_name _;

    add_header X-Backend-Server 1;
    add_header Set-Cookie "X-Backend-Server=$request_id; Max-Age=10; Secure; SameSite=Strict";

    location / {
        try_files $uri $uri/ =404;
    }

    # Enable the substitution filter
    sub_filter_once off; # Allow multiple substitutions

    # Replace template variables with actual NGINX variables
    sub_filter '{{server_name}}' '$hostname';
    sub_filter '{{time_iso8601}}' '$time_iso8601';
    sub_filter '{{request_uri}}' '$request_uri';
    sub_filter '{{remote_addr}}' '$remote_addr';
    sub_filter '{{http_x_forwarded_for}}' '$http_x_forwarded_for';
    sub_filter '{{http_user_agent}}' '$http_user_agent';
}

Note: This is just a snippet for one HTTP and one HTTPS virtual. The full config for all four pages is available at my Git repository in the nginx_config folder.

Once this is done, check the web pages from your browser. Make sure they work as expected.

Configure your BIG-IP

After the web server is running and serving all 4 pages with HTTP and HTTPS, you can configure your BIG-IP. My AS3 declaration includes an HTTP and an HTTPS virtual server, two pools and some HTTP and persistence profiles. Here is a snippet:

{
    "$schema": "https://raw.githubusercontent.com/F5Networks/f5-appsvcs-extension/main/schema/latest/as3-schema.json",
    "class": "AS3",
    "action": "deploy",
    "persist": true,
    "declaration": {
        "class": "ADC",
        "schemaVersion": "3.0.0",
        "LTM_Demo": {
            "class": "Tenant",
            "LTM_Demo": {
                "class": "Application",
                "vs_http": {
                    "class": "Service_HTTP",
                    "virtualAddresses": [
                        "192.168.3.80"
                    ],
                    "persistenceMethods": [],
                    "profileHTTP": {
                        "use": "pr_http_xff"
                    },
                    "pool": "pl_ltm-demo_http",
                    "snat": {
                        "use": "pl_SNAT_addresses"
                    }
                },
...

The complete AS3 configuration can be found in my Git repository. The repository also contains an additional AS3 declaration with further configuration options. Note: You should not deploy the second declaration with the optional configurations; instead, merge the snippets you want to use into ltm_demo.json.

Deployment

The deployment of the AS3 declaration works similarly to what I described in my previous post (a minimal example is sketched at the end of this post).

What's next?

You can try different load balancing algorithms, persistence methods, caching, SSL configurations. Once you set up the web app and the LTM config, play around - the sky is the limit. Have fun!
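For reference, pushing an AS3 declaration to a BIG-IP is a single REST call against the AS3 declare endpoint. This is only a sketch: it assumes AS3 is already installed on the BIG-IP, that the declaration is saved locally as ltm_demo.json, and uses placeholder credentials and hostname you would replace with your own:

# POST the declaration to the AS3 endpoint on the BIG-IP (self-signed cert, hence -k)
curl -sku admin:'<password>' \
  -H "Content-Type: application/json" \
  -X POST "https://<big-ip-mgmt-address>/mgmt/shared/appsvcs/declare" \
  -d @ltm_demo.json
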
Passing Arguments to iCall Scripts

iCall is a control-plane automation tool for the BIG-IP platform. There are several articles covering its overview and implementation details, but lost among them is being clear about how to pass arguments to iCall scripts. A post in the technical forum on disabling multiple other interfaces if one should fail highlighted the fact that the configuration can get pretty bloated if one does not pass data to the script. Here are two ways to do it:

Setting variables in user_alert.conf events

The alert definition supports variable names and values; here are a few examples:

alert local-http-10-2-80-1-80-DOWN "Pool /Common/my_pool member /Common/10.2.80.1:80 monitor status down" {
    exec command="tmsh generate sys icall event tcpdump context { { name ip value 10.2.80.1 } { name port value 80 } { name vlan value internal } { name count value 20 } }"
}

alert interface_1_1_down "Link: 1.1 is DOWN" {
    exec command="tmsh generate sys icall event interface_manager context { { name action value disabled } { name interface value 1.1 } }"
}

The key/value pair arguments are set in the context of the exec command like so:

{ { name k1 value v1 } { name k2 value v2 } { name k3 value v3 } }

Setting variables in iCall handlers

A second method of setting variables is to do so in the handler definition. Note, however, this is only supported on the periodic handler.

sys icall handler periodic myPeriodicTestHandlerWithArguments {
    arguments {
        { name k1 value v1 }
        { name k2 value v2 }
        { name k3 value v3 }
    }
    interval 30
    script myTestScriptWithArguments
}

Reading the variables in the iCall script

This is where the magic happens! In the iCall script, there's a little snippet to gain access to all that goodness you set in the alerts and/or handlers:

sys icall script myTestScriptWithArguments {
    app-service none
    definition {
        foreach var { k1 k2 k3 } {
            set $var $EVENT::context($var)
        }
        tmsh::log "k1: ${k1}"
        tmsh::log "k2: ${k2}"
        tmsh::log "k3: ${k3}"
    }
    description none
    events none
}

The foreach loop iterates through the names you established in the alert/handler (specified also in the script) and then sets each variable to the context you provided. In this dummy example, I'm just logging it. But let's look at a real example to close the loop.

Use Case: Disable other interfaces when one fails

In the forum thread, the ask was to validate the alerts, handlers, and scripts they had assembled to accomplish disabling multiple interfaces when one fails. Totally possible without passing arguments, but think about how many objects you need to accomplish this. It's a lot! The only number of objects that doesn't change is how many alert definitions you need in user_alert.conf. But... the size of that definition shrinks considerably. Let's start with the user_alert.conf file; I'm limiting it to two interfaces (one failure triggering the other to be disabled) for brevity.
alert interface_1_1_down "Link: 1.1 is DOWN" {
    exec command="tmsh generate sys icall event interface_manager context { { name action value disabled } { name interface value 1.1 } }"
}
alert interface_1_3_down "Link: 1.3 is DOWN" {
    exec command="tmsh generate sys icall event interface_manager context { { name action value disabled } { name interface value 1.3 } }"
}
alert interface_1_1_up "Link: 1.1 is UP" {
    exec command="tmsh generate sys icall event interface_manager context { { name action value enabled } { name interface value 1.1 } }"
}
alert interface_1_3_up "Link: 1.3 is UP" {
    exec command="tmsh generate sys icall event interface_manager context { { name action value enabled } { name interface value 1.3 } }"
}

Pretty simple here. Notice there is only one exec command, and I only need to pass the action desired (enabled, disabled) and the failing interface. Now let's look at the handler. This is the easier piece of the puzzle. We only need one triggered handler to call the script.

sys icall handler triggered interface_manager {
    script interface_manager
    subscriptions {
        interface_manager {
            event-name interface_manager
        }
    }
}

So here in the handler, the event-name matches the event specified in the alert, and for consistency, I've named the script that as well. And now the script.

sys icall script interface_manager {
    app-service none
    definition {
        foreach var { action interface } {
            set $var $EVENT::context($var)
        }
        switch ${interface} {
            "1.1" {
                tmsh::modify /net interface 1.3 ${action}
            }
            "1.3" {
                tmsh::modify /net interface 1.1 ${action}
            }
        }
    }
    description none
    events none
}

You can see at the top of the script definition is our little snippet to extract and set the variables for use, and then I'm using a switch statement to modify the interfaces I want disabled or enabled based on the source interface failure. By passing the action along with the interface, I don't have to have two handlers and two scripts, one for each interface state. You could further optimize by passing, with each source interface failure, a list of the interfaces that should be disabled, then executing a tmsh::modify in a foreach loop, but the script is easier to modify programmatically than the user_alert.conf file.

Testing the solution

My first attempt to test the script failed because I disabled the interface administratively rather than having it fail. I had to look up how to "fail" an interface in my BIG-IP VE running on VMware Fusion. Turns out I just have to deactivate the appropriate NIC in the VM settings. Here's a quick one-minute video showing the results of the alert, handler, and script above.
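Another way to check the handler and script without actually failing a NIC is to fire the event by hand from the BIG-IP shell. It's the same command the alert definitions above execute, so this sketch simply reuses it with example values:

# Manually generate the iCall event the alert would raise (example: pretend 1.1 went down)
tmsh generate sys icall event interface_manager context { { name action value disabled } { name interface value 1.1 } }

# Then confirm interface 1.3 was modified and review the log
tmsh list net interface 1.3
tail -20 /var/log/ltm
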
Lightboard Lessons: iCall
In this episode of Lightboard Lessons, I give an introduction to iCall, the built-in event-based BIG-IP control-plane scripting engine.

Resources

iCall release article
iCall Codeshare
iCall Triggers Example with iStats to Invalidate Cache
iCall Periodic Example Pool Check to Disable Interface
Passing Arguments to iCall Scripts