Big-IP sending Health Check to not-used Node-IP
Hello everyone, my customer recently noticed while checking traffic on his firewall that health checks are sent from the BIG-IP's internal self IP to an IP that fits into the address range of the nodes in use on the F5. This node IP is not known to the customer, and by searching the node table or looking in /var/log/ltm we were unable to find this IP address. So either this node was used a while ago and the node object was deleted, or the BIG-IP tries talking to this IP via 443 for some other reason. Pings and curls sent from the BIG-IP fail. Has anyone noticed something like this before? Or is there another way to see where health checks are sent? Thanks and regards
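
A few commands that can help narrow this down on the BIG-IP itself (10.1.2.50 below is just a placeholder for the address seen on the firewall):

    # search the whole configuration, including other partitions, for any reference to the address
    grep -r "10.1.2.50" /config/

    # list nodes and pools across all partitions, in case the object lives outside /Common
    tmsh -c "cd /; list ltm node recursive" | grep -B 2 "10.1.2.50"
    tmsh -c "cd /; list ltm pool recursive" | grep -B 5 "10.1.2.50"

    # confirm what is actually being sent to that address and which self IP sources it
    tcpdump -nni 0.0 host 10.1.2.50 and port 443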

Simplifying Application Health Monitoring with F5 BIG-IP

A simple agreement between BIG-IP administrators and application owners can foster smooth collaboration between teams. Application owners define their own simple or complex health checks and agree to expose a conventional /health endpoint. When the /health endpoint responds with an HTTP 200, BIG-IP assumes the application is healthy based on the application owners' own criteria.

The Challenge of Health Monitoring in Modern Environments

F5 BIG-IP administrators in Network Operations (NetOps) teams often work with application teams because the BIG-IP acts as a full proxy, providing services like:
- TLS termination
- Load balancing
- Health monitoring

Health checks are crucial for effective load balancing. The BIG-IP uses them to determine where to send traffic among back-end application servers. However, health monitoring frequently causes friction between teams.

Problems with the Traditional Approach

Traditionally, BIG-IP administrators create and maintain health monitors ranging from simple ICMP pings to complex monitors that:
- Simulate user transactions
- Verify HTTP response codes
- Validate payload contents
- Track application dependencies

This leads to several issues:
- Knowledge Gap: NetOps may not fully grasp each application's intricacies.
- Change Management Overhead: Application updates require retesting monitors, causing delays.
- Production Risk: Monitors can break after application changes, incorrectly marking services as up/down.
- Team Friction: Troubleshooting failed health checks involves tedious back-and-forth between teams.

A Cloud-Native Solution

The cloud-native and microservices communities have patterns that elegantly solve these problems. One widely used pattern is the health endpoint, which adapts well to BIG-IP environments.

The /health Endpoint Convention

Cloud-native applications commonly expose dedicated health endpoints like /health, /healthy, or /ready. These return standard status codes reflecting the application's state. The /health endpoint provides a clear contract between NetOps and application teams for BIG-IP integration.

Implementing the Contract

This approach establishes a simple agreement:

Application Team Responsibilities:
- Implement /health to return HTTP 200 when the application is ready for traffic
- Define "healthy" based on application needs (database connectivity, dependencies, etc.)
- Maintain the health check logic as the application changes

BIG-IP Team Responsibilities:
- Configure an HTTP monitor targeting the /health endpoint
- Treat 200 as "healthy" and anything else as "unhealthy"

Benefits of This Approach

- Aligned Expertise: Application teams define health based on their knowledge.
- Less Friction: BIG-IP configuration stays stable as applications evolve.
- Better Reliability: Health checks reflect true application health, including dependencies.
- Easier Troubleshooting: The /health endpoint can return detailed diagnostic info; the BIG-IP ignores the response body, so it is used strictly for troubleshooting.
Implementation Examples

F5 BIG-IP Health Monitor Configuration

ltm monitor http /Common/app-health-monitor {
    defaults-from /Common/http
    destination *:*
    interval 5
    recv 200
    recv-disable none
    send "GET /health HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
    time-until-up 0
    timeout 16
}

Node.js Health Endpoint Implementation

const express = require('express');
const app = express();
const port = 3000;

app.get('/', (req, res) => {
  res.send('Application is running');
});

app.get('/health', async (req, res) => {
  try {
    const dbStatus = await checkDatabaseConnection();
    const serviceStatus = await checkDependentServices();

    if (dbStatus && serviceStatus) {
      return res.status(200).json({
        status: 'healthy',
        database: 'connected',
        services: 'available',
        timestamp: new Date().toISOString()
      });
    }

    res.status(503).json({
      status: 'unhealthy',
      database: dbStatus ? 'connected' : 'disconnected',
      services: serviceStatus ? 'available' : 'unavailable',
      timestamp: new Date().toISOString()
    });
  } catch (error) {
    res.status(500).json({
      status: 'error',
      message: error.message,
      timestamp: new Date().toISOString()
    });
  }
});

async function checkDatabaseConnection() {
  // Check real database connection
  return true;
}

async function checkDependentServices() {
  // Check required service connections
  return true;
}

app.listen(port, () => {
  console.log(`Application listening at http://localhost:${port}`);
});

Adopting this health check pattern can greatly reduce friction between NetOps and application teams while improving reliability. The simple contract of HTTP 200 for healthy provides the needed integration while letting each team focus on their expertise. For apps that can't implement a custom /health endpoint, BIG-IP admins can still use traditional ICMP or TCP port monitoring. However, these basic checks can't accurately reflect an app's true health and complex dependencies. This approach fosters collaboration and leverages the specialized knowledge of both network and application teams. The result is more reliable services and smoother operations.
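
A quick way to sanity-check the contract end to end, using the host and port from the Node.js example above (app_pool is a hypothetical pool name, so substitute the real one):

    # ask the application directly - expect HTTP 200 and the JSON status body
    curl -i http://localhost:3000/health

    # once the monitor is attached, confirm the BIG-IP reports the members as available
    tmsh show ltm pool app_pool members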

LineRate HTTP to HTTPS redirect

Here's a quick LineRate proxy code snippet to convert an HTTP request to an HTTPS request using the embedded Node.js engine. The relevant parts of the LineRate proxy config are below, as well. By modifying the redirect_domain variable, you can redirect HTTP to HTTPS as well as doing a non-www to a www redirect. For example, you can redirect a request for http://example.com to https://www.example.com. The original URI is simply appended to the redirected request, so a request for http://example.com/page1.html will be redirected to https://www.example.com/page1.html. This example uses the self-signed SSL certificate that is included in the LineRate distribution. This is fine for testing, but make sure to create a new SSL profile with your site certificate and key when going to production. As always, the scripting docs can be found here.

redirect.js: Put this script in the default scripts directory - /home/linerate/data/scripting/proxy/ - and update the redirect_domain and redirect_type variables for your environment.

"use strict";

var vsm = require('lrs/virtualServerModule');

// domain name to which to redirect
var redirect_domain = 'www.example.com';

// type of redirect. 301 = permanent, 302 = temporary
var redirect_type = 302;

vsm.on('exist', 'vs_example.com', function(vs) {
    console.log('Redirect script installed on Virtual Server: ' + vs.id);
    vs.on('request', function(servReq, servResp, cliReq) {
        servResp.writeHead(redirect_type, {
            'Location': 'https://' + redirect_domain + servReq.url
        });
        servResp.end();
    });
});

LineRate config:

real-server rs1
 ip address 10.1.2.100 80
 admin-status online
!
virtual-ip vip_example.com
 ip address 192.0.2.1 80
 admin-status online
!
virtual-ip vip_example.com_https
 ip address 192.0.2.1 443
 attach ssl profile self-signed
 admin-status online
!
virtual-server vs_example.com
 attach virtual-ip vip_example.com default
 attach real-server rs1
!
virtual-server vs_example.com_https
 attach virtual-ip vip_example.com_https default
 attach real-server rs1
!
script redirect
 source file "proxy/redirect.js"
 admin-status online

Example:

user@m1:~/ > curl -L -k -D - http://example.com/test
HTTP/1.1 302 Found
Location: https://www.example.com/test
Date: Wed, 03-Sep-2014 16:39:53 GMT
Transfer-Encoding: chunked

HTTP/1.1 200 OK
Content-Type: text/plain
Date: Wed, 03-Sep-2014 16:39:53 GMT
Transfer-Encoding: chunked

hello world

LineRate Performance Tip: Logging in Node.js

LineRate's Node.js engine lets you program in the datapath with JavaScript. Once you start embedding business logic in your proxy, you'll need to debug this logic and report on the actions your proxy is taking. This can take several forms from the simple to the complex. When you're just getting started, nothing beats the venerable console.log() and util.inspect() for simplicity. Whether your service scales to millions of requests per second, or just a trickle, there are some simple performance considerations that you should keep in mind. Knowing these up front prevents you from learning expensive habits that are harder to fix later.

1. Use log levels

If your logging is impacting performance, one easy way to fix it is to channel your inner Bob Newhart: "Stop it!" But you don't want to be left without any logs when you're debugging a problem. A very common approach regardless of language is a logging helper that respects log levels like "error", "warning", "info" or "debug". Your code contains all the logging statements you could need, but by tuning the logging level you adjust which ones fire.

function handleRequest(servReq, servRes, cliReq) {
    log.debug('Caught new request for %s at %s', servReq.url, new Date());
    if (url.toLowerCase() != url) {
        log.warning('Request URL %s is not lower case', servReq.url);
        try {
            fixLowercaseUrl(servReq);
        } catch (err) {
            log.error('Error fixing URL: %s, sending 502', err);
            send502(servReq, servRes);
            return;
        }
        // ...
    }
}

It's easy to find a node.js logging library that supports this, or you can make your own. Here's a simple gist that I use for logging (download from my gist):

var log = (function () {
    "use strict";
    var format = require('util').format;

    // Customize levels, but do not name any level: "log" "getLevel" "setLevel"
    var levels = {
        ERROR: 10,
        WARN: 20,
        INFO: 30,
        DEBUG: 40,
        TRACE: 50
    };
    var currentLevel = 'INFO';

    function doGetLevel() {
        return currentLevel;
    }

    function doSetLevel(level) {
        if (!levels[level]) {
            throw new Error('No such level ' + level);
        }
        currentLevel = level;
    }

    function doLog(level, otherArgs) {
        if (!levels[level] || levels[level] > levels[currentLevel]) {
            return;
        }
        var args = [].slice.call(arguments);
        args.shift();
        console.log(level + ': ' + format.apply(this, args));
    }

    var toRet = {
        log: doLog,
        getLevel: doGetLevel,
        setLevel: doSetLevel
    };

    Object.getOwnPropertyNames(levels).forEach(function (level) {
        this[level.toLowerCase()] = function () {
            var args = [].slice.call(arguments);
            args.unshift(level);
            return doLog.apply(this, args);
        };
    }, toRet);

    return toRet;
})();

2. Defer string concatenation

What's the difference between this call

log.debug('The request is ' + util.inspect(servReq) + ', and the response is ' + util.inspect(servRes));

and this one?

log.debug('The request is ', servReq, ', and the response is ', servRes);

These statements both produce the same output. Let's consider what happens if you're not running at the debug log level. Suppose we're running at logLevel INFO (the default in my simpleLog library). Let's see what happens in the first call. Our code says to call log.debug() with one argument. The argument is a long string formed by concatenating 4 smaller strings:

- 'The request is '
- util.inspect(servReq)
- ', and the response is '
- util.inspect(servRes)

You can check out the implementation of util.inspect() and probably notice that there's a lot of things going on, including recursive calls. This is all building up lots of little strings, which are concatenated into bigger strings and finally returned from util.inspect().
The javascript language is left-associative for string concatenation, but different engines may optimize as long as there aren't observable side effects; conceptually you can think about the costs by thinking about the tree of strings that the engine is building. The 2 hard-coded strings and the 2 strings returned from util.inspect() are concatenated one last time into the argument to log.debug(). Then we call log.debug(). If you're using my simpleLog, you can see that the first thing it does is bail out without doing anything:

if (!levels[level] || levels[level] > levels[currentLevel]) {
    return;
}

So we went through all the effort of building up a big string detailing all the aspects of the server request and server response, just to unceremoniously discard it. What about the other style of call (spoiler alert: it's better :-)? Remember the second form passed four arguments; 2 arguments are just plain strings, and the other 2 are objects that we want to dump:

log.debug('The request is ', servReq, ', and the response is ', servRes);

This style is taking advantage of a console.log() behavior: "If formatting elements are not found in the first string, then util.inspect is used on each argument." I'm breaking up the arguments and asking console.log() to call util.inspect() on my behalf. Why? What's the advantage here? This structure avoids calling util.inspect() until it is needed. In this case, if we're at loglevel INFO, it's not called at all. All those recursive calls and concatenating tiny strings into bigger and bigger ones? Skipped entirely. Here's what happens:

1. The four separate arguments (2 strings, 2 objects) are passed to the log.debug() function.
2. The log.debug() function checks the loglevel, sees that it is INFO, which means log.debug() calls should not actually result in a log message.
3. The log.debug() function returns immediately, without even bothering to check any of the arguments.

The call to log.debug() is pretty cheap, too. The 2 strings are compile-time static, so it's easy for a smart optimizing engine like v8 to avoid making brand new strings each time. The 2 objects (servReq and servRes) were already created; v8 is also performant when it comes to passing references to objects.

The log.debug() helper, Node's built-in console.log(), and logging libraries on NPM let you defer by using either the comma-separated arguments approach, or the formatted string (%s) style. A couple of final notes on this topic: First, the optimization in console.log() is awesomeness straight from plain node.js; LineRate's node.js scripting engine just inherited it. Second, remember that JavaScript allows overriding the Object.prototype.toString() method to customize how objects are stringified. util.inspect() has code to avoid invoking toString() on an object it is inspecting, but the simple '+' operator does not. A simple statement like this:

console.log('x is ' + x);

could be invoking x.toString() without you knowing it. If you didn't write x.toString(), you may have no idea how expensive it is.

3. Stdout/stderr is synchronous

In node.js, process.stdout and process.stderr are two special streams for process output, in the UNIX style. console.log() writes to process.stdout, and console.error() writes to process.stderr. It's important to know that writes to these streams may be synchronous: depending on whether the stream is connected to a TTY, file or pipe, node.js may ensure that the write succeeds before moving on.
Successful writes can include storing in an intermediate buffer, especially if you're using a TTY, so in normal operation this synchronous behavior is no problem. The performance pain comes if you're producing writes to stdout or stderr faster than the consumer can consume them: eventually the buffer will fill up. This causes node.js to block waiting on the consumer. Fundamentally, no amount of buffer can save you from a producer going faster than a consumer forever. There are only two choices: limit the producer to the rate of the consumer ("blocking"), or discard messages between the producer and consumer ("dropping"). But if your program's log production is bursty, then a buffer can help by smoothing out the producer. The risk is if the consumer fails while data is left in the buffer (this may be lost forever). LineRate's node.js environment provides a little bit of both:

- LineRate hooks your node.js script up to our system logging infrastructure; you can use our CLI or REST API to configure where the logs end up.
- We route logs to those configured destinations as fast as we can with buffering along the way.
- If the buffers are exceeded, we do drop messages (adding a message to the logs indicating that we dropped).
- We ensure the logging subsystem (our consumer) stays up even if your script fails with an error.

All this ensures you get the best logs you can, and protects you from catastrophically blocking in node.js.

4. Get counters on demand

One last approach we've used to reduce logging is to provide a mechanism to query stats from our node.js scripts. LineRate includes an excellent key-value store called redis. LineRate scripts can use redis to store and retrieve arbitrary data, like script-specific counters. If all you're trying to do is count page requests for a specific URL, you could use a simple script like:

function serveCounter(servRes, counter) {
    redisClient.get(counter, function (error, buffer) {
        if (error) {
            servRes.writeHead(502);
            servRes.end();
        } else {
            servRes.writeHead(200, {
                'Content-Type': 'application/json',
                'Content-Length': buffer.length
            });
            servRes.end(buffer);
        }
    });
}

vsm.on('exist', 'vsStats', function (vs) {
    vs.on('request', function (servReq, servRes, cliReq) {
        if (servReq.url === '/path/to/counterA') {
            serveCounter(servRes, 'counterA');
        } else {
            cliReq(); // Or maybe an error?
        }
    });
});

or as fancy as a mini-webserver that serves back an HTML webpage comprising a dashboard and client-side AJAX to periodically pull down new stats and plot them. If you're in plain node.js, you'd use http.createServer() instead of vs.on('request', ...) and you'd have to generate an error page yourself instead of calling cliReq(), but the idea is the same.

Conclusion

I've presented a summary of good and bad logging behaviors in node.js. The combination of JavaScript's syntactic simplicity and node.js' awesome non-blocking I/O capabilities mean that you can spend more time on your business logic and less time shuffling bytes around. But, there are a few sharp corners. Logging libraries can smooth out those sharp corners, as long as you're aware of how to use them and the problems like output buffering that they can't solve. All of my advice applies to node.js and LineRate's node.js scripting engine. I hope it serves you well for both. If you're looking for a proxy you can program in node.js, or if you'd just like to try out the examples, remember you can deploy LineRate today for free.
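
One gap worth noting in the section 4 example: it only reads counterA back out of redis, so something still has to increment it. A minimal sketch of that side, reusing the same redisClient and assuming an application virtual server named vsApp (both names are illustrative):

vsm.on('exist', 'vsApp', function (vs) {
    vs.on('request', function (servReq, servRes, cliReq) {
        // count every request, then let it through to the real servers
        redisClient.incr('counterA');
        cliReq();
    });
});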

How can I create member with name using powershell cmdlet?

How can you create pool members with descriptive names? When I create a new VM, I'm able to automatically add it to a pool:

Add-F5.LTMPoolMember -Pool $PoolName -Member "${VMIP}:${Port}"

However, the name of the node is its IP address. I've also tried using the more low-level way of adding a node:

$PoolList = @($PoolName)
$Node = New-Object -TypeName iControl.CommonAddressPort
$Node.address = $VMIP
$Node.port = $Port
(Get-F5.iControl).LocalLBPool.add_member_v2($PoolList, $Node)

I can't find any way to change the node name with add_member_v2.
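
One approach that may work, hedged because the exact iControl signatures should be checked against the SDK documentation: create the node first with an explicit name via the LocalLB::NodeAddressV2 interface, then reference that name (rather than the IP) when adding the pool member.

# hypothetical sketch - verify method signatures against the iControl SDK before use
$NodeName = "/Common/web-${VMName}"   # $VMName is an assumed variable holding a descriptive name
$ic = Get-F5.iControl

# create the node with a name and an address (0 = no connection limit)
$ic.LocalLBNodeAddressV2.create(@($NodeName), @($VMIP), @(0))

# then add the named node to the pool instead of the raw IP
$Member = New-Object -TypeName iControl.CommonAddressPort
$Member.address = $NodeName
$Member.port = $Port
(Get-F5.iControl).LocalLBPool.add_member_v2(@($PoolName), $Member)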

Query Current Connections at the Node Level

I am working on PowerShell scripts to do automated deployments to our servers behind our BIG-IP LTM. I have simple scripts that use the iControl PowerShell cmdlets:

Disable-F5.LTMNodeAddress -Node xxx.xxx.xxx.xxx

These work quite well. However, what I need next is a way to query the Current Connections to the node as they bleed off, so that my automation doesn't begin the deployment until current connections = 0. I'm assuming I'm just not formatting my searches right, as someone must have figured this out by now. Any help would be greatly appreciated. Thanks!
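
Interactively, tmsh show ltm node <address> displays the ServerSide Current Connections counter. For automation, the same statistic is also exposed through the iControl LocalLB::NodeAddressV2 statistics call; the sketch below is hypothetical, and the property and statistic names are from memory, so verify them against the iControl SDK before relying on it:

# hypothetical polling loop - node name and property names need verifying for your version
$ic = Get-F5.iControl
do {
    Start-Sleep -Seconds 10
    $stats = $ic.LocalLBNodeAddressV2.get_statistics(@("/Common/10.0.0.10"))
    $cur = ($stats.statistics[0].statistics |
            Where-Object { $_.type -eq "STATISTIC_SERVER_SIDE_CURRENT_CONNECTIONS" }).value.low
    Write-Host "Current connections: $cur"
} while ($cur -gt 0)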

How do I log information from a nodejs based LTM external monitor?

How can I log something from a nodejs based LTM external monitor? I have my monitor script working, and if I write a message like this, the script regards the monitor as up:

console.log("Success!");

Are these messages to stdout logged anywhere where I can see the record of them? If not, if I wanted to log something from my external monitor script (say perhaps to /var/log/ltm, or even some other location like /var/log/monitor), how would I do it?
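
A hedged sketch of two common options (the file path /var/log/monitor and the message text are just examples): write to your own file with fs, or hand messages to syslog via the logger command, since local0.* messages are normally routed to /var/log/ltm on the BIG-IP. Keep stdout for the up/down signal only, because any stdout output marks the member up.

var fs = require('fs');
var execFile = require('child_process').execFile;

function monitorLog(msg) {
    var line = new Date().toISOString() + ' ' + msg + '\n';
    // option 1: append to a dedicated file (create it first and handle rotation yourself)
    fs.appendFileSync('/var/log/monitor', line);
    // option 2: send to syslog; local0 messages typically end up in /var/log/ltm
    execFile('logger', ['-p', 'local0.notice', 'external-monitor: ' + msg]);
}

monitorLog('probe starting');
// ... run the actual health check here ...
// only write to stdout once the member should be marked up
console.log('Success!');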

Persist based on query string

Our QA team needs a way to specify a backend node via a query string and have all subsequent queries persist to that node for testing purposes. I have written the following iRule which sends the request to a specified node - the problem is that associated requests to things like images, javascript, and style sheets don't match the iRule and thus get sent to a random backend web server:

# DESCRIPTION: If the URI contains a query parameter named "server", this iRule
# attempts to match the server name to a datagroup named servername2ip_datagroup
# and uses that to send the user to the appropriate back-end server.
# 1) This rule relies on the servername2ip_datagroup datagroup, which is a
#    server-name-to-IP datagroup on the load balancer. This needs to be
#    maintained / updated as server IPs or names change.
when HTTP_REQUEST {
    # If the URI contains a query parameter named server
    if { [HTTP::uri] contains "server" } {
        # Define a lowercase variable to store the server name
        set webserver [URI::query [string tolower [HTTP::uri]] server]
        # Define a variable to store the port to make this rule https/http agnostic
        set prt [TCP::local_port]
        # If the server query parameter matches an entry in the datagroup
        if { [class match $webserver equals servername2ip_datagroup] } {
            # Direct traffic to that node.
            node [class lookup $webserver servername2ip_datagroup] $prt
        }
    }
}

I think perhaps I need to add persistence after:

node [class lookup $webserver servername2ip_datagroup] $prt

I tried adding

persist source_addr 1800

But that's not working. Can any iRule gurus out there help me get this working? Is persistence what I need - if so, what's wrong with how I'm using it? Thanks, Brad
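
One untested idea (the cookie name qa_server is illustrative): the server query parameter only appears on the first request, so the selection has to be carried on something the browser sends with every follow-on request for images, scripts, and style sheets; a cookie works for that. Sketch:

when HTTP_REQUEST {
    set webserver ""
    # first preference: an explicit ?server= query parameter
    if { [HTTP::uri] contains "server=" } {
        set webserver [URI::query [string tolower [HTTP::uri]] server]
    } elseif { [HTTP::cookie exists "qa_server"] } {
        # otherwise fall back to the cookie set on an earlier response
        set webserver [HTTP::cookie "qa_server"]
    }
    if { $webserver ne "" && [class match $webserver equals servername2ip_datagroup] } {
        node [class lookup $webserver servername2ip_datagroup] [TCP::local_port]
    }
}
when HTTP_RESPONSE {
    # remember the choice so embedded objects and later requests follow the same server
    if { [info exists webserver] && $webserver ne "" } {
        HTTP::cookie insert name "qa_server" value $webserver path "/"
    }
}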

Send POST/GET to all nodes regardless of status

I am trying to send a POST or a GET to all nodes in a pool regardless of the node status. I pieced together the following code. The issue is that the website is not ready yet, so I am working with 404s and checking IIS logs. We want the request to be sent to each node once regardless of the return code. Then we want to display a page to the user with some status. I use curl to send POST data to each node, but I get "curl: (56) Recv failure: Connection reset by peer". If I visit the page in a browser (GET) it sends 250 requests to each node. It works, but it creates 250 requests per node when the page is visited in a browser; the goal is to send 1 POST to each node when curl is used to post data:

curl --data "cacheclear=1" http://site.robert.com/clear

when HTTP_REQUEST {
    # set variables
    set pool [LB::server pool]
    set memberslist {}
    set members [members -list [LB::server pool]]
    set posturl "http://[HTTP::host][HTTP::uri]"
    # save original request
    set req [HTTP::request]
    # set reqcount to the total number of servers in assigned pool
    set reqcount [active_members [LB::server pool]]
    # look for the trigger in the URL/URI
    if { [string tolower [HTTP::uri]] eq "/clear" } {
        # send request to the vip default pool
        pool $pool
    }
}

# http retry only works in http_response
when HTTP_RESPONSE {
    # since no test page exists yet, working with 404 status. we can change this later and add error checking
    if { [HTTP::status] equals 404 } {
        # if request count is greater than 0, decrement variable and retry request
        if { $reqcount > 0 } {
            incr reqcount -1
            HTTP::retry $req
        }
        # respond to user
        set response "URL: $posturl Pool: $pool Members List: $memberslist Current Member: [LB::server addr] Request Info: $req Active Members:"
        HTTP::respond 200 content $response "Content-Type" "text/html"
    }
}