LineRate HTTP to HTTPS redirect
Here's a quick LineRate proxy code snippet to convert an HTTP request to an HTTPS request using the embedded Node.js engine. The relevant parts of the LineRate proxy config are below as well. By modifying the redirect_domain variable, you can redirect HTTP to HTTPS as well as perform a non-www to www redirect. For example, you can redirect a request for http://example.com to https://www.example.com. The original URI is simply appended to the redirected request, so a request for http://example.com/page1.html will be redirected to https://www.example.com/page1.html. This example uses the self-signed SSL certificate that is included in the LineRate distribution. This is fine for testing, but make sure to create a new SSL profile with your site certificate and key when going to production. As always, the scripting docs can be found here.

redirect.js: Put this script in the default scripts directory, /home/linerate/data/scripting/proxy/, and update the redirect_domain and redirect_type variables for your environment.

```javascript
"use strict";

var vsm = require('lrs/virtualServerModule');

// domain name to which to redirect
var redirect_domain = 'www.example.com';

// type of redirect. 301 = permanent, 302 = temporary
var redirect_type = 302;

vsm.on('exist', 'vs_example.com', function(vs) {
    console.log('Redirect script installed on Virtual Server: ' + vs.id);
    vs.on('request', function(servReq, servResp, cliReq) {
        servResp.writeHead(redirect_type, {
            'Location': 'https://' + redirect_domain + servReq.url
        });
        servResp.end();
    });
});
```

LineRate config:

```
real-server rs1
    ip address 10.1.2.100 80
    admin-status online
!
virtual-ip vip_example.com
    ip address 192.0.2.1 80
    admin-status online
!
virtual-ip vip_example.com_https
    ip address 192.0.2.1 443
    attach ssl profile self-signed
    admin-status online
!
virtual-server vs_example.com
    attach virtual-ip vip_example.com default
    attach real-server rs1
!
```
```
virtual-server vs_example.com_https
    attach virtual-ip vip_example.com_https default
    attach real-server rs1
!
script redirect
    source file "proxy/redirect.js"
    admin-status online
```

Example:

```
user@m1:~/ > curl -L -k -D - http://example.com/test
HTTP/1.1 302 Found
Location: https://www.example.com/test
Date: Wed, 03-Sep-2014 16:39:53 GMT
Transfer-Encoding: chunked

HTTP/1.1 200 OK
Content-Type: text/plain
Date: Wed, 03-Sep-2014 16:39:53 GMT
Transfer-Encoding: chunked

hello world
```

LineRate Performance Tip: Logging in Node.js
LineRate's Node.js engine lets you program in the datapath with JavaScript. Once you start embedding business logic in your proxy, you'll need to debug this logic and report on the actions your proxy is taking. This can take several forms, from the simple to the complex. When you're just getting started, nothing beats the venerable console.log() and util.inspect() for simplicity. Whether your service scales to millions of requests per second, or just a trickle, there are some simple performance considerations that you should keep in mind. Knowing these up front prevents you from learning expensive habits that are harder to fix later.

1. Use log levels

If your logging is impacting performance, one easy way to fix it is to channel your inner Bob Newhart: "Stop it!" But you don't want to be left without any logs when you're debugging a problem. A very common approach, regardless of language, is a logging helper that respects log levels like "error", "warning", "info" or "debug". Your code contains all the logging statements you could need, but by tuning the logging level you adjust which ones fire.

```javascript
function handleRequest(servReq, servRes, cliReq) {
    log.debug('Caught new request for %s at %s', servReq.url, new Date());
    if (servReq.url.toLowerCase() != servReq.url) {
        log.warning('Request URL %s is not lower case', servReq.url);
        try {
            fixLowercaseUrl(servReq);
        } catch (err) {
            log.error('Error fixing URL: %s, sending 502', err);
            send502(servReq, servRes);
            return;
        }
        // ...
    }
}
```

It's easy to find a node.js logging library that supports this, or you can make your own.
Here's a simple gist that I use for logging (download from my gist):

```javascript
var log = (function () {
    "use strict";
    var format = require('util').format;

    // Customize levels, but do not name any level: "log" "getLevel" "setLevel"
    var levels = { ERROR: 10, WARN: 20, INFO: 30, DEBUG: 40, TRACE: 50 };
    var currentLevel = 'INFO';

    function doGetLevel() {
        return currentLevel;
    }

    function doSetLevel(level) {
        if (!levels[level]) {
            throw new Error('No such level ' + level);
        }
        currentLevel = level;
    }

    function doLog(level, otherArgs) {
        if (!levels[level] || levels[level] > levels[currentLevel]) {
            return;
        }
        var args = [].slice.call(arguments);
        args.shift();
        console.log(level + ': ' + format.apply(this, args));
    }

    var toRet = {
        log: doLog,
        getLevel: doGetLevel,
        setLevel: doSetLevel
    };

    Object.getOwnPropertyNames(levels).forEach(function (level) {
        this[level.toLowerCase()] = function () {
            var args = [].slice.call(arguments);
            args.unshift(level);
            return doLog.apply(this, args);
        };
    }, toRet);

    return toRet;
})();
```

2. Defer string concatenation

What's the difference between this call

```javascript
log.debug('The request is ' + util.inspect(servReq) +
          ', and the response is ' + util.inspect(servRes));
```

and this one?

```javascript
log.debug('The request is ', servReq, ', and the response is ', servRes);
```

These statements both produce the same output. Let's consider what happens if you're not running at the debug log level. Suppose we're running at log level INFO (the default in my simpleLog library). Let's see what happens in the first call. Our code says to call log.debug() with one argument. The argument is a long string formed by concatenating 4 smaller strings:

'The request is '
util.inspect(servReq)
', and the response is '
util.inspect(servRes)

You can check out the implementation of util.inspect() and probably notice that there's a lot going on, including recursive calls. This is all building up lots of little strings, which are concatenated into bigger strings and finally returned from util.inspect().
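You can see the cost difference empirically with a stand-in for util.inspect() that counts how often it runs (fakeInspect, inspectCalls and debugLog are illustrative names for this sketch, not real APIs):

```javascript
// Demonstrates that eager concatenation pays the inspection cost even when
// the message is suppressed, while deferred arguments do not.
var inspectCalls = 0;
function fakeInspect(obj) {        // stand-in for util.inspect()
  inspectCalls += 1;
  return JSON.stringify(obj);
}

var currentLevel = 'INFO';         // DEBUG messages are suppressed
function debugLog() {
  if (currentLevel !== 'DEBUG') { return; }  // bail out before any formatting
  var args = [].slice.call(arguments).map(function (a) {
    return typeof a === 'object' ? fakeInspect(a) : a;
  });
  console.log(args.join(''));
}

var servReq = { url: '/index.html' };

// Eager: fakeInspect() runs before debugLog() is even called.
debugLog('The request is ' + fakeInspect(servReq));
console.log(inspectCalls);  // 1 -- the work was done, then the output discarded

// Deferred: debugLog() bails out first, so fakeInspect() never runs.
debugLog('The request is ', servReq);
console.log(inspectCalls);  // still 1
```

The eager form did the inspection work in the caller, before the log level was ever consulted; the deferred form left the objects untouched.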
The JavaScript language is left-associative for string concatenation, but different engines may optimize as long as there aren't observable side effects; conceptually, you can think about the costs by picturing the tree of strings that the engine is building. The 2 hard-coded strings and the 2 strings returned from util.inspect() are concatenated one last time into the argument to log.debug(). Then we call log.debug(). If you're using my simpleLog, you can see that the first thing it does is bail out without doing anything:

```javascript
if (!levels[level] || levels[level] > levels[currentLevel]) {
    return;
}
```

So we went through all the effort of building up a big string detailing all the aspects of the server request and server response, just to unceremoniously discard it. What about the other style of call (spoiler alert: it's better :-)? Remember the second form passed four arguments; 2 arguments are just plain strings, and the other 2 are objects that we want to dump:

```javascript
log.debug('The request is ', servReq, ', and the response is ', servRes);
```

This style is taking advantage of a console.log() behavior: "If formatting elements are not found in the first string, then util.inspect is used on each argument." I'm breaking up the arguments and asking console.log() to call util.inspect() on my behalf. Why? What's the advantage here? This structure avoids calling util.inspect() until it is needed. In this case, if we're at log level INFO, it's not called at all. All those recursive calls and concatenating tiny strings into bigger and bigger ones? Skipped entirely. Here's what happens:

1. The four separate arguments (2 strings, 2 objects) are passed to the log.debug() function.
2. The log.debug() function checks the log level and sees that it is INFO, which means log.debug() calls should not actually result in a log message.
3. The log.debug() function returns immediately, without even bothering to check any of the arguments.

The call to log.debug() is pretty cheap, too.
The 2 strings are compile-time static, so it's easy for a smart optimizing engine like v8 to avoid making brand new strings each time. The 2 objects (servReq and servRes) were already created; v8 is also performant when it comes to passing references to objects. The log.debug() helper, Node's built-in console.log(), and logging libraries on NPM let you defer by using either the comma-separated arguments approach or the formatted string (%s) style.

A couple of final notes on this topic. First, the optimization in console.log() is awesomeness straight from plain node.js; LineRate's node.js scripting engine just inherited it. Second, remember that JavaScript allows overriding the Object.prototype.toString() method to customize how objects are stringified. util.inspect() has code to avoid invoking toString() on an object it is inspecting, but the simple '+' operator does not. A simple statement like this:

```javascript
console.log('x is ' + x);
```

could be invoking x.toString() without you knowing it. If you didn't write x.toString(), you may have no idea how expensive it is.

3. Stdout/stderr is synchronous

In node.js, process.stdout and process.stderr are two special streams for process output, in the UNIX style. console.log() writes to process.stdout, and console.error() writes to process.stderr. It's important to know that writes to these streams may be synchronous: depending on whether the stream is connected to a TTY, file or pipe, node.js may ensure that the write succeeds before moving on. Successful writes can include storing in an intermediate buffer, especially if you're using a TTY, so in normal operation this synchronous behavior is no problem. The performance pain comes if you're producing writes to stdout or stderr faster than the consumer can consume them: eventually the buffer will fill up. This causes node.js to block waiting on the consumer. Fundamentally, no amount of buffer can save you from a producer going faster than a consumer forever.
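When the buffer does fill, something has to give. Here's a toy in-memory sketch of one way out, discarding messages rather than blocking the producer (illustrative only, with made-up names; this is not how LineRate's logging subsystem is actually implemented):

```javascript
// Toy bounded log buffer: once full, new messages are dropped and counted
// instead of blocking the producer.
function makeDroppingBuffer(capacity) {
  var buffer = [];
  var dropped = 0;
  return {
    // Producer side: returns false when the message had to be dropped.
    write: function (msg) {
      if (buffer.length >= capacity) {
        dropped += 1;          // consumer too slow: discard, don't block
        return false;
      }
      buffer.push(msg);
      return true;
    },
    // Consumer side: drain whatever accumulated, noting any losses.
    drain: function () {
      var out = buffer;
      buffer = [];
      if (dropped > 0) {
        out.push('[' + dropped + ' messages dropped]');
        dropped = 0;
      }
      return out;
    }
  };
}

var buf = makeDroppingBuffer(2);
['a', 'b', 'c', 'd'].forEach(function (m) { buf.write(m); });
console.log(buf.drain());  // [ 'a', 'b', '[2 messages dropped]' ]
```

The producer never stalls, at the price of losing messages; the marker entry at least tells the reader of the logs that something was lost.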
There are only two choices: limit the producer to the rate of the consumer ("blocking"), or discard messages between the producer and consumer ("dropping"). But if your program's log production is bursty, then a buffer can help by smoothing out the producer. The risk is if the consumer fails while data is left in the buffer (it may be lost forever). LineRate's node.js environment provides a little bit of both: LineRate hooks your node.js script up to our system logging infrastructure; you can use our CLI or REST API to configure where the logs end up. We route logs to those configured destinations as fast as we can, with buffering along the way. If the buffers are exceeded, we do drop messages (adding a message to the logs indicating that we dropped). We ensure the logging subsystem (our consumer) stays up even if your script fails with an error. All this ensures you get the best logs you can, and protects you from catastrophically blocking in node.js.

4. Get counters on demand

One last approach we've used to reduce logging is to provide a mechanism to query stats from our node.js scripts. LineRate includes an excellent key-value store called redis. LineRate scripts can use redis to store and retrieve arbitrary data, like script-specific counters. If all you're trying to do is count page requests for a specific URL, you could use a simple script like:

```javascript
function serveCounter(servRes, counter) {
    redisClient.get(counter, function (error, buffer) {
        if (error) {
            servRes.writeHead(502);
            servRes.end();
        } else {
            servRes.writeHead(200, {
                'Content-Type': 'application/json',
                'Content-Length': buffer.length
            });
            servRes.end(buffer);
        }
    });
}

vsm.on('exist', 'vsStats', function (vs) {
    vs.on('request', function (servReq, servRes, cliReq) {
        if (servReq.url === '/path/to/counterA') {
            serveCounter(servRes, 'counterA');
        } else {
            cliReq(); // Or maybe an error?
        }
    });
});
```

or as fancy as a mini-webserver that serves back an HTML page comprising a dashboard and client-side AJAX to periodically pull down new stats and plot them. If you're in plain node.js, you'd use http.createServer() instead of vs.on('request', ...) and you'd have to generate an error page yourself instead of calling cliReq(), but the idea is the same.

Conclusion

I've presented a summary of good and bad logging behaviors in node.js. The combination of JavaScript's syntactic simplicity and node.js' awesome non-blocking I/O capabilities means that you can spend more time on your business logic and less time shuffling bytes around. But there are a few sharp corners. Logging libraries can smooth out those sharp corners, as long as you're aware of how to use them and of the problems, like output buffering, that they can't solve. All of my advice applies to node.js and LineRate's node.js scripting engine. I hope it serves you well for both. If you're looking for a proxy you can program in node.js, or if you'd just like to try out the examples, remember you can deploy LineRate today for free.

How can I create member with name using powershell cmdlet?
How can you create pool members with descriptive names? When I create a new VM, I'm able to automatically add it to a pool:

```powershell
Add-F5.LTMPoolMember -Pool $PoolName -Member "${VMIP}:${Port}"
```

However, the name of the node is its IP address. I've also tried using the more low-level way of adding a node:

```powershell
$PoolList = @($PoolName)
$Node = New-Object -TypeName iControl.CommonAddressPort
$Node.address = $VMIP
$Node.port = $Port
(Get-F5.iControl).LocalLBPool.add_member_v2($PoolList, $Node)
```

I can't find any way to change the node name with add_member_v2.

Query Current Connections at the Node Level
I am working on PowerShell scripts to do automated deployments to our servers behind our BIG-IP LTM. I have simple scripts that use the iControl PowerShell cmdlets:

```powershell
Disable-F5.LTMNodeAddress -Node xxx.xxx.xxx.xxx
```

These work quite well. However, what I need next is a way to query the current connections to the node as they bleed off, so that my automation doesn't begin the deployment until current connections = 0. I'm assuming I'm just not formatting my searches right, as someone must have figured this out by now. Any help would be greatly appreciated. Thanks!

How do I log information from a nodejs based LTM external monitor?
How can I log something from a nodejs based LTM external monitor? I have my monitor script working, and if I write a message like this, the script regards the monitor as up:

```javascript
console.log("Success!");
```

Are these messages to stdout logged anywhere where I can see a record of them? If not, if I wanted to log something from my external monitor script (say, to /var/log/ltm, or even some other location like /var/log/monitor), how would I do it?

Persist based on query string
Our QA team needs a way to specify a backend node via a query string and have all subsequent requests persist to that node for testing purposes. I have written the following iRule, which sends the request to a specified node. The problem is that associated requests for things like images, JavaScript and style sheets don't match the iRule and thus get sent to a random backend web server.

DESCRIPTION: If the URI contains a query parameter named server, this iRule attempts to match the server name against a datagroup named servername2ip_datagroup and use that to send the user to the appropriate backend server. This rule relies on the servername2ip_datagroup datagroup, which is a server-name-to-IP datagroup on the load balancer. This needs to be maintained / updated as server IPs or names change.

```tcl
when HTTP_REQUEST {
    # If the uri contains a query parameter named server
    if { [HTTP::uri] contains "server" } {
        # Define a lowercase variable to store the server name
        set webserver [URI::query [string tolower [HTTP::uri]] server]
        # Define a variable to store the port to make this rule https/http agnostic
        set prt [TCP::local_port]
        # If the server query parameter matches an entry in the datagroup
        if { [class match $webserver equals servername2ip_datagroup] } {
            # Direct traffic to that node.
            node [class lookup $webserver servername2ip_datagroup] $prt
        }
    }
}
```

I think perhaps I need to add persistence after:

```tcl
node [class lookup $webserver servername2ip_datagroup] $prt
```

I tried adding:

```tcl
persist source_addr 1800
```

But that's not working. Can any iRule gurus out there help me get this working? Is persistence what I need, and if so, what's wrong with how I'm using it? Thanks, Brad

Send POST/GET to all nodes regardless of status
I am trying to send a POST or a GET to all nodes in a pool regardless of the node status. I pieced together the following code. The issue is that the website is not ready yet, so I am working with 404s and checking IIS logs. We want the request to be sent to each node once, regardless of the return code. Then we want to display a page to the user with some status. I use curl to send the POST data to each node, but I get "curl: (56) Recv failure: Connection reset by peer". If I visit the page in a browser (GET), it sends 250 requests to each node. In short, it:

- works, but creates 250 requests when the page is visited in a browser
- sends 1 POST to each node if curl is used to post data: curl --data "cacheclear=1" http://site.robert.com/clear

```tcl
when HTTP_REQUEST {
    # set variables
    set pool [LB::server pool]
    set memberslist {}
    set members [members -list [LB::server pool]]
    set posturl "http://[HTTP::host][HTTP::uri]"
    # save original request
    set req [HTTP::request]
    # set reqcount to the total number of servers in assigned pool
    set reqcount [active_members [LB::server pool]]
    # look for the trigger in the URL/URI
    if { [string tolower [HTTP::uri]] eq "/clear" } {
        # send request to the vip default pool
        pool $pool
    }
}

# HTTP::retry only works in HTTP_RESPONSE
when HTTP_RESPONSE {
    # since no test page exists yet, working with 404 status; we can change this later and add error checking
    if { [HTTP::status] equals 404 } {
        # if request count is greater than 0, decrement variable and retry request
        if { $reqcount > 0 } {
            incr reqcount -1
            HTTP::retry $req
        }
        # respond to user
        set response "URL: $posturl Pool: $pool Members List: $memberslist Current Member: [LB::server addr] Request Info: $req Active Members:"
        HTTP::respond 200 content $response "Content-Type" "text/html"
    }
}
```

Error when I try to assign a member to a Pool
When I execute this piece of code:

```python
pool = bigip.tm.ltm.pools.pool.create(name="Pool Name", partition='Common',
                                      description="First Pool",
                                      monitor="/Common/" + monitor.name)

# Create the members
node = pool.members_s.members.create(name="Node name", address=ip_address,
                                     partition='Common', description='First Node',
                                     monitor="/Common/icmp_tid")

# Update pool
pool.update()
```

I get the next error:

```
Text: '{"code":400,"message":"01070587:7: The requested monitor rule (/Common/icmp_tid on pool ) can only be applied to node addresses.
```

Can anyone explain what the issue is? When I try to create the node itself with the command mgmt.tm.ltm.nodes.node.create() and attach the monitor to it, I don't have any problem. But when I create it as a member of an existing pool, the error appears. Is there any way this can work, or is there any way of assigning an existing node as a member of a pool? Thanks

Dynamic port selection not working
Hello all. I'm trying to compose an iRule that will direct traffic to a dynamically chosen port in a pool, according to the URL the user uses. After much searching I got to the point where the node and the port are correctly selected, but the NLB disregards the node command and directs the traffic to the original port. The URL is made of 3 letters for the service and 3 digits for the wanted inside component; together they compose the destination port. The user uses HTTPS (443), but the NLB has to direct the traffic to "member:composed-port" according to the URL. The VIP has address and port translation enabled; to be sure of that, I included those commands in the iRule. The member in the pool is defined with "port=all services".

```tcl
when RULE_INIT {
    # 0 = none, 1 = debug, 2 = verbose
    set static::APsp_Debug 2
}

when CLIENT_ACCEPTED {
    translate address enable
    translate port enable
}

when HTTP_REQUEST priority 1 {
    # Extract the last 3 chars from the hostname (e.g. 200 from ADM200.company.com)
    set APsp_inside_code [string range [getfield [HTTP::host] "." 1] end-2 end]
    # Extract the first 3 chars from the hostname (e.g. ADM from ADM200.company.com)
    set APsp_service_code [string range [getfield [HTTP::host] "." 1] 0 2]
    switch -glob [string tolower $APsp_service_code] {
        "adm" { set APsp_dest_port "60$APsp_inside_code" }
        "rst" { set APsp_dest_port "64$APsp_inside_code" }
        default {
            log local0.error "service code not found. [HTTP::host][HTTP::uri]"
            HTTP::respond 404 "Not Found"
        }
    }
}

when LB_SELECTED priority 1 {
    set APsp_dest_node [LB::server addr]
    # replace the host header so the server will think that this is the original request
    HTTP::header replace Host "company.co.il"
    # go to the load balanced member, but with the needed port
    if { $static::APsp_Debug > 0 } {
        log local0.info "LBserver= [LB::server addr] node=$APsp_dest_node port=$APsp_dest_port"
    }
    node $APsp_dest_node:$APsp_dest_port
    log local0.info "after node command LBserver= [LB::server]"
}

when LB_FAILED {
    log local0.error "Selected server $APsp_dest_node:$APsp_dest_port is not responding"
    HTTP::respond 404 "Not Found"
}

when SERVER_CONNECTED {
    if { $static::APsp_Debug > 0 } {
        log local0.info "serverport: [TCP::server_port]"
    }
}
```

Here are the debug messages:

```
Rule /Common/Event_Logger : Client 10.99.99.99:54565 requested http(s)://adm200.company.com/appbuilder/forms?code=8.
Rule /Common/Event_Logger : Client 10.99.99.99:54565 request DIDN'T match any policy rule.
Rule /Common/MY_select_port : LBserver= 10.237.214.28 node=10.237.214.28 port=60200
Rule /Common/MY_select_port : after node command LBserver= 10.237.214.28 60200
Rule /Common/Event_Logger : Client 10.99.99.99:54565 farwarded to 10.237.214.28 60200 /appbuilder/forms?code=8.
Rule /Common/Event_Logger : Client 10.99.99.99:54565 connected from 10.237.214.253:54565 to node 10.237.214.28:443.
Rule /Common/MY_select_port : serverport: 443
Rule /Common/Event_Logger : Client 10.99.99.99:54565 sending request to 10.237.214.28:443.
Rule /Common/Event_Logger : Client 10.99.99.99:54565 releasing request to 10.237.214.28:443.
Rule /Common/Event_Logger : Client 10.99.99.99:54565 got a response from 10.237.214.28:443.
Rule /Common/Event_Logger : Client 10.99.99.99:54565 404 response released from 10.237.214.28:443
Rule /Common/Event_Logger : Connection from 10.237.214.253:54565 to Server 10.237.214.28:443 has closed.
```
As you can see, the node command made the correct selection, but the server connection went on with port 443.

The pool definition:

```
ltm pool /Common/service_pool {
    description
    load-balancing-mode observed-member
    members {
        /Common/10.237.214.28:0 {
            address 10.237.214.28
        }
        /Common/10.237.214.29:0 {
            address 10.237.214.29
        }
    }
    monitor /Common/gateway_icmp
}
```

Thanks in advance. Gil.