Programmability

Getting Started with iControl: Working with Statistics
In the previous article in the Getting Started with iControl series, we threw the lion's share of languages at you for both iControl portals, exposing you to many of the possible angles of attack available for programmatically interacting with the BIG-IP. In this article, we're going to scale back a bit, focusing on a few REST-based examples of how to gather statistics.

Note: This article was published by Jason Rahm but co-authored with Joe Pruitt.

Statistics on BIG-IP

As much as you can do with iControl, it isn't the only methodology for gathering statistics. In fact, I'd argue that there are at least two better ways to get at them, besides the on-box RRD graphs.

SNMP - Yes, it's old school. But it has persisted as long as it has for a reason. It's broad and lightweight. It does, however, require client-side software to do something with all that data.

AVR - By provisioning the AVR module on BIG-IP, you can configure an expansive set of metrics to measure. Combined with syslog to send the data off-box, this push approach is also pretty lightweight, though as with SNMP, having a tool to consume those stats is helpful. Ken Bocchino released an analytics iApp that combines AVR and its data output with Splunk and its indexing ability to deliver quite a solution.

But maybe you don't have those configured, or available, and you just need to do some lab testing and would like to consume those stats locally as you generate your test traffic. In these code samples, we'll take a look at virtual server and pool stats.

Pool Stats

At the pool level, several stats are available for consumption, primarily around bandwidth utilization and connection counts. You can also get state information, though I'd argue that's not really a statistic, unless I suppose you are counting how often the state is available. With SNMP, the stats come in high/low form, so you have to do some bit shifting to arrive at the actual value (see the short example below). Not so with iControl: the value comes as is. In each of the samples that follow, you'll see two functions. The first is the list function; when you run the script you will get a list of pools so you can then specify which pool you want stats for. The second function will then take that pool as an argument and return the statistics.
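As an aside, here is what that SNMP reconstruction involves. This is a minimal sketch with made-up counter values, shown only to illustrate why getting the value as-is from iControl is nicer:

```python
# Hypothetical values: SNMP splits a 64-bit counter into two 32-bit halves.
high = 2            # high-order 32 bits
low = 1410065408    # low-order 32 bits

# Shift the high half up 32 bits and OR in the low half.
total = (high << 32) | low
print total         # 10000000000
```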
Node.js Pool Stats

```javascript
//--------------------------------------------------------
function getPoolList(name) {
//--------------------------------------------------------
    var uri = "/mgmt/tm/ltm/pool";
    handleVERB("GET", uri, null, function(json) {
        var obj = JSON.parse(json);
        var items = obj.items;
        console.log("POOLS");
        console.log("-----------");
        for (var i = 0; i < items.length; i++) {
            var fullPath = items[i].fullPath;
            console.log("  " + fullPath);
        }
    });
}

//--------------------------------------------------------
function getPoolStats(name) {
//--------------------------------------------------------
    var uri = "/mgmt/tm/ltm/pool/" + name + "/stats";
    handleVERB("GET", uri, null, function(json) {
        var obj = JSON.parse(json);
        var selfLink = obj.selfLink;
        var entries = obj.entries;
        for (var n in entries) {
            console.log("--------------------------------------");
            console.log("NAME                : " + getStatsDescription(entries[n].nestedStats, "tmName"));
            console.log("--------------------------------------");
            console.log("AVAILABILITY STATE  : " + getStatsDescription(entries[n].nestedStats, "status.availabilityState"));
            console.log("ENABLED STATE       : " + getStatsDescription(entries[n].nestedStats, "status.enabledState"));
            console.log("REASON              : " + getStatsDescription(entries[n].nestedStats, "status.statusReason"));
            console.log("SERVER BITS IN      : " + getStatsValue(entries[n].nestedStats, "serverside.bitsIn"));
            console.log("SERVER BITS OUT     : " + getStatsValue(entries[n].nestedStats, "serverside.bitsOut"));
            console.log("SERVER PACKETS IN   : " + getStatsValue(entries[n].nestedStats, "serverside.pktsIn"));
            console.log("SERVER PACKETS OUT  : " + getStatsValue(entries[n].nestedStats, "serverside.pktsOut"));
            console.log("CURRENT CONNECTIONS : " + getStatsValue(entries[n].nestedStats, "serverside.curConns"));
            console.log("MAXIMUM CONNECTIONS : " + getStatsValue(entries[n].nestedStats, "serverside.maxConns"));
            console.log("TOTAL CONNECTIONS   : " + getStatsValue(entries[n].nestedStats, "serverside.totConns"));
            console.log("TOTAL REQUESTS      : " + getStatsValue(entries[n].nestedStats, "totRequests"));
        }
    });
}
```

Powershell Pool Stats

```powershell
#----------------------------------------------------------------------------
function Get-PoolList()
#
# Description:
#   This function returns the list of pools
#
# Parameters:
#   None
#----------------------------------------------------------------------------
{
    $uri = "/mgmt/tm/ltm/pool";
    $link = "https://$Bigip$uri";
    $headers = @{};
    $headers.Add("ServerHost", $Bigip);

    $secpasswd = ConvertTo-SecureString $Pass -AsPlainText -Force
    $mycreds = New-Object System.Management.Automation.PSCredential ($User, $secpasswd)

    $obj = Invoke-RestMethod -Method GET -Headers $headers -Uri $link -Credential $mycreds

    $items = $obj.items;
    Write-Host "POOL NAMES";
    Write-Host "----------";
    for($i=0; $i -lt $items.length; $i++) {
        $name = $items[$i].fullPath;
        Write-Host "  $name";
    }
}

#----------------------------------------------------------------------------
function Get-PoolStats()
#
# Description:
#   This function returns the statistics for a pool
#
# Parameters:
#   Name - The name of the pool
#----------------------------------------------------------------------------
{
    # param block added here; it was missing in the original posting
    param(
        [string]$Name
    );
    $uri = "/mgmt/tm/ltm/pool/${Name}/stats";
    $link = "https://$Bigip$uri";
    $headers = @{};
    $headers.Add("ServerHost", $Bigip);

    $secpasswd = ConvertTo-SecureString $Pass -AsPlainText -Force
    $mycreds = New-Object System.Management.Automation.PSCredential ($User, $secpasswd)

    $obj = Invoke-RestMethod -Method GET -Headers $headers -Uri $link -Credential $mycreds
    $entries = $obj.entries;
    $names = $entries | get-member -MemberType NoteProperty | select -ExpandProperty Name;
    $desc = $entries | Select -ExpandProperty $names
    $nestedStats = $desc.nestedStats;

    Write-Host ("--------------------------------------");
    Write-Host ("NAME                : $(Get-StatsDescription $nestedStats 'tmName')");
    Write-Host ("--------------------------------------");
    Write-Host ("AVAILABILITY STATE  : $(Get-StatsDescription $nestedStats 'status.availabilityState')");
    Write-Host ("ENABLED STATE       : $(Get-StatsDescription $nestedStats 'status.enabledState')");
    Write-Host ("REASON              : $(Get-StatsDescription $nestedStats 'status.statusReason')");
    Write-Host ("SERVER BITS IN      : $(Get-StatsValue $nestedStats 'serverside.bitsIn')");
    Write-Host ("SERVER BITS OUT     : $(Get-StatsValue $nestedStats 'serverside.bitsOut')");
    Write-Host ("SERVER PACKETS IN   : $(Get-StatsValue $nestedStats 'serverside.pktsIn')");
    Write-Host ("SERVER PACKETS OUT  : $(Get-StatsValue $nestedStats 'serverside.pktsOut')");
    Write-Host ("CURRENT CONNECTIONS : $(Get-StatsValue $nestedStats 'serverside.curConns')");
    Write-Host ("MAXIMUM CONNECTIONS : $(Get-StatsValue $nestedStats 'serverside.maxConns')");
    Write-Host ("TOTAL CONNECTIONS   : $(Get-StatsValue $nestedStats 'serverside.totConns')");
    Write-Host ("TOTAL REQUESTS      : $(Get-StatsValue $nestedStats 'totRequests')");
}
```

Python Pool Stats

```python
def get_pool_list(bigip, url):
    try:
        pools = bigip.get("%s/ltm/pool" % url).json()
        print "POOL LIST"
        print "   ---------"
        for pool in pools['items']:
            print "   /%s/%s" % (pool['partition'], pool['name'])
    except Exception, e:
        print e

def get_pool_stats(bigip, url, pool):
    try:
        pool_stats = bigip.get("%s/ltm/pool/%s/stats" % (url, pool)).json()
        selflink = "https://localhost/mgmt/tm/ltm/pool/%s/~Common~%s/stats" % (pool, pool)
        nested_stats = pool_stats['entries'][selflink]['nestedStats']['entries']
        print ''
        print '  --------------------------------------'
        print '  NAME                : %s' % nested_stats['tmName']['description']
        print '  --------------------------------------'
        print '  AVAILABILITY STATE  : %s' % nested_stats['status.availabilityState']['description']
        print '  ENABLED STATE       : %s' % nested_stats['status.enabledState']['description']
        print '  REASON              : %s' % nested_stats['status.statusReason']['description']
        print '  SERVER BITS IN      : %s' % nested_stats['serverside.bitsIn']['value']
        print '  SERVER BITS OUT     : %s' % nested_stats['serverside.bitsOut']['value']
        print '  SERVER PACKETS IN   : %s' % nested_stats['serverside.pktsIn']['value']
        print '  SERVER PACKETS OUT  : %s' % nested_stats['serverside.pktsOut']['value']
        print '  CURRENT CONNECTIONS : %s' % nested_stats['serverside.curConns']['value']
        print '  MAXIMUM CONNECTIONS : %s' % nested_stats['serverside.maxConns']['value']
        print '  TOTAL CONNECTIONS   : %s' % nested_stats['serverside.totConns']['value']
        print '  TOTAL REQUESTS      : %s' % nested_stats['totRequests']['value']
    except Exception, e:
        print e
```
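Before moving on to virtual servers, it may help to see the shape of the stats document those three scripts are parsing. A request like the following returns the entries/nestedStats nesting you see the code walking; the host, pool name, and values here are hypothetical, and the output is abbreviated to just the fields used above:

```
$ curl -sku admin:admin https://<bigip>/mgmt/tm/ltm/pool/mypool/stats

{
  "entries": {
    "https://localhost/mgmt/tm/ltm/pool/mypool/~Common~mypool/stats": {
      "nestedStats": {
        "entries": {
          "tmName":                   { "description": "/Common/mypool" },
          "status.availabilityState": { "description": "available" },
          "serverside.bitsIn":        { "value": 123456 },
          "serverside.curConns":      { "value": 42 }
        }
      }
    }
  }
}
```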
Virtual Server Stats

The virtual server stats are very similar in nature to the pool stats, though you are getting an aggregate view of traffic to the virtual server, whereas there might be several pools of traffic being serviced by that single virtual server. In any event, if you compare the code below to the code from the pool functions, they are nearly functionally identical. The only differences should be discernible: the method calls themselves and the object properties that differ (clientside versus serverside, for example).

Node.js Virtual Server Stats

```javascript
//--------------------------------------------------------
function getVirtualList(name) {
//--------------------------------------------------------
    var uri = "/mgmt/tm/ltm/virtual";
    handleVERB("GET", uri, null, function(json) {
        var obj = JSON.parse(json);
        var items = obj.items;
        console.log("VIRTUALS");
        console.log("-----------");
        for (var i = 0; i < items.length; i++) {
            var fullPath = items[i].fullPath;
            console.log("  " + fullPath);
        }
    });
}

//--------------------------------------------------------
function getVirtualStats(name) {
//--------------------------------------------------------
    var uri = "/mgmt/tm/ltm/virtual/" + name + "/stats";
    handleVERB("GET", uri, null, function(json) {
        var obj = JSON.parse(json);
        var selfLink = obj.selfLink;
        var entries = obj.entries;
        for (var n in entries) {
            console.log("--------------------------------------");
            console.log("NAME                : " + getStatsDescription(entries[n].nestedStats, "tmName"));
            console.log("--------------------------------------");
            console.log("DESTINATION         : " + getStatsDescription(entries[n].nestedStats, "destination"));
            console.log("AVAILABILITY STATE  : " + getStatsDescription(entries[n].nestedStats, "status.availabilityState"));
            console.log("ENABLED STATE       : " + getStatsDescription(entries[n].nestedStats, "status.enabledState"));
            console.log("REASON              : " + getStatsDescription(entries[n].nestedStats, "status.statusReason"));
            console.log("CLIENT BITS IN      : " + getStatsValue(entries[n].nestedStats, "clientside.bitsIn"));
            console.log("CLIENT BITS OUT     : " + getStatsValue(entries[n].nestedStats, "clientside.bitsOut"));
            console.log("CLIENT PACKETS IN   : " + getStatsValue(entries[n].nestedStats, "clientside.pktsIn"));
            console.log("CLIENT PACKETS OUT  : " + getStatsValue(entries[n].nestedStats, "clientside.pktsOut"));
            console.log("CURRENT CONNECTIONS : " + getStatsValue(entries[n].nestedStats, "clientside.curConns"));
            console.log("MAXIMUM CONNECTIONS : " + getStatsValue(entries[n].nestedStats, "clientside.maxConns"));
            console.log("TOTAL CONNECTIONS   : " + getStatsValue(entries[n].nestedStats, "clientside.totConns"));
            console.log("TOTAL REQUESTS      : " + getStatsValue(entries[n].nestedStats, "totRequests"));
        }
    });
}
```

Powershell Virtual Server Stats

```powershell
#----------------------------------------------------------------------------
function Get-VirtualList()
#
# Description:
#   This function lists all virtual servers.
#
# Parameters:
#   None
#----------------------------------------------------------------------------
{
    $uri = "/mgmt/tm/ltm/virtual";
    $link = "https://$Bigip$uri";
    $headers = @{};
    $headers.Add("ServerHost", $Bigip);

    $secpasswd = ConvertTo-SecureString $Pass -AsPlainText -Force
    $mycreds = New-Object System.Management.Automation.PSCredential ($User, $secpasswd)

    $obj = Invoke-RestMethod -Method GET -Headers $headers -Uri $link -Credential $mycreds

    $items = $obj.items;
    # header text corrected; the original posting said "POOL NAMES" here
    Write-Host "VIRTUAL SERVER NAMES";
    Write-Host "--------------------";
    for($i=0; $i -lt $items.length; $i++) {
        $name = $items[$i].fullPath;
        Write-Host "  $name";
    }
}

#----------------------------------------------------------------------------
function Get-VirtualStats()
#
# Description:
#   This function returns the statistics for a virtual server
#
# Parameters:
#   Name - The name of the virtual server
#----------------------------------------------------------------------------
{
    param(
        [string]$Name
    );
    $uri = "/mgmt/tm/ltm/virtual/${Name}/stats";
    $link = "https://$Bigip$uri";
    $headers = @{};
    $headers.Add("ServerHost", $Bigip);

    $secpasswd = ConvertTo-SecureString $Pass -AsPlainText -Force
    $mycreds = New-Object System.Management.Automation.PSCredential ($User, $secpasswd)

    $obj = Invoke-RestMethod -Method GET -Headers $headers -Uri $link -Credential $mycreds
    $entries = $obj.entries;
    $names = $entries | get-member -MemberType NoteProperty | select -ExpandProperty Name;
    $desc = $entries | Select -ExpandProperty $names
    $nestedStats = $desc.nestedStats;

    Write-Host ("--------------------------------------");
    Write-Host ("NAME                : $(Get-StatsDescription $nestedStats 'tmName')");
    Write-Host ("--------------------------------------");
    Write-Host ("DESTINATION         : $(Get-StatsDescription $nestedStats 'destination')");
    Write-Host ("AVAILABILITY STATE  : $(Get-StatsDescription $nestedStats 'status.availabilityState')");
    Write-Host ("ENABLED STATE       : $(Get-StatsDescription $nestedStats 'status.enabledState')");
    Write-Host ("REASON              : $(Get-StatsDescription $nestedStats 'status.statusReason')");
    Write-Host ("CLIENT BITS IN      : $(Get-StatsValue $nestedStats 'clientside.bitsIn')");
    Write-Host ("CLIENT BITS OUT     : $(Get-StatsValue $nestedStats 'clientside.bitsOut')");
    Write-Host ("CLIENT PACKETS IN   : $(Get-StatsValue $nestedStats 'clientside.pktsIn')");
    Write-Host ("CLIENT PACKETS OUT  : $(Get-StatsValue $nestedStats 'clientside.pktsOut')");
    Write-Host ("CURRENT CONNECTIONS : $(Get-StatsValue $nestedStats 'clientside.curConns')");
    Write-Host ("MAXIMUM CONNECTIONS : $(Get-StatsValue $nestedStats 'clientside.maxConns')");
    Write-Host ("TOTAL CONNECTIONS   : $(Get-StatsValue $nestedStats 'clientside.totConns')");
    Write-Host ("TOTAL REQUESTS      : $(Get-StatsValue $nestedStats 'totRequests')");
}
```

Python Virtual Server Stats

```python
def get_virtual_list(bigip, url):
    try:
        vips = bigip.get("%s/ltm/virtual" % url).json()
        print "VIRTUAL SERVER LIST"
        print "   -------------------"
        for vip in vips['items']:
            print "   /%s/%s" % (vip['partition'], vip['name'])
    except Exception, e:
        print e

def get_virtual_stats(bigip, url, vip):
    try:
        vip_stats = bigip.get("%s/ltm/virtual/%s/stats" % (url, vip)).json()
        selflink = "https://localhost/mgmt/tm/ltm/virtual/%s/~Common~%s/stats" % (vip, vip)
        nested_stats = vip_stats['entries'][selflink]['nestedStats']['entries']
        print ''
        print '  --------------------------------------'
        print '  NAME                : %s' % nested_stats['tmName']['description']
        print '  --------------------------------------'
        print '  AVAILABILITY STATE  : %s' % nested_stats['status.availabilityState']['description']
        print '  ENABLED STATE       : %s' % nested_stats['status.enabledState']['description']
        print '  REASON              : %s' % nested_stats['status.statusReason']['description']
        print '  CLIENT BITS IN      : %s' % nested_stats['clientside.bitsIn']['value']
        print '  CLIENT BITS OUT     : %s' % nested_stats['clientside.bitsOut']['value']
        print '  CLIENT PACKETS IN   : %s' % nested_stats['clientside.pktsIn']['value']
        print '  CLIENT PACKETS OUT  : %s' % nested_stats['clientside.pktsOut']['value']
        print '  CURRENT CONNECTIONS : %s' % nested_stats['clientside.curConns']['value']
        print '  MAXIMUM CONNECTIONS : %s' % nested_stats['clientside.maxConns']['value']
        print '  TOTAL CONNECTIONS   : %s' % nested_stats['clientside.totConns']['value']
        print '  TOTAL REQUESTS      : %s' % nested_stats['totRequests']['value']
    except Exception, e:
        print e
```

Going Further

So now that you have scripts that will pull stats from the pools and virtual servers, what might you do to extend these? As an exercise in skill development, I'd suggest trying the tasks below. Note that there are dozens of existing articles on iControl that you can glean details from to help you in your quest.

- Given that you can pull back a list of pools or virtual servers by not providing one specifically, iterate through the appropriate list and print stats for all of them instead of just one (a minimal sketch of this follows the list).
- Extend that even further, building a table of virtual servers and their respective stats.
- These stats are a point in time, and not terribly useful as a single entity. Create a timer where the stats are pulled back every 10s or so and take a delta to display.
- Extend that even further by narrowing down to maybe the bits in and out, and graph them over time.
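Here's a minimal sketch of that first exercise in the article's Python style, reusing the get_pool_list and get_pool_stats functions shown above; it assumes the pools live in /Common, matching the selfLink that get_pool_stats builds:

```python
def get_all_pool_stats(bigip, url):
    # walk the pool collection and print stats for each pool,
    # rather than prompting for a single pool name
    pools = bigip.get("%s/ltm/pool" % url).json()
    for pool in pools['items']:
        get_pool_stats(bigip, url, pool['name'])
```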
Resources

All the scripts for this article are available in the codeshare under "Getting Started with iControl Code Samples."

Getting Started with iControl: History

tl;dr - iControl provides access to BIG-IP management plane services through SOAP and REST API interfaces.

The Early Days

iControl started back in early 2000. F5 had 2 main products: BIG-IP and 3-DNS (later GTM, now BIG-IP DNS). BIG-IP managed the local datacenter's traffic, while 3-DNS was the DNS orchestrator for all the BIG-IPs in numerous data centers. The two products needed a way to communicate with each other to ensure they were making the right traffic management decisions respective to all of the products in the system. At the time, the development team was focused on developing the fastest running code possible, and that idea found its way into the cross-product communication feature that was developed. The technology the team chose to use was the Common Object Request Broker Architecture (CORBA) as standardized by the Object Management Group (OMG). Coming on the heels of F5's first management product SEE-IT (which was another one of my babies), the dev team coined this internal feature "LINK-IT" since it "linked" the two products together.

With the development of our management, monitoring, and visualization product SEE-IT, we needed a way to get the data off of the BIG-IP. SEE-IT was written for Windows Server and we really didn't want to go down the route of integrating into the CORBA interface due to several factors. So, we wrote a custom XML provider on BIG-IP and 3-DNS to allow configuration and statistics data to be retrieved and consumed by SEE-IT. It was becoming clear to me that automation and customization of our products would be beneficial to our customers, who had previously been relying on our SNMP offerings. We now had 2 interfaces for managing and monitoring our devices: one purely internal (LINK-IT) and the other partially so (the XML provider). The XML provider was very specific to the SEE-IT product's use case and we didn't see a benefit in trying to morph that, so we looked back at LINK-IT to see what we could do to make it a publicly supported interface. We began work on documenting and packaging it into F5's first public SDK.

About that time, a new standard was emerging for exchanging structured information. The Simple Object Access Protocol (SOAP), which allows for structured information exchange, was being developed but not fully ratified until version 1.2 in 2003. I had to choose between rolling our own XML implementation or making use of this new proposed specification. There was risk, as the specification was not a standard yet, but I made the choice to go the SOAP route as I felt that picking a standard format would give us the best 3rd party software compatibility down the road. Our CORBA interface was built on a nice class model, which I used as a basis for a SOAP/XML wrapper on top of that code. I even had a great code name for the interface: X-LINK-IT! For those who were around when I gave my "XML at F5" presentation to F5 Product Development, you may remember the snide comments going around afterwards about how XML was not a great technology and a big mistake to support. Good thing I didn't listen to them...

At this point in mid-2001, the LINK-IT SDK was ready to go and development of X-LINK-IT was well underway. Well, let's just say that Marketing didn't agree with our ingenious product naming and jumped in to VETO our internal code names for our public release. I'll give our Chief Marketer Jeff Pancottine credit for coining the term "iControl", which was explained to me as "Internet Control".
This was the start of F5's whole Internet Controlled Architecture messaging, by the way. So, LINK-IT and X-LINK-IT were dead, and iControl CORBA and iControl SOAP were born.

The Death of CORBA, All Hail SOAP

The first version of the iControl SDK for CORBA was released on 5/25/2001, with the SOAP version trailing it by a few months. This was around the BIG-IP version 3 time frame. We chugged along for a few years through the BIG-IP version 4 life, and then a big event occurred that was the demise of CORBA - well, actually 2 events.

The first event was the full rewrite of the BIG-IP data plane when TMOS was introduced in BIG-IP version 9 (we skipped from version 4 to version 9 for some reason that slips my mind). Since virtually the entire product was rewritten, the interfaces that were tied to the product features would have to change drastically. We used this as an opportunity to look at the next evolution of iControl. Until this point, iControl SOAP was just a shim on top of CORBA, and it had some performance issues, so we worked at splitting them apart and having SOAP talk directly to our configuration engine. Now we had 2 interface stacks side by side.

The second event was learning we only had 1 confirmed customer using the CORBA interface, compared to the hundreds using SOAP. Given that knowledge, and now that BIG-IP and 3-DNS no longer used iControl CORBA to talk to each other, the decision was made to End of Life iControl CORBA with version 9.0. But iControl SOAP still used the CORBA IDL files for its API definitions and documentation, so fun trivia note: the same CORBA tools are still in place in today's iControl SOAP build tools that were there in version 3 of BIG-IP. I'm fairly sure that is the longest running component in our build system.

The Birth of DevCentral

I can't speak about iControl without mentioning DevCentral. We had our iControl SDK out but nowhere to directly support the developers using it. At that time, F5 was a "hardware" company and product support wasn't ready to support application developers. Not many know that DevCentral was created due to the popularity of iControl with our customer base and was born on a PC under my desk in 2003. I continued to help with DevCentral part time for a few years, but in 2007 I decided to work full time on building our community and focusing 100% on DevCentral. It was about this time that we were pushing the idea of merging the application and infrastructure teams together - or at least getting them to talk more frequently. This was a precursor to the whole DevOps mentality, so I'd like to think we were a bit ahead of the curve on that.

Enter iControl REST

In 2013, iControl was reaching its teenage years and starting to show its age a bit. While SOAP is still supported by all the major tool vendors, application development was shifting to richer browser-based apps. And with that, Representational State Transfer (REST) was gaining steam. REST defines a usage pattern for using browser-based mechanisms with HTTP to access objects across the network, with a JavaScript Object Notation (JSON) format for the content. To keep up with current technologies, the PD team at F5 developed the first REST/JSON interface in BIG-IP version 11.5 as Early Access, which was made Generally Available in version 11.6. With the REST interface, more modern web paradigms could be supported and you could actually code to iControl directly from a browser!
There were also additional interface-based tools for developing and debugging built directly into the service to give service listings and schema definitions. At the time of this writing, F5 supports both of the main iControl interfaces (SOAP and REST) but is focusing all new energy on our REST platform for the future. For those who have developed SOAP integrations, have no fear, as that interface is not going away. It will just likely not get all the new feature support that will be added into the REST interfaces over time.

SDKs, Toolkits, and Libraries

Through the years, I've developed several client libraries (.Net, PowerShell, Java) for iControl SOAP to assist with ramp-up time for initial development. There have also been numerous other language-based libraries for languages like Ruby, PHP, and Python developed by the community and other development teams. Most recently, F5 has published the iControl library for Python, which is available as part of our OpenStack integration. DevCentral is your place for our API documentation, where we update our wiki with API changes on each major release. And as time rolls on, we are adding REST support for new and existing products such as iWorkflow, BIG-IQ, and other products yet to be released that will include SDKs and other reference material.

F5 has a strong commitment to our end users and their automation and integration projects with F5 technologies. Coming full circle, my current role is overseeing APIs and SDKs for standards, consistency, and completeness in our Programmability and Orchestration (P&O) team, so keep a look out for future articles from me on our efforts on that front. And REST assured, we will continue to do all we can to help our customers move to new architectures and deployment models with programmability and automation with F5 technologies.

Getting Started with iControl: Working with the System
So far throughout this Getting Started with iControl series, we've worked our way through history, libraries, configuration objects, and statistics. In this final article in the series, we'll tackle a few system functions, namely system NTP and DNS settings, and then generating a qkview. Some of the system functions are new functionality for iControl with the REST portal, as system calls are not available via SOAP. The rule of thumb here is that if the system call is available in tmsh via the util branch, it's available in the REST portal.

System DNS Settings

When setting up the BIG-IP, some functions require an active DNS configuration to perform lookups. In the GUI, this is configured by navigating to System->Configuration->Device->DNS. In tmsh, it's configured by modifying the settings in /sys/dns. Note that there is no create function here, as the DNS objects themselves already exist, whether or not you have configured them. Because of this, when updating these settings via iControl REST, you'll use the PUT method. An example tmsh configuration for the DNS settings is below:

```
[root@ltm3:Active:Standalone] config # tmsh list sys dns
sys dns {
    name-servers { 10.10.10.2 10.10.10.3 }
    search { test.local }
}
```

This, formatted in JSON, looks like:

```json
{"nameServers": ["10.10.10.2", "10.10.10.3"], "search": ["test.local"]}
```

with a PUT to https://hostname/mgmt/tm/sys/dns.

Powershell DNS Settings

```powershell
#----------------------------------------------------------------------------
function Get-systemDNS()
#
# Description
#   This function retrieves the system DNS configuration
#
#----------------------------------------------------------------------------
{
    $uri = "/mgmt/tm/sys/dns";
    $link = "https://$Bigip$uri";
    $headers = @{};
    $headers.Add("ServerHost", $Bigip);

    $secpasswd = ConvertTo-SecureString $Pass -AsPlainText -Force
    $mycreds = New-Object System.Management.Automation.PSCredential ($User, $secpasswd)

    $obj = Invoke-RestMethod -Method GET -Headers $headers -Uri $link -Credential $mycreds

    Write-Host "`nName Servers";
    Write-Host "------------";
    $items = $obj.nameServers;
    for($i=0; $i -lt $items.length; $i++) {
        $name = $items[$i];
        Write-Host "`t$name";
    }
    Write-Host "`nSearch Domains";
    $items = $obj.search;
    for($i=0; $i -lt $items.length; $i++) {
        $name = $items[$i];
        Write-Host "`t$name";
    }
    Write-Host "`n"
}

#----------------------------------------------------------------------------
function Set-systemDNS()
#
# Description
#   This function sets the system DNS configuration
#
#----------------------------------------------------------------------------
{
    param(
        [array]$Servers,
        [array]$SearchDomains
    );
    $uri = "/mgmt/tm/sys/dns";
    $link = "https://$Bigip$uri";
    $headers = @{};
    $headers.Add("ServerHost", $Bigip);
    $headers.Add("Content-Type", "application/json");

    $obj = @{
        nameServers = $Servers
        search = $SearchDomains
    };
    $body = $obj | ConvertTo-Json

    $secpasswd = ConvertTo-SecureString $Pass -AsPlainText -Force
    $mycreds = New-Object System.Management.Automation.PSCredential ($User, $secpasswd)

    $obj = Invoke-RestMethod -Method PUT -Uri $link -Headers $headers -Credential $mycreds -Body $body;

    Get-systemDNS;
}
```

Python DNS Settings

```python
def get_sys_dns(bigip, url):
    try:
        dns = bigip.get('%s/sys/dns' % url).json()
        print "\n\n\tName Servers:"
        for server in dns['nameServers']:
            print "\t\t%s" % server
        print "\n\tSearch Domains:"
        for domain in dns['search']:
            print "\t\t%s\n\n" % domain
    except Exception, e:
        print e

def set_sys_dns(bigip, url, servers, search):
    servers = [x.strip() for x in servers.split(',')]
    search = [x.strip() for x in search.split(',')]
    payload = {}
    payload['nameServers'] = servers
    payload['search'] = search
    try:
        bigip.put('%s/sys/dns' % url, json.dumps(payload))
        get_sys_dns(bigip, url)
    except Exception, e:
        print e
```

System NTP Settings

NTP is another service that some functions, like APM access, require. In the GUI, the settings for NTP are configured in two places. First, the servers are configured in System->Configuration->Device->NTP. The timezone is configured on the System->Platform page. In tmsh, the settings are configured in /sys/ntp. Like DNS, these objects already exist, so the iControl REST method will be a PUT instead of a POST. An example tmsh configuration for the NTP settings is below:

```
[root@ltm3:Active:Standalone] config # tmsh list sys ntp
sys ntp {
    servers { 10.10.10.1 }
    timezone America/Chicago
}
```

This, formatted in JSON, looks like:

```json
{"servers": ["10.10.10.1"], "timezone": "America/Chicago"}
```

with a PUT to https://hostname/mgmt/tm/sys/ntp.

Powershell NTP Settings

```powershell
#----------------------------------------------------------------------------
function Get-systemNTP()
#
# Description
#   This function retrieves the system NTP configuration
#
#----------------------------------------------------------------------------
{
    $uri = "/mgmt/tm/sys/ntp";
    $link = "https://$Bigip$uri";
    $headers = @{};
    $headers.Add("ServerHost", $Bigip);

    $secpasswd = ConvertTo-SecureString $Pass -AsPlainText -Force
    $mycreds = New-Object System.Management.Automation.PSCredential ($User, $secpasswd)

    $obj = Invoke-RestMethod -Method GET -Headers $headers -Uri $link -Credential $mycreds

    Write-Host "`nNTP Servers";
    Write-Host "------------";
    $items = $obj.servers;
    for($i=0; $i -lt $items.length; $i++) {
        $name = $items[$i];
        Write-Host "`t$name";
    }
    Write-Host "`nTimezone";
    $item = $obj.timezone;
    Write-Host "`t$item`n";
}

#----------------------------------------------------------------------------
function Set-systemNTP()
#
# Description
#   This function sets the system NTP configuration
#
#----------------------------------------------------------------------------
{
    param(
        [array]$Servers,
        [string]$TimeZone
    );
    $uri = "/mgmt/tm/sys/ntp";
    $link = "https://$Bigip$uri";
    $headers = @{};
    $headers.Add("ServerHost", $Bigip);
    $headers.Add("Content-Type", "application/json");

    $obj = @{
        servers = $Servers
        timezone = $TimeZone
    };
    $body = $obj | ConvertTo-Json

    $secpasswd = ConvertTo-SecureString $Pass -AsPlainText -Force
    $mycreds = New-Object System.Management.Automation.PSCredential ($User, $secpasswd)

    $obj = Invoke-RestMethod -Method PUT -Uri $link -Headers $headers -Credential $mycreds -Body $body;

    Get-systemNTP;
}
```

Python NTP Settings

```python
def get_sys_ntp(bigip, url):
    try:
        ntp = bigip.get('%s/sys/ntp' % url).json()
        print "\n\n\tNTP Servers:"
        for server in ntp['servers']:
            print "\t\t%s" % server
        print "\n\tTimezone: \n\t\t%s" % ntp['timezone']
    except Exception, e:
        print e

def set_sys_ntp(bigip, url, servers, tz):
    servers = [x.strip() for x in servers.split(',')]
    payload = {}
    payload['servers'] = servers
    payload['timezone'] = tz
    try:
        bigip.put('%s/sys/ntp' % url, json.dumps(payload))
        get_sys_ntp(bigip, url)
    except Exception, e:
        print e
```
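If you'd rather kick the tires before scripting anything, the same update can be made with a one-off curl call using the JSON payload shown above; the hostname and credentials here are placeholders:

```
curl -sku admin:admin -H "Content-Type: application/json" -X PUT \
     -d '{"servers": ["10.10.10.1"], "timezone": "America/Chicago"}' \
     https://<bigip>/mgmt/tm/sys/ntp
```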
Generating a qkview

Moving on from configuring some system settings to running a system task, we'll now focus on a common operational task: running a qkview. In the GUI, this is done via System->Support. This is easy enough at the command line as well, with the simple qkview command at the bash prompt, or via tmsh as shown below:

```
[root@ltm3:Active:Standalone] config # tmsh run util qkview
Gathering System Diagnostics: Please wait ...
Diagnostic information has been saved in:
  /var/tmp/ltm3.test.local.qkview
Please send this file to F5 support.
```

This, formatted in JSON, looks like:

```json
{"command": "run"}
```

with a POST to https://hostname/mgmt/tm/util/qkview.

Powershell - Run a qkview

```powershell
#----------------------------------------------------------------------------
function Gen-QKView()
#
# Description
#   This function generates a qkview on the system
#
#----------------------------------------------------------------------------
{
    $uri = "/mgmt/tm/util/qkview";
    $link = "https://$Bigip$uri";
    $headers = @{};
    $headers.Add("serverHost", $Bigip);
    $headers.Add("Content-Type", "application/json");

    $secpasswd = ConvertTo-SecureString $Pass -AsPlainText -Force
    $mycreds = New-Object System.Management.Automation.PSCredential ($User, $secpasswd)

    $obj = @{
        command='run'
    };
    $body = $obj | ConvertTo-Json

    Write-Host ("Running qkview...standby")
    $obj = Invoke-RestMethod -Method POST -Uri $link -Headers $headers -Credential $mycreds -Body $body;
    Write-Host ("qkview is complete and is available in /var/tmp.")
}
```

Python - Run a qkview

```python
def gen_qkview(bigip, url):
    payload = {}
    payload['command'] = 'run'
    try:
        print "\n\tRunning qkview...standby"
        qv = bigip.post('%s/util/qkview' % url, json.dumps(payload)).text
        if 'saved' in qv:
            print '\tqkview is complete and available in /var/tmp.'
    except Exception, e:
        print e
```

Going Further

Now that you have a couple basic tasks in your arsenal, what else might you add? Here are a couple exercises to build out your toolbox (a sketch of the first one follows this list):

- Configure sshd, creating a custom security banner.
- Configure snmp.
- Extend the qkview function to support downloading the generated qkview file.
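Here's a minimal, untested sketch of that first exercise in the article's Python style. It assumes the sys/sshd REST resource exposes the tmsh banner and banner-text properties as banner and bannerText; verify the attribute names against the tmsh reference for your version before relying on it:

```python
def set_sshd_banner(bigip, url, text):
    # assumed attribute names, mirroring:
    #   tmsh modify sys sshd banner enabled banner-text "..."
    payload = {'banner': 'enabled', 'bannerText': text}
    try:
        bigip.put('%s/sys/sshd' % url, json.dumps(payload))
    except Exception, e:
        print e

# usage: set_sshd_banner(bigip, url, 'Authorized access only!')
```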
Resources

All the scripts for this article are available in the codeshare under "Getting Started with iControl Code Samples."

Getting Started with iControl: Interface Taxonomy & Language Libraries

In part two of this Getting Started with iControl series, we will cover the taxonomy for both the SOAP and REST iControl interfaces that were discussed in the previous article on iControl history, as well as introduce the language libraries we unofficially support on DevCentral.

Note: This article was published by Jason Rahm but co-authored with Joe Pruitt.

Taxonomy

The two interfaces were designed in different ways and with different goals. iControl SOAP was developed as a functional API, behaving very similarly to a Java or C++ class where you have interfaces and methods included within those interfaces. Interfaces roughly related to product features, and for organization, those interfaces were then grouped into larger namespaces, or product modules. To get a little more specific, iControl SOAP implements an RPC (Remote Procedure Call) type model. To make use of the APIs, one just instantiated an instance of the interface and then used functional (create(), get_list(), ...) and accessor (get_*(), set_*(), ...) methods with it.

iControl REST, on the other hand, is based on a document model where documents are passed back and forth containing all the relevant information of the given object(s) in question. This interface makes use of the HTTP protocol's verbs (GET, POST, PUT, PATCH, DELETE) for actions and URIs (e.g. /mgmt/tm/ltm/pool) for the objects you are manipulating. Instead of using methods to perform actions, you use the associated HTTP verb to trigger the action, the URI to denote the object, and the POST data or query parameters to indicate the object's details. iControl REST is currently modeled off of the tmsh shell, which uses similar modules and interfaces to SOAP (with different abbreviations) in its URI paths.

iControl SOAP:

    Module       Interface   Methods
    LocalLB      Pool        get_list(), create(), add_member()
    Networking   SelfIP      get_netmask(), set_vlan()

iControl REST:

    Module   Sub-Module   Component
    ltm      pool         <pool name>
    net      self         <self name>

Now for some simple examples using the SOAP and REST interfaces with python libraries. Don't worry if python's not your thing, we'll cover some perl, powershell, java, and Node.js as well in the following articles in this series.

```python
### ignore SSL cert warnings
>>> import ssl
>>> ssl._create_default_https_context = ssl._create_unverified_context

### SOAP
>>> import bigsuds
>>> b_soap = bigsuds.BIGIP('172.16.44.15', 'admin', 'admin')
>>> b_soap.LocalLB.Pool.get_list()
['/Common/myNewPool2']

### REST
>>> from f5.bigip import BigIP as f5
>>> b_rest = f5('172.16.44.15', 'admin', 'admin')
>>> b_rest.ltm.pools.get_collection()[0].name
u'myNewPool2'
```

Note there really isn't much difference between the two iControl interfaces when it comes to the client code. This is the beauty of libraries: they mask the hard work! We'll dig into the code more in the next article, so don't worry too much about the code above just yet.

Interfaces & Objects

In the above section, we briefly outlined the high-level taxonomy between module and interface. There are literally thousands of defined methods for the SOAP interface, all defined and published in the iControl wiki on DevCentral. For the REST interface, you will want to reference the iControl REST wiki as well, but you will really want to become intimate with the tmsh reference guide, as a solid understanding of how tmsh works will make for much smoother going.

Language Libraries

Raw SOAP payload is out of the question unless you enjoy pain, so a library is a given if using the SOAP interface.
But even with REST, as easy as it is to interact with the interface directly, the fact is that if you aren't using a library, you still have to create all your JSON documents and investigate and prepare all the payload requirements when manipulating or creating objects. When selecting SOAP or REST, the reality is that if you're going to build applications that utilize iControl, you're definitely going to want to use a library.

What is a Library?

Depending on the language or languages you might use, a library can be called many things, such as a package, a wrapper, a module, a class, etc. But at its most basic, a library is simply a collection of tools and/or functionality that abstracts some of the work required in understanding the underlying protocols and assists in getting things done faster. For example, if you are formatting an office document and changing the font size, color, family, and bolding it on select words throughout the doc, it could be painful to do this if you have to do it more than a few times. You could record a macro of these steps, and then for each subsequent task, replay that work via the macro so you only have to do it once. In a roundabout way, this is what libraries do for you as well. The common repetitive work is done for you, so you can focus on the business logic in your applications.

What Libraries Are Available?

There are several libraries available, some languages having more than one to support both iControl SOAP and iControl REST, and at least one (like python) that has several libraries for just the SOAP interface. To each his own; we like to expand the horizons around here and equip everyone for whatever language best suits your needs and abilities. We could list out all the libraries here, but they are already documented on the programmability page in the wiki, at least the ones we are directly responsible for. Joe does a lot with PowerShell and Perl, and I with python, and we do have quite a few community members well versed in the .Net languages and python too, though there is some support for java and ruby as well. On Github I've seen an iControl wrapper for the Go language, and I seem to recall some love for PHP a while back.
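To make the earlier point about hand-building JSON concrete, here's roughly what creating a pool looks like with nothing but the requests library — a sketch rather than a supported pattern, reusing the hypothetical host and credentials from the example above:

```python
import json
import requests

requests.packages.urllib3.disable_warnings()  # lab only: squelch self-signed cert warnings

b = requests.session()
b.auth = ('admin', 'admin')
b.verify = False
b.headers.update({'Content-Type': 'application/json'})

# without a library, every call means building the URI and JSON document yourself
payload = {'name': 'myNewPool2',
           'members': [{'name': '192.168.101.11:80', 'address': '192.168.101.11'}]}
resp = b.post('https://172.16.44.15/mgmt/tm/ltm/pool', data=json.dumps(payload))
print resp.json()['fullPath']   # /Common/myNewPool2
```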
Now that we're through the history and some foundational information on the SOAP and REST interfaces, we can move on to actually building something! In the next article in this series, we'll jump into some code samples to highlight the pros and cons of working with SOAP and REST, as well as show you the differences among a few languages for each interface in accomplishing the same tasks.

Microservices and HTTP/2

It's all about that architecture. There are a lot of things we do to improve the performance of web and mobile applications. We use caching. We use compression. We offload security (SSL and TLS) to a proxy with greater compute capacity. We apply image optimization and minification to content. We do all that because performance is king. Failure to perform can be, for many businesses, equivalent to an outage, with increased abandonment rates and angry customers taking to the Internet to express their extreme displeasure.

The recently official HTTP/2 specification takes performance very seriously, and introduced a variety of key components designed specifically to address the need for speed. One of these was to base the newest version of the Internet's lingua franca on SPDY. One of the impacts of this decision is that connections between the client (whether tethered or mobile) and the app (whether in the cloud or on big iron) are limited to just one. One TCP connection per app.

That's a huge divergence from HTTP/1, where it was typical to open 2, 4 or 6 TCP connections per site in order to take advantage of broadband. And it worked for the most part because, well, broadband. So it wouldn't be a surprise if someone interprets that one-connection-per-app limitation as a negative in terms of app performance.

There are, of course, a number of changes in the way HTTP/2 communicates over that single connection that ultimately should counteract any potential negative impact on performance from the reduction in TCP connections. The elimination of the overhead of multiple DNS lookups (not insignificant, by the way) as well as TCP-related impacts from slow start and session setup is certainly a boon in terms of application performance, as is a more forgiving exchange of frames under the covers. The ability to just push multiple responses to the client without having to play the HTTP acknowledgement game is significant in that it eliminates one of the biggest performance inhibitors of the web: latency arising from too many round trips. We've (as in the corporate We) seen gains of 2-3 times the performance of HTTP/1 with HTTP/2 during testing. And we aren't alone; there's plenty of performance testing going on out there, on the Internets, showing similar improvements.

Which is why it's important (very important) that we not undo all the gains of HTTP/2 with an architecture that mimics the behavior (and performance) of HTTP/1.

Domain Sharding and Microservices

Before we jump into microservices, we should review domain sharding, because the concept is important when we look at how microservices are actually consumed and delivered from an HTTP point of view. Scalability patterns (i.e. architectures) include the notion of Y-axis scale, which is a sharding-based pattern. That is, it creates individual scalability domains (or clusters, if you prefer) based on some identifiable characteristic in the request. User identification (often extracted from an HTTP cookie) and URL are commonly used information upon which to shard requests and distribute them to achieve greater scalability.

An incarnation of the Y-axis scaling pattern is domain sharding. Domain sharding, for the uninitiated, is the practice of distributing content to a variety of different host names within a domain. This technique was (and probably still is) very common for overcoming connection limitations imposed by HTTP/1 and its supporting browsers. You can see evidence of domain sharding when a web site uses images.example.com and scripts.example.com and static.example.com to optimize page or application load time. Connection limitations were by host (origin server), not domain, so this technique was invaluable in achieving greater parallelization of data transfers that made it appear, at least, that pages were loading more quickly. Which made everyone happy.

Until mobile came along. Then we suddenly began to realize the detrimental impact of introducing all that extra latency (every connection requires a DNS lookup and a TCP handshake, and suffers the performance impacts of TCP slow start) on a device with much more limited processing (and network) capability. I'm not going to detail the impact; if you want to read about it in more detail I recommend some material from Steve Souders and Tom Daly or Mobify on the subject. Suffice to say, domain sharding has an impact on mobile performance, and it is rarely a positive one.

You might think, well, HTTP/2 is coming and all that's behind us now. Except it isn't. Microservice architectures, in theory if not in practice, are ultimately a sharding-based application architecture that, if we're not careful, can translate into a domain-sharding-based network architecture that ultimately negates any of the performance gains realized by adopting HTTP/2. That means the architectural approach you (that's you, ops) adopt to delivering microservices can have a profound impact on the performance of applications composed from those services.

The danger is not that each service will be its own (isolated and localized) "domain", because that's the whole point of microservices in the first place. The danger is that those isolated domains will be presented to the outside world as individual, isolated domains, each requiring its own personal, private connection by clients. Even if we assume there are load balancing services in front of each service (a good assumption at this point), that still means direct connections between the client and each of the services used by the client application, because the load balancing service acts as a virtual service but does not eliminate the isolation. Each one is still its own "domain" in the sense that it requires a separate, dedicated TCP connection. This is essentially the same thing as domain sharding, as each host requires its own IP address to which the client can connect, and its behavior is counterproductive to HTTP/2*.

What we need to do to keep the benefits of a single, optimized TCP connection while being able to shard the back end is to architect a different solution in the "big black box" that is the network. To be precise, we need to take advantage of the advanced capabilities of a proxy-based load balancing service rather than a simple load balancer.

An HTTP/2 Enabling Network Architecture for Microservices

That means we need to enable a single connection between the client and the server and then utilize capabilities like Y-axis sharding (content switching, L7 load balancing, etc.) in "the network" to maintain the performance benefits of HTTP/2 to the client while enabling all the operational and development benefits of a microservices architecture. What we can do is insert a layer 7 load balancer between the client and the local microservice load balancers.
The client side maintains the single connection in the manner specified (and preferred) by HTTP/2: it requires only a single DNS lookup and one TCP session start up, and incurs the penalties from TCP slow start only once. On the service side, the layer 7 load balancer also maintains persistent connections to the local, domain load balancing services, which further reduces the impact of session management on performance. Each of the local, domain load balancing services can be optimized to best distribute requests for its service. Each maintains its own algorithm and monitoring configurations, unique to the service, to ensure optimal performance.

This architecture is only minimally different from the default, but the insertion of a layer 7 load balancer capable of routing application requests based on a variety of HTTP variables (such as the cookies used for persistence or to extract user IDs, or the unique verb or noun associated with a service from the URL of a RESTful API call) results in a network architecture that closely maintains the intention of HTTP/2 without requiring significant changes to a microservice-based application architecture.

Essentially, we're combining X- and Y-axis scalability patterns to architect a collaborative operational architecture capable of scaling and supporting microservices without compromising on the technical aspects of HTTP/2 that were introduced to improve performance, particularly for mobile applications. Technically speaking we're still doing sharding, but we're doing it inside the network and without breaking the one TCP connection per app specified by HTTP/2. Which means you get the best of both worlds - performance and efficiency.

Why DevOps Matters

The impact of new architectures - like microservices - on the network and the resources (infrastructure) that deliver those services is not always evident to developers or even ops. That's one of the reasons DevOps as a cultural force within IT is critical: it engenders a breaking down of the isolated silos between ops groups that exist (all four of them) and enables greater collaboration that leads to more efficient deployment, yes, but also more efficient implementations. Implementations that don't necessarily cause performance problems that require disruptive modification to applications or services. Collaboration in the design and architectural phases will go a long way towards improving not only the efficacy of the deployment pipeline but the performance and efficiency of applications across the entire operational spectrum.

* It's not good for HTTP/1, either, as in this scenario there is essentially no difference** between HTTP/1 and HTTP/2.

** In terms of network impact. HTTP/2 still receives benefits from its native header compression and other performance features.

F5 Programmability for Eclipse - Installation Instructions
It’s been a long time coming, but I’m pleased to announce a brand new editor for your iRules: F5 Programmability for Eclipse! This editor supports iRules & iRules LX in its initial release, but stay tuned for more features down the road. One of the cool features from my quick glance this afternoon is simultaneous multi-system support. F5 Programmability for Eclipse F5 Programmability for Eclipse version 1.0 allows you to use the Eclipse IDE to manage iRules and iRuleLX development. By using Eclipse, you can connect to one or more BIG-IP devices to view, modify or create iRules or iRuleLX workspaces. The editor functionality includes TCL/iRules and JavaScript language syntax highlighting, code completion, and hover documentation for the iRules API. Note: This tool is provided via DevCentral as a free tool to help you better leverage iRules, and is in no way officially supported by F5 or F5 Professional Services. All support, questions, comments or otherwise for this editor should be submitted in the Q&A section of DevCentral. Please make sure to tag irules and/or iruleslx, as well as adding a custom editor tag. System Requirements The system requirements for the F5 Programmability for Eclipse are: Eclipse installed – Minimum version: Luna (v4.4) Recommended: Mars (v4.5) or Neon (v4.6) Java version 1.7 or later installed Network access to one or more F5 Networks BIG-IP systems (TMOS version 12.1+) Verified Software Combinations F5 Programmability for Eclipse, version 1.0, has been tested for compatibility with the following configurations: OS Eclipse Version Java Version TMOS Version Linux (CentOS 6.3) Mars 4.5.1 1.7.0_70 12.1 Linux (CentOS 6.6) Mars 4.5.1 1.7.0_79 12.1 Linux (CentOS 6.7) Mars 4.5.2 1.7.0_101 12.1 MacOS (v10.8) Mars 4.5.2 1.8.0_91 12.1 MacOS (v10.11.5) Mars 4.5.2 1.8.0_91 12.1 MacOS (v10.11.5) Neon 4.6 1.8.0_91 12.1 Windows (v7) Luna 4.4.2 1.8.0_91 12.1 Windows (v7) Mars 4.5.2 1.8.0_91 12.1 Windows (v7) Neon 4.6 1.8.0_91 12.1 Installation To install the F5 Programmability for Eclipse plug-in, Start the version of Eclipse that you installed. Click on Help > Install New Software… Verify that the Mars or Neon release URL is listed in the Works with drop down. If you are running the Luna version, or the release URL is not listed, Click Add… Type the text “http://download.eclipse.org/releases/mars” into the Location field (omit the quotation marks) Click OK Add a repository for the F5 plug-in Click Add… Type the text “http://cdn.f5.com/product/f5-eclipse/v1.0.0” into the Location field (omit the quotation marks) Click OK After you add the repository, F5 Networks or F5 Programmability for Eclipse should apppear in the Available Software dialog. Check the box next to the F5 item. Check the Contact all update sites box and click Next After you review the installation items, click Next. Read the License Agreement, check I accept the terms of the license agreement, and click Finish When prompted to restart Eclipse, click Yes Common tasks Select the F5 perspective To use the F5 plug-in, you must activate the F5 perspective. In Eclipse, Click on Window > Perspective > Open Perspective > Other… Find and select the F5 perspective Click OK Connect to a BIG-IP system In the F5 perspective you chose there are three views: Explorer pane on the left hand side, Editor pane on the right hand side, and a Log panel along the bottom. The Explorer pane includes a toolbar with 3 buttons. 
On the iRules & iRuleLX tab, Click the New BIG-IP Connection toolbar button (at the top of the Explorer pane) When prompted, provide the IP address and credentials for the BIG-IP system. You may store your credentials in a secure store. If you use the secure store, you must enter a master password for each session. Click Finish When you connect to a BIG-IP system, all LTM iRules, GTM iRules, and iRuleLX workspaces are loaded. The process may take a few seconds. When the process completes, you can expand the connection folder and subfolders. You can connect to multiple BIG-IP systems simultaneously. Note that folders exist for provisioned product modules on the BIG-IP system to which you connected. If you have not provisioned GTM or iRuleLX, no folder appears for that module. Note: To access a Big-IP system remotely via Eclipse, the user account must have been assigned a role of administrator. Every Big-IP system includes a user with the name admin which has an administrator role. For creating additional users with an administrator role, refer to the BIG-IP guide for User Account Administration. Change the BIG-IP partition The default partition Common is loaded when you connect to a BIG-IP system. If you would like to use a different partition, In the Explorer, select the BIG-IP Connection Click either the Gears toolbar button, or use the context menu (right click) to open the Project Properties dialog. Select a new partition Click OK. Content will be loaded from the BIG-IP partition you selected. Open an iRule or iRuleLX file to view or edit Click Open in the context menu, or double-click on the file in the Explorer. The content is pulled from the BIG-IP and an edit tab is created. This step creates a file on the local filesystem. Add or delete iRules, ILX workspaces, extensions, and files Use the context menu (right click) for Add and Delete menu items. These items are available in folders that contain iRules and iRuleLX resources. Saving an iRule or iRuleLX file Use File > Save, the Save button on the toolbar, or type Ctrl-S. Any of these actions save the file locally and pushes the changes to the BIG-IP system. Reloading an iRules or iRuleLX file Each file has a Reload context menu selection. Click Reload to reload the edit buffer and sync the changes with the BIG-IP system. Reloading all BIG-IP content To reload all content from each connected BIG-IP system, click Reload All toolbar button in the Explorer. This action closes all open editors after prompting you to save any unsaved changes. General information about iRule editing features The editor used for editing iRules includes code completion and hover documentation for iRule commands and events. Consistent with the default Eclipse behavior, typing Ctrl-Space will check for available completion proposals for the current word being type. Also, completions will be invoked after adding a colon (“:”) to a word. This is handy for completions of namespaced iRule commands, e.g. TCP::. The same completion popup documentation can be seen by hovering over an iRule command or event. The F5 plug-in does not include the man pages for the standard Tcl commands. To enable hover documentation for those standard Tcl commands, you can download the man pages and link them in to the Tcl and iRule editors. The manual pages can be downloaded from here: Tcl. Once uncompressed, the containing directory can be linked to Eclipse via the global preferences dialog (Window > Preferences for Linux and Windows, Eclipse > Preferences for MacOS). 
The F5 plug-in does not include the man pages for the standard Tcl commands. To enable hover documentation for those standard Tcl commands, you can download the man pages and link them into the Tcl and iRule editors. The manual pages can be downloaded from here: Tcl. Once uncompressed, the containing directory can be linked into Eclipse via the global preferences dialog (Window > Preferences for Linux and Windows, Eclipse > Preferences for MacOS): under the Tcl > Man Pages section, click Configure…, then click Add. Specify a name of your choosing for the documentation set and add the file path of the local directory that contains the docs.

JavaScript editing

Opening a JavaScript file within an iRuleLX project invokes a dedicated JavaScript editor (from the JSDT plug-in). This editor includes syntax highlighting, validation, completions, etc. The editing features can be configured via the global preferences dialog under the JavaScript section (Window > Preferences for Linux and Windows, Eclipse > Preferences for MacOS).

File editing

File types other than iRule or JavaScript are opened with whatever editor Eclipse identifies as appropriate. Depending on the file associations you have configured for your operating system, Eclipse may decide to open an external editor. To override this and choose the editor yourself, define a file association via the global preferences dialog under the General > Editors > File Associations section.

Known Issues

- ID 592459: F5 Perspective dialogs do not include Help button content.
- ID 594055: The formatter page within the Project Properties dialog can cause an error: "The currently displayed page contains invalid values". The Eclipse bug is tracked here: https://bugs.eclipse.org/bugs/show_bug.cgi?id=476261
- ID 594413: Code folding within an iRule 'when' block only works for contained comment blocks, not code blocks.
- ID 598035: iRule command completion does not work within Tcl execute statements. The workaround is to compose the statement outside the context of square brackets, then add the brackets later (see the sketch after this list).
- ID 599573: Certain standard Tcl commands which are disabled for iRules (exit, exec, etc.) are highlighted and completed as if they were supported commands for iRules.
- ID 599574: A connection failure during a file Save operation leaves the file appearing saved (the * decorator is removed) when it is in fact out of sync with the corresponding content on the BIG-IP. If a file save is unsuccessful, determine the cause of the failure: check the connection, the status of the BIG-IP, etc. Once that issue is resolved, re-initiating the file save, if successful, leaves both copies of the file in sync. Until a successful save is accomplished, the file on the local disk is maintained, even if Eclipse is shut down, so the edits aren't lost, just not in sync.
- ID 600422: Eclipse running on Windows will not escape a "\" line continuation character, which may cause validation errors upon saving an iRule. The workaround is to avoid line continuations within iRules (also sketched below).
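To make those last two workarounds concrete, here is a minimal sketch (the hostname, URI, and pool name are invented):

when HTTP_REQUEST {
    # ID 598035 workaround: compose the command bare first, where completion
    # works, then wrap it in square brackets once it's complete:
    set host [HTTP::host]

    # ID 600422 workaround: keep the expression on a single line instead of
    # splitting it with "\" line continuations:
    if { $host equals "www.example.com" && [HTTP::uri] starts_with "/app" } {
        pool app_pool
    }
}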
Programmability in the Network: Canary Deployments

#devops The canary deployment pattern is another means of enabling continuous delivery. Deployment patterns (or as I like to call them of late, devops patterns) are good examples of how devops can put into place systems and tools that enable continuous delivery to be, well, continuous. The goal of these patterns is, for the most part, to make sure operations can smoothly move features, functions, releases or applications into production. We've previously looked at the Blue Green deployment pattern and today we're going to look at a variation: Canary deployments.

Canary deployments are applicable when you're running a cluster of servers. In other words, you've got lots and lots of (probably active right now while you're considering pushing that next release) users. What you don't want is the traditional "we're sorry, we're down for maintenance, here's a picture of a funny squirrel to amuse you while you wait" maintenance page. You want to be able to roll out the new release without disruption. Yeah, that's quite the ask, isn't it?

The Canary deployment pattern is an incremental upgrade methodology. First, the build is pushed to a small set of servers to which only a select group of users are directed. If that goes well, the release is pushed to a larger set of servers with a limited set of users. Finally, if that goes well, then the release is pushed out to all servers and all users. If issues occur at any stage, the release is halted - it goes no further. Hence the naming of the pattern - after the miner's canary, used because "its demise provided a warning of dangerous levels of toxic gases".

The trick to implementing this pattern is twofold: first, being able to group the servers used in each step into discrete pools and second, the ability to direct specific sets of users to the appropriate pools. Both capabilities require the ability to execute some logic to perform user-based load balancing. Nolio, in its first Devops Best Practices video, implements Canary deployments by manipulating the pools of servers at the load balancing tier, removing them to upgrade and then reinserting them for testing before moving onto the next phase. If your load balancing solution is programmable, there's no need to actually remove them, as you can simply insert logic to remove them from being selected until they've been upgraded. You can also then insert the logic that determines which users are directed to which pool of servers. If the load balancing platform is really programmable, you can even extend that determination to querying a database for user inclusion in certain groups, such as those you might use to perform AB testing. Such logic might base the decision on IP address (not the best option, but an option), or later, when you're actually rolling out to a percentage of users, you can write logic that randomly selects users based on location or their user name - like sharding, only in reverse - or pretty much anything you can think of. You can even split that further if you're rolling out an update to an API that's used by both mobile and traditional clients, to catch both or neither or specific types in an orderly fashion so you can test methodically - because you want to test methodically when you're using live users as test subjects.
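On a programmable proxy, that selection logic can live in the data path itself. A minimal iRule sketch (the data group and pool names are invented for illustration): members of a named test group always land on the canary pool, and a small random slice of everyone else gets sampled in alongside them:

when HTTP_REQUEST {
    # Users in the (hypothetical) canary_users data group always get the new build
    if { [class match [IP::client_addr] equals canary_users] } {
        pool pool_canary
        return
    }
    # Sample roughly 5% of everyone else into the canary; note that rand() is
    # evaluated per request, so hash on something stable if users must stick
    if { rand() < 0.05 } {
        pool pool_canary
    } else {
        pool pool_stable
    }
}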
The beauty of this pattern is that it allows continuous delivery. Users are never disrupted (if you do it right) and the upgrade occurs in a safely staged, incremental fashion. That enables you to back out quickly if necessary, because you do have a back button plan, right? Right?
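That back button can itself be programmatic. As a sketch (again, the data group, record, and pool names are invented), keep a feature-flag record in a data group and check it on every request, so rolling back is a one-record change rather than a redeploy:

when HTTP_REQUEST {
    # Delete the "canary_enabled" record from the (hypothetical) feature_flags
    # data group to instantly drain all new traffic away from the canary
    if { not [class match "canary_enabled" equals feature_flags] } {
        pool pool_stable
        return
    }
    # ...otherwise fall through to the normal canary selection logic
}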
F5 Synthesis: What about SDN?

#SDN #SDAS How does Synthesis impact and interact with SDN architectures? With SDN top of mind (or at least top of news feeds) of late, it's natural to wonder how F5's architecture, Synthesis, relates to SDN. You may recall that SDN - or something like it - was inevitable due to increasing pressure on IT to improve service velocity in the network to match that of development (agile) and operations (devops). The "network" really had no equivalent, until SDN came along. But SDN did not - and still does not - address service velocity at layers 4-7: the application layers where application services like load balancing, acceleration, access and identity (you know, all the services F5 platforms are known for providing) live. This is not because SDN architectures don't want to provide those services; it's because technically they can't. They're impeded from doing so because the network is naturally bifurcated. There are actually two networks in the data center: the layer 2-3 switching and routing fabric and the layer 4-7 services fabric.

One of the solutions for incorporating application (layer 4-7) services into SDN architectures is service chaining. Now, this works well as a solution for extending the data path to application services, but it does very little to address the operational aspects which, of course, are where service velocity is either improved or not. It's the operational side - the deployment, the provisioning, the monitoring and management - that directly impacts service velocity. Service chaining is focused on how the network makes sure application data traversing the data path flows to and from application services appropriately. All that operational stuff is not necessarily addressed by service chaining. There are good reasons for that, but we'll save enumerating them for another day in the interest of getting to an answer quickly. Suffice to say that service chaining is about execution, not operation.

So something had to fill the gap and make sure that while SDN is improving service velocity for network (layer 2-3) services, the velocity of application services (layer 4-7) was also being improved. That's where F5 Synthesis comes in. We're not replacing SDN; we're not an alternative architecture. Synthesis is completely complementary to SDN and in fact interoperates with a variety of architectures falling under the "SDN" moniker as well as traditional network fabrics. Ultimately, F5's vision is to provide application-protocol-aware data path elements (BIG-IP, LineRate, etc.) that can execute programmatic rules pushed by a centralized control plane (BIG-IQ): a centralized control, decentralized execution model implementing a programmatic application control plane and an application-aware data plane architecture. Bringing the two together offers a comprehensive, dynamic software-defined architecture for the data center that addresses service velocity challenges across the entire network stack (layers 2-7). SDN automates and orchestrates the network and passes the right traffic to the Synthesis High-Performance Services Fabric, which then does what it does best: apply the application services critical to ensuring apps are fast, secure and reliable. In addition to service chaining scenarios, there are orchestration integrations (such as that with VMware's NSX) as well as network integrations such as a cooperative effort between F5, Arista and VMware.
You might have noticed that we specifically integrate with leading SDN architectures and partners like Cisco/Insieme, VMware, HP, Arista, Dell and Big Switch. We're participating in all the relevant standards organizations to help find additional ways to integrate both network and application services in SDN architectures. We see SDN as validating what we (and that's the corporate we) have always believed: networks need to be dynamic, services need to be extensible and programmable, and all the layers of the network stack need to be as agile as the business it supports. We're fans, in other words, and our approach is to support and integrate with SDN architectures to enable customers to deploy a fully software-defined stack of network and application services.

Related articles:
- F5 Synthesis: The Time is Right
- F5 and Cisco: Application-Centric from Top to Bottom and End to End
- F5 Synthesis: Software Defined Application Services
- F5 Synthesis: Integration and Interoperability
- F5 Synthesis: High-Performance Services Fabric
- F5 Synthesis: Leave no application behind
- F5 Synthesis: The Real Value of Consolidation Revealed
- F5 Synthesis: Reference Architectures - Good for What Ails Your Apps
Snippet #7: OWASP Useful HTTP Headers

If you develop and deploy web applications then security is on your mind. When I want to understand a web security topic I go to OWASP.org, a community dedicated to enabling the world to create trustworthy web applications. One of my favorite OWASP wiki pages is the list of useful HTTP headers. This page lists a few HTTP headers which, when added to the HTTP responses of an app, enhance its security practically for free. Let's examine the list…

These headers can be added without concern that they affect application behavior:

- X-XSS-Protection: forces the enabling of cross-site scripting protection in the browser (useful when the protection may have been disabled)
- X-Content-Type-Options: prevents browsers from treating a response differently than the Content-Type header indicates

These headers may need some consideration before implementing:

- Public-Key-Pins: helps avoid *-in-the-middle attacks using forged certificates
- Strict-Transport-Security: enforces the use of HTTPS in your application, covered in some depth by Andrew Jenkins
- X-Frame-Options / Frame-Options: used to avoid "clickjacking", but can break an application; usually you want this
- Content-Security-Policy / X-Content-Security-Policy / X-Webkit-CSP: provides a policy for how the browser renders an app, aimed at avoiding XSS
- Content-Security-Policy-Report-Only: similar to CSP above, but only reports; no enforcement

Here is a script that incorporates three of the above headers, which are generally safe to add to any application:
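(Sketched here as an iRule equivalent; the header choices and values below are the usual recommendations, not pulled from the original snippet.)

when HTTP_RESPONSE {
    # Turn the browser's XSS filter on, and block rather than sanitize
    HTTP::header replace "X-XSS-Protection" "1; mode=block"
    # Stop browsers from MIME-sniffing away from the declared Content-Type
    HTTP::header replace "X-Content-Type-Options" "nosniff"
    # Allow framing only by same-origin pages; review before enabling broadly
    HTTP::header replace "X-Frame-Options" "SAMEORIGIN"
}

Using replace rather than insert means an application that already sets one of these headers won't end up with duplicates: replace updates the existing header, or adds it if absent.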
And that's it: a few lines of code to add about 100 more bytes to the total HTTP response, and enhanced application security! Go get your own FREE license and try it today!

Programmability in the Network: Blue-Green Deployment Pattern

#gluecon #devops Intelligent handling of requests, whether for testing, migration or upgrades, requires programmability in the network. Cory von Wallenstein, CTO of Dyn Inc, gave a great presentation at Glue 2013 on upgrading infrastructure. If you weren't at Glue (or were and missed his presentation) you can check it out on SlideShare. The premise of Dark Architecture is based on Martin Fowler's Blue/Green deployment pattern. Developers will recognize this DevOps pattern as similar to A/B testing patterns. Martin sums it up in his Blue/Green Deployment blog post:

The fundamental idea is to have two easily switchable environments to switch between, there are plenty of ways to vary the details. One project did the switch by bouncing the web server rather than working on the router. Another variation would be to use the same database, making the blue-green switches for web and domain layers.

Cory's application of this pattern, which he calls Dark Architecture, engenders a continuous deployment pattern capable of transitioning from an existing architecture to a new (and potentially in-progress) architecture while retaining the ability to rapidly roll back (or fall back, whichever you prefer).

In both Dark Architecture and A/B testing patterns, the idea is to slowly siphon off some identifiable portion of traffic. In some cases this may be to test a new version of an API, or to enable migration to the new API and client by matching appropriate versions on both the client and the server-side application (or API). In both cases, the match should optimally be made in real-time, meaning that as requests arrive, some "thing" in the network figures out which version of the < API | Application | Architecture > should handle each request.

This requires programmability in the network: the ability to programmatically inspect requests as they flow through the network and determine, based on layer 7 (application layer) properties, how each request should be handled. This requires programmability because the specific information on which you want to base the decision - API version, URI, client device or version, etc. - is likely unique to your application. You need to be able to programmatically extract certain pieces of data from the request and then use them to inform infrastructure elements such as load balancers or proxies on how to direct every request.

This kind of capability is critical to enabling agile business and operations because nothing is as constant as change, and it is nearly impossible today to forklift-upgrade hundreds or thousands or more consumers (or employees) across multiple devices to a new version of the application. But you still have to do it, somehow, unless you're going to bloat the application (or API (or both)) by always and forever, amen, maintaining backwards compatibility to versions before 1.0. That's not feasible in today's rapidly evolving and fast-paced application ecosystems. APIs mature rapidly and often deprecate calls. This same pattern can be used to manage the transition from deprecated API methods to more current methods - including transforming data if necessary. That's because programmability in the network enables a programmatic, development-oriented means of dealing with traffic in a real-time manner.
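In iRule terms, that real-time decision can be as small as this sketch (the pool names and the version-matching rule are invented; in practice you'd match on whatever identifies your new API or client version):

when HTTP_REQUEST {
    # Requests for the v2 API go to the new (green) stack;
    # everything else stays on the current (blue) one
    if { [HTTP::uri] starts_with "/api/v2/" } {
        pool pool_green
    } else {
        pool pool_blue
    }
}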
This is not the programmability often referenced with respect to SDN, which is more about enabling the development of applications that perform a specific task on traffic as it is inspected by the controller. Nor is it the programmability that enables management and automation of network devices through control plane APIs. This is programmability in the data path, while data is flowing, in real-time. It's about routing at layer 7 (the application layer), enabling a dynamic, robust and intelligent network capable of directing traffic based on application details. Programmability in the network is a critical component for devops and developers looking to enable continuous delivery of applications, APIs, and architectures.