F5 Friday: App Proxy or ADC?
(Editor's note: the LineRate product has been discontinued for several years. 09/2023)

Choosing between BIG-IP and LineRate isn't as difficult as it seems...

Our recent announcement of the availability of LineRate Point raised the same question over and over: isn't this just a software version of BIG-IP? How do I know when to choose LineRate Point instead of BIG-IP VE (Virtual Edition)? Aren't they the same??

No, no they aren't. LineRate Point (and really LineRate Precision, too) is more akin to an app proxy, while BIG-IP VE remains, of course, an ADC (Application Delivery Controller). That's not even pedantry, it's core to what each of the two solutions supports - both in their capabilities, their extensibility, and the applications they're designed to deliver services for.

Platforms and Proxies

First, let's remember that an ADC is a platform; that is, it's a software system supporting extensibility through modules. That's why we have BIG-IP. Because BIG-IP is really a platform, its capabilities to deliver software defined application services (SDAS) are enabled through the modules it supports. Whether it's BIG-IP on F5 hardware or BIG-IP VE in the cloud or in virtual machines, it's still an extensible ADC platform.

LineRate Point is a layer 7 load balancer; it's an app proxy. Its primary goal is to serve HTTP/S applications with scalability and security (like SSL and TLS). It's not extensible like BIG-IP VE. There are no "modules" you can deploy to expand its capabilities. It is what it is - a lightweight, load balancing layer 7 app proxy.

Extensibility in the LineRate world is achieved with LineRate Precision, which includes node.js data path programmability (scripting) as a means to create new services and new functionality, and to implement a variety of infrastructure patterns like A/B testing, Canary Deployments, Blue/Green deployments, and more. That's where the confusion with BIG-IP VE usually comes in, because in addition to its platform extensibility, BIG-IP VE also enables data path programmability through iRules.

So how do you choose between the options? There's BIG-IP on F5 hardware, BIG-IP VE (virtual) and cloud (AWS, Azure, Rackspace, IBM, etc...), LineRate Point and Precision for cloud (Amazon EC2) and virtual as well as bare-metal. The best way to choose is to base it on (wait for it, wait for it) the application for which you need services delivered. C'mon, you saw that coming - it's an application world, after all, and F5 is always all about that application.

Applications, Scale and Service Delivery

It really is all about that application. The scale, the nature of the business function the application provides, and the services required to deliver that application are critical components in the choice of what is basically ADC or app proxy. And you know me, a picture is worth at least 1024 words, so here it is:

The first assumption we're making (and I think it's a good assumption) is that if someone deployed an application and is using it within the context of the business (or line of business or department or, well, you get the picture) then it's important enough to need some service. Maybe that's just scale or availability, maybe it needs security or a performance-boosting push, but it probably needs something. What it needs may be dependent on the number of users, the criticality of the application to productivity and profit, and the sensitivity of the data it interacts with.
Given that set of criteria, you can start to see that business critical and major line-of-business applications - ERP, SharePoint, Exchange, etc... - have few instances but thousands of users. These apps require high availability, massive scale, security and often performance boosts. They need multiple application services. That means BIG-IP*, and probably BIG-IP on F5 hardware or, perhaps, the deployment of a High Performance Services Fabric comprised of many BIG-IP VE using F5 Synthesis. Either way, you're talking high capacity BIG-IP.

As we move down the triangular stack, we start running into line-of-business apps that number in the hundreds, and may have hundreds of users. These apps are of two ilks:

1. Those that require multiple application services, and
2. Those that require data path programmability

Now, the app may need one or the other or both. The first question to ask (and this isn't obvious) is: what protocols do the applications support? Yes, that actually is very relevant. LineRate is basically providing app proxy services; that means app protocols like HTTP and HTTPS. Not UDP, not SIP, not RDP or PCoIP. If you need that kind of protocol support, the answer at this layer is BIG-IP VE.

If the answer was HTTP or HTTPS, now you're faced with a second (easy) question: do you need multiple services? Do you need availability (load balancing and failover) plus performance-boosting services like caching and acceleration options? Do you need availability plus identity management (like SSO or SAML)? If you need availability plus, then the answer, again, is you should choose BIG-IP VE.

If you just need availability, now you get into a more difficult decision tree. If you want (or need) data path programmability (such as might be used to patch zero-day security vulnerabilities or do some layer 7 app routing) then the question is: what language do you want to script in? Do you want node.js? Choose LineRate Precision. Want iRules? Choose BIG-IP VE. There's really no "right" or "wrong" answer here, it's a matter of preference (and probably skill set availability or standard practices in your organization).
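If it helps to see that middle-tier decision tree in one place, here's a toy sketch in Python. The function and its inputs are illustrative only - a summary of the reasoning above, not any F5 API:

def choose_platform(protocols, needs_multiple_services, wants_programmability, language):
    # non-HTTP protocols (UDP, SIP, RDP, PCoIP, ...) rule out an app proxy
    if not protocols <= {'HTTP', 'HTTPS'}:
        return 'BIG-IP VE'
    # "availability plus" - caching, acceleration, SSO/SAML - means the ADC platform
    if needs_multiple_services:
        return 'BIG-IP VE'
    # availability only: data path programmability comes down to language preference
    if wants_programmability:
        return 'LineRate Precision' if language == 'node.js' else 'BIG-IP VE'
    return 'LineRate Point'

print(choose_platform({'HTTPS'}, False, True, 'node.js'))  # LineRate Precision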
Finally, you reach the broad bottom of the triangle, where the number of apps may be in the thousands but the users per app is minimal. This is where apps need basic availability but little more. This layer is where orchestration support (robust APIs) becomes as important as the service itself, because continuous delivery (CD) is in play as well as other DevOps-related practices like continuous integration and testing. This environment is often very fluid, highly volatile and always in motion, requiring similar characteristics of any availability services required. In this layer of the enterprise application stack, LineRate Point is your best choice. Coupled with our newly introduced Volume Licensing Subscription (VLS), LineRate Point here offers both the support for the environment (with its robust, proper REST API) and its software or virtual form factor, along with excellent economy of scale.

Hopefully this handy-dandy guide to F5 and enterprise application segmentation helps to sort out the question of whether you should choose BIG-IP, BIG-IP VE or a flavor of LineRate. Happy Friday!

* Oh, I know, you could provide those services with a conga line of point products, but platforms are a significant means of enabling standardization and consolidation, which greatly enhance overall value and lower both operating and capital costs.

Getting Started with iControl: Working with the System
So far throughout this Getting Started with iControl series, we've worked our way through history, libraries, configuration objects, and statistics. In this final article in the series, we'll tackle a few system functions, namely system ntp and dns settings, and then generating a qkview. Some of the system functions are new functionality to iControl with the REST portal, as system calls are not available via SOAP. The rule of thumb here is if the system call is available in tmsh via the util branch, it's available in the REST portal.

System DNS Settings

When setting up the BIG-IP, some functions require an active DNS configuration to perform lookups. In the GUI, this is configured by navigating to System->Configuration->Device->DNS. In tmsh, it's configured by modifying the settings in /sys/dns. Note that there is not a create function here, as the DNS objects themselves already exist, whether or not you have configured them. Because of this, when updating these settings via iControl REST, you'll use the PUT method. An example tmsh configuration for the DNS settings is below:

[root@ltm3:Active:Standalone] config # tmsh list sys dns
sys dns {
    name-servers {
        10.10.10.2
        10.10.10.3
    }
    search {
        test.local
    }
}

This formatted in JSON looks like this:

{"nameServers": ["10.10.10.2", "10.10.10.3"], "search": ["test.local"]}

with a PUT to https://hostname/mgmt/tm/sys/dns.

Powershell DNS Settings

#----------------------------------------------------------------------------
function Get-systemDNS()
#
# Description
#   This function retrieves the system DNS configuration
#
#----------------------------------------------------------------------------
{
  $uri = "/mgmt/tm/sys/dns";
  $link = "https://$Bigip$uri";
  $headers = @{};
  $headers.Add("ServerHost", $Bigip);

  $secpasswd = ConvertTo-SecureString $Pass -AsPlainText -Force
  $mycreds = New-Object System.Management.Automation.PSCredential ($User, $secpasswd)

  $obj = Invoke-RestMethod -Method GET -Headers $headers -Uri $link -Credential $mycreds

  Write-Host "`nName Servers";
  Write-Host "------------";
  $items = $obj.nameServers;
  for($i=0; $i -lt $items.length; $i++)
  {
    $name = $items[$i];
    Write-Host "`t$name";
  }

  Write-Host "`nSearch Domains";
  $items = $obj.search;
  for($i=0; $i -lt $items.length; $i++)
  {
    $name = $items[$i];
    Write-Host "`t$name";
  }
  Write-Host "`n"
}

#----------------------------------------------------------------------------
function Set-systemDNS()
#
# Description
#   This function sets the system DNS configuration
#
#----------------------------------------------------------------------------
{
  param(
    [array]$Servers,
    [array]$SearchDomains
  );

  $uri = "/mgmt/tm/sys/dns";
  $link = "https://$Bigip$uri";
  $headers = @{};
  $headers.Add("ServerHost", $Bigip);
  $headers.Add("Content-Type", "application/json");

  $obj = @{
    nameServers = $Servers
    search = $SearchDomains
  };
  $body = $obj | ConvertTo-Json

  $secpasswd = ConvertTo-SecureString $Pass -AsPlainText -Force
  $mycreds = New-Object System.Management.Automation.PSCredential ($User, $secpasswd)

  $obj = Invoke-RestMethod -Method PUT -Uri $link -Headers $headers -Credential $mycreds -Body $body;

  Get-systemDNS;
}

Python DNS Settings

def get_sys_dns(bigip, url):
    try:
        dns = bigip.get('%s/sys/dns' % url).json()
        print "\n\n\tName Servers:"
        for server in dns['nameServers']:
            print "\t\t%s" % server
        print "\n\tSearch Domains:"
        for domain in dns['search']:
            print "\t\t%s\n\n" % domain
    except Exception, e:
        print e

def set_sys_dns(bigip, url, servers, search):
    servers = [x.strip() for x in servers.split(',')]
    search = [x.strip() for x in search.split(',')]
    payload = {}
    payload['nameServers'] = servers
    payload['search'] = search
    try:
        bigip.put('%s/sys/dns' % url, json.dumps(payload))
        get_sys_dns(bigip, url)
    except Exception, e:
        print e
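If you want to see the raw call the helper functions above are making, here's a minimal sketch using the Python requests package directly; the hostname and credentials are placeholders:

import json
import requests
requests.packages.urllib3.disable_warnings()

# build the same JSON document shown above and PUT it to /mgmt/tm/sys/dns
payload = {'nameServers': ['10.10.10.2', '10.10.10.3'], 'search': ['test.local']}
r = requests.put('https://hostname/mgmt/tm/sys/dns',
                 auth=('admin', 'admin'),
                 headers={'Content-Type': 'application/json'},
                 data=json.dumps(payload),
                 verify=False)
print(r.status_code)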
System NTP Settings

NTP is another service that some functions like APM access require. In the GUI, the settings for NTP are configured in two places. First, the servers are configured in System->Configuration->Device->NTP. The timezone is configured on the System->Platform page. In tmsh, the settings are configured in /sys/ntp. Like DNS, these objects already exist, so the iControl REST method will be a PUT instead of a POST. An example tmsh configuration for the NTP settings is below:

[root@ltm3:Active:Standalone] config # tmsh list sys ntp
sys ntp {
    servers {
        10.10.10.1
    }
    timezone America/Chicago
}

This formatted in JSON looks like this:

{"servers": ["10.10.10.1"], "timezone": "America/Chicago"}

with a PUT to https://hostname/mgmt/tm/sys/ntp.

Powershell NTP Settings

#----------------------------------------------------------------------------
function Get-systemNTP()
#
# Description
#   This function retrieves the system NTP configuration
#
#----------------------------------------------------------------------------
{
  $uri = "/mgmt/tm/sys/ntp";
  $link = "https://$Bigip$uri";
  $headers = @{};
  $headers.Add("ServerHost", $Bigip);

  $secpasswd = ConvertTo-SecureString $Pass -AsPlainText -Force
  $mycreds = New-Object System.Management.Automation.PSCredential ($User, $secpasswd)

  $obj = Invoke-RestMethod -Method GET -Headers $headers -Uri $link -Credential $mycreds

  Write-Host "`nNTP Servers";
  Write-Host "------------";
  $items = $obj.servers;
  for($i=0; $i -lt $items.length; $i++)
  {
    $name = $items[$i];
    Write-Host "`t$name";
  }

  Write-Host "`nTimezone";
  $item = $obj.timezone;
  Write-Host "`t$item`n";
}

#----------------------------------------------------------------------------
function Set-systemNTP()
#
# Description
#   This function sets the system NTP configuration
#
#----------------------------------------------------------------------------
{
  param(
    [array]$Servers,
    [string]$TimeZone
  );

  $uri = "/mgmt/tm/sys/ntp";
  $link = "https://$Bigip$uri";
  $headers = @{};
  $headers.Add("ServerHost", $Bigip);
  $headers.Add("Content-Type", "application/json");

  $obj = @{
    servers = $Servers
    timezone = $TimeZone
  };
  $body = $obj | ConvertTo-Json

  $secpasswd = ConvertTo-SecureString $Pass -AsPlainText -Force
  $mycreds = New-Object System.Management.Automation.PSCredential ($User, $secpasswd)

  $obj = Invoke-RestMethod -Method PUT -Uri $link -Headers $headers -Credential $mycreds -Body $body;

  Get-systemNTP;
}

Python NTP Settings

def get_sys_ntp(bigip, url):
    try:
        ntp = bigip.get('%s/sys/ntp' % url).json()
        print "\n\n\tNTP Servers:"
        for server in ntp['servers']:
            print "\t\t%s" % server
        print "\n\tTimezone: \n\t\t%s" % ntp['timezone']
    except Exception, e:
        print e

def set_sys_ntp(bigip, url, servers, tz):
    servers = [x.strip() for x in servers.split(',')]
    payload = {}
    payload['servers'] = servers
    payload['timezone'] = tz
    try:
        bigip.put('%s/sys/ntp' % url, json.dumps(payload))
        get_sys_ntp(bigip, url)
    except Exception, e:
        print e

Generating a qkview

Moving on from configuring some system settings to running a system task, we'll now focus on a common operational task: running a qkview. In the GUI, this is done via System->Support.
This is easy enough at the command line as well with the simple qkview command at the bash prompt, or via tmsh as shown below:

[root@ltm3:Active:Standalone] config # tmsh run util qkview
Gathering System Diagnostics: Please wait ...
Diagnostic information has been saved in:
  /var/tmp/ltm3.test.local.qkview
Please send this file to F5 support.

This formatted in JSON looks like this:

{"command": "run"}

with a POST to https://hostname/mgmt/tm/util/qkview.

Powershell - Run a qkview

#----------------------------------------------------------------------------
function Gen-QKView()
#
# Description
#   This function generates a qkview on the system
#
#----------------------------------------------------------------------------
{
  $uri = "/mgmt/tm/util/qkview";
  $link = "https://$Bigip$uri";
  $headers = @{};
  $headers.Add("serverHost", $Bigip);
  $headers.Add("Content-Type", "application/json");

  $secpasswd = ConvertTo-SecureString $Pass -AsPlainText -Force
  $mycreds = New-Object System.Management.Automation.PSCredential ($User, $secpasswd)

  $obj = @{
    command='run'
  };
  $body = $obj | ConvertTo-Json

  Write-Host ("Running qkview...standby")
  $obj = Invoke-RestMethod -Method POST -Uri $link -Headers $headers -Credential $mycreds -Body $body;
  Write-Host ("qkview is complete and is available in /var/tmp.")
}

Python - Run a qkview

def gen_qkview(bigip, url):
    payload = {}
    payload['command'] = 'run'

    try:
        print "\n\tRunning qkview...standby"
        qv = bigip.post('%s/util/qkview' % url, json.dumps(payload)).text
        if 'saved' in qv:
            print '\tqkview is complete and available in /var/tmp.'
    except Exception, e:
        print e
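The same pattern drives other util-branch commands as well. As a sketch, here's one way you might confirm the qkview file landed by listing /var/tmp through util/bash; this assumes your TMOS version exposes util/bash via REST with the utilCmdArgs property, and the host and credentials are placeholders:

import json
import requests
requests.packages.urllib3.disable_warnings()

payload = {'command': 'run', 'utilCmdArgs': "-c 'ls /var/tmp'"}
r = requests.post('https://hostname/mgmt/tm/util/bash',
                  auth=('admin', 'admin'),
                  headers={'Content-Type': 'application/json'},
                  data=json.dumps(payload),
                  verify=False)
# the command output, if any, comes back in the commandResult property
print(r.json().get('commandResult'))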
Going Further

Now that you have a couple basic tasks in your arsenal, what else might you add? Here are a couple exercises for you to build your toolbox:

- Configure sshd, creating a custom security banner.
- Configure snmp.
- Extend the qkview function to support downloading the qkview file generated.

Resources

All the scripts for this article are available in the codeshare under "Getting Started with iControl Code Samples."

Devops Needs Application Affinity

Infrastructure must balance between applications and the network because otherwise werewolves would cease to exist.

In science we're taught that gravity is the law. As it relates to us living here on earth (I can't speak for all you displaced aliens, sorry) there are two gravitational forces at work: the earth and the moon. The earth's gravity, of course, keeps us grounded. It's foundational. Without it, we're kind of up a creek (or an atmosphere, as it were) without a paddle. The moon's gravitational pull is a bit different in that it's pulling in the opposite direction. It's pulling upwards whereas the earth's gravity pulls us downward. Both of these forces are equally important. The loss of either would be devastating. I'm sure there's a made-for-SyFy movie about that happening. There's a made-for-SyFy movie about every disastrous scenario we can think of, after all. Just think of what would happen to the werewolves - especially the sparkly ones in teenage novels - if the moon disappeared. Yeah, they might disappear too.

The data center, too, has two equally important gravitational forces and they, like the moon and the earth, are pulling in opposite directions. The data center needs both too, because we don't want werewolves to disappear. Why yes, mixing metaphors and creating non-sequiturs amuses me, why do you ask?

In the data center, devops is being acted on by two gravitational forces: the network and the application. The network naturally pulls devops downward. All the myriad infrastructure necessary to support and deliver an application ultimately requires networking. But devops is, at its heart and soul, concerned about enabling continuous delivery of applications. And it is the application that pulls devops upward, toward the higher layers of the stack.

That means the growing number of "devops" focused infrastructure - think application and API proxies, for example - must recognize their role as an infrastructure (network) component while simultaneously exhibiting affinity for the applications they will be servicing. Not only does the network need to accommodate the application, but it also needs to accommodate the folks who deploy and manage the application. To do that, infrastructure components must become more attuned to the needs of applications and their owners; the network needs application affinity.

Application Affinity

As Andi Mann is wont to say, "Devops is people." Ultimately any "network" tool that's designed for devops has to balance between the gravitational forces at work from both the network and the application and offer to devops the power of the network without requiring certification in twenty or so TLA protocols. It needs to focus on the people, not the product. It needs to fall somewhere between a "network thing" and an "application." That doesn't mean less networking or network-related capabilities, but it does mean moving toward a more application-like model in which the networking complexity is not exposed. What is exposed to devops, and developers for that matter, is a more application-like approach to deploying and managing infrastructure services. Services like SSL termination, URI rewriting, virtual hosting, caching, etc... are exposed via APIs or a method that's much less complex than existing command line "fire up vi or emacs and edit this file" mechanisms. These platforms need to be more agile, more scriptable, more programmatic. They need to be more like an application and less like a networking device trying to be software.
These platforms need to enable devops to ultimately build devops services that imbue the "network" with the flexibility that comes only from being programmatic. Need to extend the "network" so it routes application requests based on user identity or referring application? A proper devops platform enables that "network application" to be developed, deployed and managed as a service. That means not only a less complex configuration model, but a programmatic interface that enables devops to extend services into the network to continuously deliver applications.

A platform that can programmatically modify its behavior based on context - real-time data about the client, the network, the application, and the request itself - must have application affinity; it must enable direct access to delivery services so that "configuration" can be modified in real time. If your platform of choice includes the directive "now save and restart so the new configuration is loaded" then it's not programmable, it's not dynamic, and it's not enabling continuous delivery the way devops is going to need continuous delivery enabled.

* There are differing opinions on how the loss of the moon would affect human beings. That it would have an effect on the oceans and the rotation of the earth and its orbit is fairly well established, but the actual impact on human beings directly is highly disputed. Unless you're a werewolf. If you're a werewolf, it's bad, period.

Programmability in the Network: Canary Deployments
#devops The canary deployment pattern is another means of enabling continuous delivery.

Deployment patterns (or as I like to call them of late, devops patterns) are good examples of how devops can put into place systems and tools that enable continuous delivery to be, well, continuous. The goal of these patterns is, for the most part, to make sure operations can smoothly move features, functions, releases or applications into production. We've previously looked at the Blue Green deployment pattern, and today we're going to look at a variation: canary deployments.

Canary deployments are applicable when you're running a cluster of servers. In other words, you've got lots and lots of (probably active right now while you're considering pushing that next release) users. What you don't want is to do the traditional "we're sorry, we're down for maintenance, here's a picture of a funny squirrel to amuse you while you wait" maintenance page. You want to be able to roll out the new release without disruption. Yeah, that's quite the ask, isn't it?

The canary deployment pattern is an incremental upgrade methodology. First, the build is pushed to a small set of servers to which only a select group of users are directed. If that goes well, the release is pushed to a larger set of servers with a limited set of users. Finally, if that goes well, then the release is pushed out to all servers and all users. If issues occur at any stage, the release is halted - it goes no further. Hence the naming of the pattern - after the miner's canary, used because "its demise provided a warning of dangerous levels of toxic gases".

The trick to implementing this pattern is twofold: first, being able to group the servers used in each step into discrete pools and second, the ability to direct specific sets of users to the appropriate pools. Both capabilities require the ability to execute some logic to perform user-based load balancing.

Nolio, in its first Devops Best Practices video, implements canary deployments by manipulating the pools of servers at the load balancing tier, removing them to upgrade and then reinserting them for testing before moving on to the next phase. If your load balancing solution is programmable, there's no need to actually remove them, as you can simply insert logic to remove them from being selected until they've been upgraded. You can also then insert the logic to determine which users are directed to which pool of servers. If the load balancing platform is really programmable, you can even extend that determination to querying a database to determine user inclusion in certain groups, such as those you might use to perform A/B testing. Such logic might base the decision on IP address (not the best option, but an option) or later, when you're actually rolling out to a percentage of users, you can write logic that randomly selects users based on location or their user name - like sharding, only in reverse - or pretty much anything you can think of. You can even split that further if you're rolling out an update to an API that's used by both mobile and traditional clients, to catch both or neither or specific types in an orderly fashion so you can test methodically - because you want to test methodically when you're using live users as test subjects.

The beauty of this pattern is that it allows continuous delivery. Users are never disrupted (if you do it right) and the upgrade occurs in a safely staged, incremental fashion.
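To make the user-selection idea concrete, here's a minimal sketch in Python (the same logic ports to iRules or node.js on the platforms discussed here) of deterministic, percentage-based user bucketing; the function name and pool names are illustrative, not part of any product API:

import hashlib

def in_canary(username, rollout_percent):
    # hash the user name so the same user always lands in the same bucket,
    # then admit the user to the canary pool if the bucket falls under the
    # current rollout percentage (e.g. 1% -> 10% -> 50% -> 100%)
    bucket = int(hashlib.md5(username.encode()).hexdigest(), 16) % 100
    return bucket < rollout_percent

pool = 'canary_pool' if in_canary('alice', 10) else 'stable_pool'
print(pool)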
An approach like this also enables you to back out quickly if necessary, because you do have a back button plan, right? Right?

Getting Started with iControl: Interface Taxonomy & Language Libraries
In part two of this article in the Getting Started with iControl series, we will cover the taxonomy for both the SOAP and REST iControl interfaces that were discussed in the previous article on iControl history, as well as introduce the language libraries we unofficially support on DevCentral.

Note: This article was published by Jason Rahm but co-authored with Joe Pruitt.

Taxonomy

The two interfaces were designed in different ways and with different goals. iControl SOAP was developed as a functional API, behaving very much like a Java or C++ class where you have interfaces and methods included within those interfaces. Interfaces roughly related to product features, and for organization, those interfaces were then grouped into larger namespaces, or product modules. To get a little more specific, iControl SOAP implements an RPC (Remote Procedure Call) type model. To make use of the APIs, one just instantiated an instance of the interface and then used functional (create(), get_list(), ...) and accessor (get_*(), set_*(), ...) methods with it.

iControl REST, on the other hand, is based on a document model where documents are passed back and forth containing all the relevant information of the given object(s) in question. This interface makes use of the HTTP protocol's VERBs (GET, POST, PUT, PATCH, DELETE) for action and URIs (i.e. /mgmt/tm/ltm/pool) for the objects you are manipulating. Instead of using methods to perform actions, you use the associated HTTP VERB to trigger the action, the URI to denote the object, and the POST data or query parameters to indicate the object's details. iControl REST is currently modeled off of the tmsh shell, which uses similar modules and interfaces from SOAP (with different abbreviations) in its URI path.

iControl SOAP
    Module        Interface    Method
    LocalLB       Pool         get_list(), create(), add_member()
    Networking    SelfIP       get_netmask(), set_vlan()

iControl REST
    Module    Sub-Module    Component
    ltm       pool          <pool name>
    net       self          <self name>

Now for some simple examples using the SOAP and REST interfaces with python libraries. Don't worry if python's not your thing, we'll cover some perl, powershell, java, and Node.js as well in the following articles in this series.

### ignore SSL cert warnings
>>> import ssl
>>> ssl._create_default_https_context = ssl._create_unverified_context

### SOAP
>>> import bigsuds
>>> b_soap = bigsuds.BIGIP('172.16.44.15', 'admin', 'admin')
>>> b_soap.LocalLB.Pool.get_list()
['/Common/myNewPool2']

### REST
>>> from f5.bigip import BigIP as f5
>>> b_rest = f5('172.16.44.15', 'admin', 'admin')
>>> b_rest.ltm.pools.get_collection()[0].name
u'myNewPool2'

Note there really isn't much difference between the two iControl interfaces when it comes to the client code. This is the beauty of libraries, they mask the hard work! We'll dig into the code more in the next article, so don't worry too much about the code above just yet.

Interfaces & Objects

In the above section, we briefly outlined the high level taxonomy between module and interface. There are literally thousands of defined methods for the SOAP interface, all defined and published in the iControl wiki on DevCentral. For the REST interface, you will want to reference the iControl REST wiki as well, but you will really want to become intimate with the tmsh reference guide, as a solid understanding of how tmsh works will make for much smoother going.
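For contrast, here's roughly what the bigsuds and f5-common-python calls above are doing for you under the covers, sketched with the plain requests package against the REST interface (same placeholder address and credentials as the examples above):

import requests
requests.packages.urllib3.disable_warnings()

r = requests.get('https://172.16.44.15/mgmt/tm/ltm/pool',
                 auth=('admin', 'admin'), verify=False)
# each configuration object comes back as a JSON document, with attributes
# exposed as properties like name, partition, and fullPath
for pool in r.json().get('items', []):
    print(pool['fullPath'])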
Language Libraries

Raw SOAP payload is out of the question unless you enjoy pain, so a library is a given if using the SOAP interface. But even with REST, as easy as it is to interact with the interface directly, the fact is that if you aren't using a library, you still have to create all your JSON documents and investigate and prepare all the payload requirements when manipulating or creating objects. When selecting SOAP or REST, the reality is that if you're going to build applications that utilize iControl, you're definitely going to want to use a library.

What is a Library?

Depending on the language or languages you might use, a library can be called many things, such as a package, a wrapper, a module, a class, etc. But at its most basic, a library is simply a collection of tools and/or functionality that abstracts some of the work required in understanding the underlying protocols and assists in getting things done faster. For example, if you are formatting an office document and changing the font size, color, and family, and bolding select words throughout the doc, it could be painful to do this if you have to do it more than a few times. You could record a macro of these steps, and then for each subsequent task, replay that work via the macro so you only have to do it once. In a roundabout way, this is what libraries do for you as well. The common repetitive work is done for you, so you can focus on the business logic in your applications.

What Libraries Are Available?

There are several libraries available, some languages having more than one to support both iControl SOAP and iControl REST, and at least one (like python) that has several libraries for just the SOAP interface. To each his own; we like to expand the horizons around here and equip everyone for whatever language suits best for your needs and abilities. We could list out all the libraries here, but they are already documented on the programmability page in the wiki, at least the ones we are directly responsible for. Joe does a lot with PowerShell and Perl, and I with python, and we do have quite a few community members well versed in the .Net languages and python too, though there is some support for java and ruby as well. On Github I've seen an iControl wrapper for the Go language, and I seem to recall some love for PHP a while back.

Now that we're through the history and some foundational information on the SOAP and REST interfaces, we can move on to actually building something! In the next article in this series, we'll jump into some code samples to highlight the pros and cons of working with SOAP and REST, as well as show you the differences among a few languages for each interface in accomplishing the same tasks.

Getting Started with iControl: History
tl;dr - iControl provides access to BIG-IP management plane services through SOAP and REST API interfaces.

The Early Days

iControl started back in early 2000. F5 had 2 main products: BIG-IP and 3-DNS (later GTM, now BIG-IP DNS). BIG-IP managed the local datacenter's traffic, while 3-DNS was the DNS orchestrator for all the BIG-IPs in numerous data centers. The two products needed a way to communicate with each other to ensure they were making the right traffic management decisions respective to all of the products in the system. At the time, the development team was focused on developing the fastest running code possible, and that idea found its way into the cross product communication feature that was developed. The technology the team chose to use was the Common Object Request Broker Architecture (CORBA) as standardized by the Object Management Group (OMG). Coming hot off the heels of F5's first management product SEE-IT (which was another one of my babies), the dev team coined this internal feature as "LINK-IT" since it "linked" the two products together.

With the development of our management, monitoring, and visualization product SEE-IT, we needed a way to get the data off of the BIG-IP. SEE-IT was written for Windows Server and we really didn't want to go down the route of integrating into the CORBA interface due to several factors. So, we wrote a custom XML provider on BIG-IP and 3-DNS to allow for configuration and statistic data to be retrieved and consumed by SEE-IT. It was becoming clear to me that automation and customization of our products would be beneficial to our customers, who had been previously relying on our SNMP offerings. We now had 2 interfaces for managing and monitoring our devices: one purely internal (LINK-IT) and the other partially so (the XML provider). The XML provider was very specific to our SEE-IT product's use case and we didn't see a benefit in trying to morph that, so we looked back at LINK-IT to see what we could do to make it a publicly supported interface. We began work on documenting and packaging it into F5's first public SDK.

About that time, a new standard was emerging for exchanging structured information. The Simple Object Access Protocol (SOAP), which allows for structured information exchange, was being developed but not fully ratified until version 1.2 in 2003. I had to choose to roll our own XML implementation or make use of this new proposed specification. There was risk, as the specification was not a standard yet, but I made the choice to go the SOAP route as I felt that picking a standard format would give us the best 3rd party software compatibility down the road. Our CORBA interface was built on a nice class model which I used as a basis for a SOAP/XML wrapper on top of that code. I even had a great code name for the interface: X-LINK-IT! For those who were around when I gave my "XML at F5" presentation to F5 Product Development, you may remember the snide comments going around afterwards about how XML was not a great technology and a big mistake to support. Good thing I didn't listen to them...

At this point in mid-2001, the LINK-IT SDK was ready to go and development of X-LINK-IT was well underway. Well, let's just say that Marketing didn't agree with our ingenious product naming and jumped in to VETO our internal code names for our public release. I'll give our Chief Marketer Jeff Pancottine credit for coining the term "iControl" which was explained to me as "Internet Control".
This was the start of F5's whole Internet Controlled Architecture messaging, by the way. So, LINK-IT and X-LINK-IT were dead and iControl CORBA and iControl SOAP were born.

The Death of CORBA, All Hail SOAP

The first version of the iControl SDK for CORBA was released on 5/25/2001 with the SOAP version trailing it by a few months. This was around the BIG-IP version 3 time frame. We chugged along for a few years through the BIG-IP version 4 life and then a big event occurred that was the demise for CORBA - well, it was actually 2 events. The first event was the full rewrite of the BIG-IP data plane when TMOS was introduced in BIG-IP version 9 (we skipped from version 4 to version 9 for some reason that slips my mind). Since virtually the entire product was rewritten, the interfaces that were tied to the product features would have to change drastically. We used this as an opportunity to look at the next evolution of iControl. Until this point, iControl SOAP was just a shim on top of CORBA and it had some performance issues, so we worked at splitting them apart and having SOAP talk directly to our configuration engine. Now we had 2 interface stacks side by side. The second event was learning we only had 1 confirmed customer using the CORBA interface compared to the 100's using SOAP. Given that knowledge, and now that BIG-IP and 3-DNS no longer used iControl CORBA to talk to each other, the decision was made to End of Life iControl CORBA with version 9.0. But iControl SOAP still used the CORBA IDL files for its API definitions and documentation, so fun trivia note: the same CORBA tools are still in place in today's iControl SOAP build tools that were there in version 3 of BIG-IP. I'm fairly sure that is the longest running component in our build system.

The Birth of DevCentral

I can't speak about iControl without mentioning DevCentral. We had our iControl SDK out but nowhere to directly support the developers using it. At that time, F5 was a "hardware" company and product support wasn't ready to support application developers. Not many know that DevCentral was created due to the popularity of iControl with our customer base, and was born on a PC under my desk in 2003. I continued to help with DevCentral part time for a few years, but in 2007 I decided to work full time on building our community and focusing 100% on DevCentral. It was about this time that we were pushing the idea of merging the application and infrastructure teams together - or at least getting them to talk more frequently. This was a precursor to the whole DevOps mentality, so I'd like to think we were a bit ahead of the curve on that.

Enter iControl REST

In 2013, iControl was reaching its teenage years and starting to show its age a bit. While SOAP is still supported by all the major tool vendors, application development was shifting to richer browser-based apps. And with that, Representational State Transfer (REST) was gaining steam. REST defined a usage pattern for using browser-based mechanisms with HTTP to access objects across the network, with a JavaScript Object Notation (JSON) format for the content. To keep up with current technologies, the PD team at F5 developed the first REST/JSON interface in BIG-IP version 11.5 as Early Access, and it was made Generally Available in version 11.6. With the REST interface, more modern web paradigms could be supported and you could actually code to iControl directly from a browser!
There were also additional interface-based tools for developing and debugging built directly into the service, giving service listings and schema definitions. At the time of this writing, F5 supports both of the main iControl interfaces (SOAP and REST) but is focusing all new energy on our REST platform for the future. For those who have developed SOAP integrations, have no fear, as that interface is not going away. It will just likely not get all the new feature support that will be added into the REST interfaces over time.

SDKs, Toolkits, and Libraries

Through the years, I've developed several client libraries (.Net, PowerShell, Java) for iControl SOAP to assist with ramp-up time for initial development. There have also been numerous other language-based libraries for languages like Ruby, PHP, and Python developed by the community and other development teams. Most recently, F5 has published the iControl library for Python, which is available as part of our OpenStack integration. DevCentral is your place for our API documentation, where we update our wiki with API changes on each major release. And as time rolls on, we are adding REST support for new and existing products such as iWorkflow, BIG-IQ, and other products yet to be released that will include SDKs and other reference material.

F5 has a strong commitment to our end users and their automation and integration projects with F5 technologies. Coming full circle, my current role is overseeing APIs and SDKs for standards, consistency, and completeness in our Programmability and Orchestration (P&O) team, so keep a look out for future articles from me on our efforts on that front. And REST assured, we will continue to do all we can to help our customers move to new architectures and deployment models with programmability and automation with F5 technologies.

F5 Programmability for Eclipse - Installation Instructions
It's been a long time coming, but I'm pleased to announce a brand new editor for your iRules: F5 Programmability for Eclipse! This editor supports iRules & iRules LX in its initial release, but stay tuned for more features down the road. One of the cool features from my quick glance this afternoon is simultaneous multi-system support.

F5 Programmability for Eclipse

F5 Programmability for Eclipse version 1.0 allows you to use the Eclipse IDE to manage iRules and iRuleLX development. By using Eclipse, you can connect to one or more BIG-IP devices to view, modify or create iRules or iRuleLX workspaces. The editor functionality includes TCL/iRules and JavaScript language syntax highlighting, code completion, and hover documentation for the iRules API.

Note: This tool is provided via DevCentral as a free tool to help you better leverage iRules, and is in no way officially supported by F5 or F5 Professional Services. All support, questions, comments or otherwise for this editor should be submitted in the Q&A section of DevCentral. Please make sure to tag irules and/or iruleslx, as well as adding a custom editor tag.

System Requirements

The system requirements for the F5 Programmability for Eclipse are:

- Eclipse installed - Minimum version: Luna (v4.4); Recommended: Mars (v4.5) or Neon (v4.6)
- Java version 1.7 or later installed
- Network access to one or more F5 Networks BIG-IP systems (TMOS version 12.1+)

Verified Software Combinations

F5 Programmability for Eclipse, version 1.0, has been tested for compatibility with the following configurations:

OS                    Eclipse Version    Java Version    TMOS Version
Linux (CentOS 6.3)    Mars 4.5.1         1.7.0_70        12.1
Linux (CentOS 6.6)    Mars 4.5.1         1.7.0_79        12.1
Linux (CentOS 6.7)    Mars 4.5.2         1.7.0_101       12.1
MacOS (v10.8)         Mars 4.5.2         1.8.0_91        12.1
MacOS (v10.11.5)      Mars 4.5.2         1.8.0_91        12.1
MacOS (v10.11.5)      Neon 4.6           1.8.0_91        12.1
Windows (v7)          Luna 4.4.2         1.8.0_91        12.1
Windows (v7)          Mars 4.5.2         1.8.0_91        12.1
Windows (v7)          Neon 4.6           1.8.0_91        12.1

Installation

To install the F5 Programmability for Eclipse plug-in:

1. Start the version of Eclipse that you installed.
2. Click on Help > Install New Software…
3. Verify that the Mars or Neon release URL is listed in the Works with drop down. If you are running the Luna version, or the release URL is not listed:
   - Click Add…
   - Type the text "http://download.eclipse.org/releases/mars" into the Location field (omit the quotation marks)
   - Click OK
4. Add a repository for the F5 plug-in:
   - Click Add…
   - Type the text "http://cdn.f5.com/product/f5-eclipse/v1.0.0" into the Location field (omit the quotation marks)
   - Click OK
5. After you add the repository, F5 Networks or F5 Programmability for Eclipse should appear in the Available Software dialog. Check the box next to the F5 item.
6. Check the Contact all update sites box and click Next.
7. After you review the installation items, click Next.
8. Read the License Agreement, check I accept the terms of the license agreement, and click Finish.
9. When prompted to restart Eclipse, click Yes.

Common tasks

Select the F5 perspective

To use the F5 plug-in, you must activate the F5 perspective. In Eclipse:

1. Click on Window > Perspective > Open Perspective > Other…
2. Find and select the F5 perspective
3. Click OK

Connect to a BIG-IP system

In the F5 perspective you chose, there are three views: an Explorer pane on the left hand side, an Editor pane on the right hand side, and a Log panel along the bottom. The Explorer pane includes a toolbar with 3 buttons.
1. On the iRules & iRuleLX tab, click the New BIG-IP Connection toolbar button (at the top of the Explorer pane)
2. When prompted, provide the IP address and credentials for the BIG-IP system. You may store your credentials in a secure store. If you use the secure store, you must enter a master password for each session.
3. Click Finish

When you connect to a BIG-IP system, all LTM iRules, GTM iRules, and iRuleLX workspaces are loaded. The process may take a few seconds. When the process completes, you can expand the connection folder and subfolders. You can connect to multiple BIG-IP systems simultaneously. Note that folders exist for provisioned product modules on the BIG-IP system to which you connected. If you have not provisioned GTM or iRuleLX, no folder appears for that module.

Note: To access a BIG-IP system remotely via Eclipse, the user account must have been assigned a role of administrator. Every BIG-IP system includes a user with the name admin which has an administrator role. For creating additional users with an administrator role, refer to the BIG-IP guide for User Account Administration.

Change the BIG-IP partition

The default partition Common is loaded when you connect to a BIG-IP system. If you would like to use a different partition:

1. In the Explorer, select the BIG-IP Connection
2. Click either the Gears toolbar button, or use the context menu (right click), to open the Project Properties dialog.
3. Select a new partition
4. Click OK. Content will be loaded from the BIG-IP partition you selected.

Open an iRule or iRuleLX file to view or edit

Click Open in the context menu, or double-click on the file in the Explorer. The content is pulled from the BIG-IP and an edit tab is created. This step creates a file on the local filesystem.

Add or delete iRules, ILX workspaces, extensions, and files

Use the context menu (right click) for Add and Delete menu items. These items are available in folders that contain iRules and iRuleLX resources.

Saving an iRule or iRuleLX file

Use File > Save, the Save button on the toolbar, or type Ctrl-S. Any of these actions saves the file locally and pushes the changes to the BIG-IP system.

Reloading an iRules or iRuleLX file

Each file has a Reload context menu selection. Click Reload to reload the edit buffer and sync the changes with the BIG-IP system.

Reloading all BIG-IP content

To reload all content from each connected BIG-IP system, click the Reload All toolbar button in the Explorer. This action closes all open editors after prompting you to save any unsaved changes.

General information about iRule editing features

The editor used for editing iRules includes code completion and hover documentation for iRule commands and events. Consistent with the default Eclipse behavior, typing Ctrl-Space will check for available completion proposals for the current word being typed. Also, completions will be invoked after adding a colon (":") to a word. This is handy for completions of namespaced iRule commands, e.g. TCP::. The same completion popup documentation can be seen by hovering over an iRule command or event.

The F5 plug-in does not include the man pages for the standard Tcl commands. To enable hover documentation for those standard Tcl commands, you can download the man pages and link them into the Tcl and iRule editors. The manual pages can be downloaded from here: Tcl. Once uncompressed, the containing directory can be linked to Eclipse via the global preferences dialog (Window > Preferences for Linux and Windows, Eclipse > Preferences for MacOS).
Under the Tcl > Man Pages section, click Configure…, then click Add. Specify a name of your choosing for the documentation set and add the file path to the local directory which contains the docs.

JavaScript editing

Opening a JavaScript file within an iRuleLX project will invoke a specific JavaScript editor (from the JSDT plug-in). This editor includes syntax highlighting, validation, completions, etc. The editing features can be configured via the global preferences dialog under the JavaScript section (Window > Preferences for Linux and Windows, Eclipse > Preferences for MacOS).

File editing

File types other than iRule or JavaScript will be opened with the editor identified by Eclipse to be appropriate. Depending on the file associations you have configured for your operating system, Eclipse may decide to open an external editor. To override this and choose the editor type of your own choosing, you can define a file association via the global preferences dialog under the General > Editors > File Associations section.

Known Issues

- ID 592459: F5 Perspective Dialogs do not include Help button content.
- ID 594055: The formatter page within the Project Properties dialog can cause an error: "The currently displayed page contains invalid values". The Eclipse bug is tracked here: https://bugs.eclipse.org/bugs/show_bug.cgi?id=476261
- ID 594413: Code folding within an iRule 'when' block only works for contained comment blocks, not code blocks.
- ID 598035: iRule command completion does not work with Tcl execute statements. The workaround is to compose the statement outside the context of square brackets, then add the brackets later.
- ID 599573: Specific standard Tcl commands which are disabled for iRules (exit, exec, etc.) are highlighted/completed as if they are supported commands for iRules.
- ID 599574: A connection failure during a file Save operation leaves the file appearing that it's saved (* decorator removed) when in fact it's out of sync with the corresponding content on the BIG-IP. If a file save is unsuccessful, the user should try to determine the cause of the failure: check the connection, status of the BIG-IP, etc. Once that issue is resolved, re-initiating the file save, if successful, will then leave both copies of the file in sync. Until a successful save is accomplished, the file on the local disk will be maintained, even if Eclipse is shut down, so the edits aren't lost, just not in sync.
- ID 600422: Eclipse running on Windows will not escape a "\" line continuation character, which may cause validation errors upon saving an iRule. The workaround is to avoid line continuations within iRules.

Newton's First Law of Devops
#devops Deployment patterns enabling continuous delivery help align operations with development - and keep user frustration to a minimum.

One of the most difficult transitions for operations to make is the need to support releases more frequently. While most organizations will never match the velocity of a Facebook or Netflix, there is still a desire to eliminate the "reduced speed zone" most applications run into when trying to move into production. The reason for the reduced speed zone is the potential for downtime, not only for the application (version) being released but for other applications and services dependent on the same infrastructure. Misconfiguration is, after all, the leading cause of downtime in data centers across the globe. Thus, operations has good reason to be wary.

And yet reducing the speed at which an application actually moves into production is exactly the opposite direction application development - and the business - want to go. They want to go faster, to be more agile, to release incrementally. Because application development today follows Newton's First Law of Motion: an object at rest stays at rest and an object in motion stays in motion with the same speed and in the same direction unless acted upon by an unbalanced force. Operations does not want to be that unbalanced force that stops the object. That's why we're seeing more "deployment patterns" or "devops patterns". That's why we're seeing an increasing need for programmability in the data path, i.e. in the network.

Today, the complexity of infrastructure in production environments - the network, the application infrastructure, the point solutions, the services - impedes continuous deployment efforts by failing to meet devops' need to not just automate, but implement the patterns and emerging architectures that keep deployments moving in the same direction and with the same speed as they are released by developers. These patterns depend on agility that is achieved primarily through programmability.

Whether it's to implement blue-green deployments, A/B testing, versioning patterns or canary deployments, devops is in need of a devops platform, if you will, that will provide the programmability necessary to implement the deployment patterns that enable continuous delivery. A platform that eliminates the unbalanced force that has traditionally slowed down the release cycle, frustrating users and the business alike. That platform needs to be fast, flexible, and programmable. It needs to be able to bridge the gap between traditional and modern infrastructure. It needs to be API-enabled and yet easy to manage. More than anything else, that platform needs to exist.

Getting Started with iControl: Working with Statistics
In the previous article in the Getting Started with iControl series, we threw the lion's share of languages at you for both iControl portals, exposing you to many of the possible attack angles available for programmatically interacting with the BIG-IP. In this article, we're going to scale back a bit, focusing a few REST-based examples on how to gather statistics.

Note: This article was published by Jason Rahm but co-authored with Joe Pruitt.

Statistics on BIG-IP

As much as you can do with iControl, it isn't the only methodology for gathering statistics. In fact, I'd argue that there are at least two better ways to get at them, besides the on-box rrd graphs:

- SNMP - Yes, it's old school. But it has persisted as long as it has for a reason. It's broad and lightweight. It does, however, require client-side software to do something with all that data.
- AVR - By provisioning the AVR module on BIG-IP, you can configure an expansive amount of metrics to measure. Combined with syslog to send the data off-box, this push approach is also pretty lightweight, though like with snmp, having a tool to consume those stats is helpful. Ken Bocchino released an analytics iApp that combines AVR and its data output with Splunk and its indexing ability to deliver quite a solution.

But maybe you don't have those configured, or available, and you just need to do some lab testing and would like to consume those stats locally as you generate your test traffic. In these code samples, we'll take a look at virtual server and pool stats.

Pool Stats

At the pool level, several stats are available for consumption, primarily around bandwidth utilization and connection counts. You can also get state information, though I'd argue that's not really a statistic, unless I suppose you are counting how often the state is available. With SNMP, the stats come in high/low form, so you have to do some bit shifting in order to arrive at the actual value. Not so with iControl. The value comes as is.
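For the curious, here's what that bit shifting looks like - a toy example, not tied to any particular SNMP library, recombining a 64-bit counter that arrives as two 32-bit words:

def combine_counter(high, low):
    # SNMP-style high/low words: shift the high word up 32 bits, OR in the low word
    return (high << 32) | low

print(combine_counter(2, 705032704))  # 9294967296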
In each of these samples below, you'll see two functions. The first is the list function, where when you run the script you will get a list of pools so you can then specify which pool you want stats for. The second function will then take that pool as an argument and return the statistics.

Node.Js Pool Stats

//--------------------------------------------------------
function getPoolList(name) {
//--------------------------------------------------------
  var uri = "/mgmt/tm/ltm/pool";
  handleVERB("GET", uri, null, function(json) {
    //console.log(json);
    var obj = JSON.parse(json);
    var items = obj.items;
    console.log("POOLS");
    console.log("-----------");
    for(var i=0; i<items.length; i++) {
      var fullPath = items[i].fullPath;
      console.log("  " + fullPath);
    }
  });
}

//--------------------------------------------------------
function getPoolStats(name) {
//--------------------------------------------------------
  var uri = "/mgmt/tm/ltm/pool/" + name + "/stats";
  handleVERB("GET", uri, null, function(json) {
    //console.log(json);
    var obj = JSON.parse(json);
    var selfLink = obj.selfLink;
    var entries = obj.entries;
    for(var n in entries) {
      console.log("--------------------------------------");
      console.log("NAME                : " + getStatsDescription(entries[n].nestedStats, "tmName"));
      console.log("--------------------------------------");
      console.log("AVAILABILITY STATE  : " + getStatsDescription(entries[n].nestedStats, "status.availabilityState"));
      console.log("ENABLED STATE       : " + getStatsDescription(entries[n].nestedStats, "status.enabledState"));
      console.log("REASON              : " + getStatsDescription(entries[n].nestedStats, "status.statusReason"));
      console.log("SERVER BITS IN      : " + getStatsValue(entries[n].nestedStats, "serverside.bitsIn"));
      console.log("SERVER BITS OUT     : " + getStatsValue(entries[n].nestedStats, "serverside.bitsOut"));
      console.log("SERVER PACKETS IN   : " + getStatsValue(entries[n].nestedStats, "serverside.pktsIn"));
      console.log("SERVER PACKETS OUT  : " + getStatsValue(entries[n].nestedStats, "serverside.pktsOut"));
      console.log("CURRENT CONNECTIONS : " + getStatsValue(entries[n].nestedStats, "serverside.curConns"));
      console.log("MAXIMUM CONNECTIONS : " + getStatsValue(entries[n].nestedStats, "serverside.maxConns"));
      console.log("TOTAL CONNECTIONS   : " + getStatsValue(entries[n].nestedStats, "serverside.totConns"));
      console.log("TOTAL REQUESTS      : " + getStatsValue(entries[n].nestedStats, "totRequests"));
    }
  });
}

Powershell Pool Stats

#----------------------------------------------------------------------------
function Get-PoolList()
#
# Description:
#   This function returns the list of pools
#
# Parameters:
#   None
#----------------------------------------------------------------------------
{
  $uri = "/mgmt/tm/ltm/pool";
  $link = "https://$Bigip$uri";
  $headers = @{};
  $headers.Add("ServerHost", $Bigip);

  $secpasswd = ConvertTo-SecureString $Pass -AsPlainText -Force
  $mycreds = New-Object System.Management.Automation.PSCredential ($User, $secpasswd)

  $obj = Invoke-RestMethod -Method GET -Headers $headers -Uri $link -Credential $mycreds

  $items = $obj.items;
  Write-Host "POOL NAMES";
  Write-Host "----------";
  for($i=0; $i -lt $items.length; $i++)
  {
    $name = $items[$i].fullPath;
    Write-Host "  $name";
  }
}
GET -Headers $headers -Uri $link -Credential $mycreds $entries = $obj.entries; $names = $entries | get-member -MemberType NoteProperty | select -ExpandProperty Name; $desc = $entries | Select -ExpandProperty $names $nestedStats = $desc.nestedStats; Write-Host ("--------------------------------------"); Write-Host ("NAME : $(Get-StatsDescription $nestedStats 'tmName')"); Write-Host ("--------------------------------------"); Write-Host ("AVAILABILITY STATE : $(Get-StatsDescription $nestedStats 'status.availabilityState')"); Write-Host ("ENABLED STATE : $(Get-StatsDescription $nestedStats 'status.enabledState')"); Write-Host ("REASON : $(Get-StatsDescription $nestedStats 'status.statusReason')"); Write-Host ("SERVER BITS IN : $(Get-StatsValue $nestedStats 'serverside.bitsIn')"); Write-Host ("SERVER BITS OUT : $(Get-StatsValue $nestedStats 'serverside.bitsOut')"); Write-Host ("SERVER PACKETS IN : $(Get-StatsValue $nestedStats 'serverside.pktsIn')"); Write-Host ("SERVER PACKETS OUT : $(Get-StatsValue $nestedStats 'serverside.pktsOut')"); Write-Host ("CURRENT CONNECTIONS : $(Get-StatsValue $nestedStats 'serverside.curConns')"); Write-Host ("MAXIMUM CONNECTIONS : $(Get-StatsValue $nestedStats 'serverside.maxConns')"); Write-Host ("TOTAL CONNECTIONS : $(Get-StatsValue $nestedStats 'serverside.totConns')"); Write-Host ("TOTAL REQUESTS : $(Get-StatsValue $nestedStats 'totRequests')"); } Python Pool Stats def get_pool_list(bigip, url): try: pools = bigip.get("%s/ltm/pool" % url).json() print "POOL LIST" print " ---------" for pool in pools['items']: print " /%s/%s" % (pool['partition'], pool['name']) except Exception, e: print e def get_pool_stats(bigip, url, pool): try: pool_stats = bigip.get("%s/ltm/pool/%s/stats" % (url, pool)).json() selflink = "https://localhost/mgmt/tm/ltm/pool/%s/~Common~%s/stats" % (pool, pool) nested_stats = pool_stats['entries'][selflink]['nestedStats']['entries'] print '' print ' --------------------------------------' print ' NAME : %s' % nested_stats['tmName']['description'] print ' --------------------------------------' print ' AVAILABILITY STATE : %s' % nested_stats['status.availabilityState']['description'] print ' ENABLED STATE : %s' % nested_stats['status.enabledState']['description'] print ' REASON : %s' % nested_stats['status.statusReason']['description'] print ' SERVER BITS IN : %s' % nested_stats['serverside.bitsIn']['value'] print ' SERVER BITS OUT : %s' % nested_stats['serverside.bitsOut']['value'] print ' SERVER PACKETS IN : %s' % nested_stats['serverside.pktsIn']['value'] print ' SERVER PACKETS OUT : %s' % nested_stats['serverside.pktsOut']['value'] print ' CURRENT CONNECTIONS : %s' % nested_stats['serverside.curConns']['value'] print ' MAXIMUM CONNECTIONS : %s' % nested_stats['serverside.maxConns']['value'] print ' TOTAL CONNECTIONS : %s' % nested_stats['serverside.totConns']['value'] print ' TOTAL REQUESTS : %s' % nested_stats['totRequests']['value'] Virtual Server Stats The virtual server stats are very similar in nature to the pool stats, though you are getting an aggregate view of traffic to the virtual server, whereas there might be several pools of traffic being serviced by that single virtual server. In any event, if you compare the code below to the code from the pool functions, they are nearly functionally identical. The only differences should be discernable: the method calls themselves and then the object properties that are different (clientside/serverside for example.) 
Node.js Virtual Server Stats

//--------------------------------------------------------
function getVirtualList() {
//--------------------------------------------------------
    var uri = "/mgmt/tm/ltm/virtual";
    handleVERB("GET", uri, null, function(json) {
        var obj = JSON.parse(json);
        var items = obj.items;
        console.log("VIRTUALS");
        console.log("-----------");
        for (var i = 0; i < items.length; i++) {
            console.log("  " + items[i].fullPath);
        }
    });
}

//--------------------------------------------------------
function getVirtualStats(name) {
//--------------------------------------------------------
    var uri = "/mgmt/tm/ltm/virtual/" + name + "/stats";
    handleVERB("GET", uri, null, function(json) {
        var obj = JSON.parse(json);
        var entries = obj.entries;
        for (var n in entries) {
            console.log("--------------------------------------");
            console.log("NAME                : " + getStatsDescription(entries[n].nestedStats, "tmName"));
            console.log("--------------------------------------");
            console.log("DESTINATION         : " + getStatsDescription(entries[n].nestedStats, "destination"));
            console.log("AVAILABILITY STATE  : " + getStatsDescription(entries[n].nestedStats, "status.availabilityState"));
            console.log("ENABLED STATE       : " + getStatsDescription(entries[n].nestedStats, "status.enabledState"));
            console.log("REASON              : " + getStatsDescription(entries[n].nestedStats, "status.statusReason"));
            console.log("CLIENT BITS IN      : " + getStatsValue(entries[n].nestedStats, "clientside.bitsIn"));
            console.log("CLIENT BITS OUT     : " + getStatsValue(entries[n].nestedStats, "clientside.bitsOut"));
            console.log("CLIENT PACKETS IN   : " + getStatsValue(entries[n].nestedStats, "clientside.pktsIn"));
            console.log("CLIENT PACKETS OUT  : " + getStatsValue(entries[n].nestedStats, "clientside.pktsOut"));
            console.log("CURRENT CONNECTIONS : " + getStatsValue(entries[n].nestedStats, "clientside.curConns"));
            console.log("MAXIMUM CONNECTIONS : " + getStatsValue(entries[n].nestedStats, "clientside.maxConns"));
            console.log("TOTAL CONNECTIONS   : " + getStatsValue(entries[n].nestedStats, "clientside.totConns"));
            console.log("TOTAL REQUESTS      : " + getStatsValue(entries[n].nestedStats, "totRequests"));
        }
    });
}

PowerShell Virtual Server Stats

#----------------------------------------------------------------------------
function Get-VirtualList()
#
#  Description:
#    This function lists all virtual servers.
#
#  Parameters:
#    None
#----------------------------------------------------------------------------
{
    $uri = "/mgmt/tm/ltm/virtual";
    $link = "https://$Bigip$uri";
    $headers = @{};
    $headers.Add("ServerHost", $Bigip);

    $secpasswd = ConvertTo-SecureString $Pass -AsPlainText -Force
    $mycreds = New-Object System.Management.Automation.PSCredential ($User, $secpasswd)

    $obj = Invoke-RestMethod -Method GET -Headers $headers -Uri $link -Credential $mycreds
    $items = $obj.items;
    Write-Host "VIRTUAL SERVER NAMES";
    Write-Host "--------------------";
    for($i=0; $i -lt $items.length; $i++) {
        $name = $items[$i].fullPath;
        Write-Host "  $name";
    }
}

#----------------------------------------------------------------------------
function Get-VirtualStats()
#
#  Description:
#    This function returns the statistics for a virtual server
#
#  Parameters:
#    Name - The name of the virtual server
#----------------------------------------------------------------------------
{
    param([string]$Name);

    $uri = "/mgmt/tm/ltm/virtual/${Name}/stats";
    $link = "https://$Bigip$uri";
    $headers = @{};
    $headers.Add("ServerHost", $Bigip);

    $secpasswd = ConvertTo-SecureString $Pass -AsPlainText -Force
    $mycreds = New-Object System.Management.Automation.PSCredential ($User, $secpasswd)

    $obj = Invoke-RestMethod -Method GET -Headers $headers -Uri $link -Credential $mycreds
    $entries = $obj.entries;
    $names = $entries | Get-Member -MemberType NoteProperty | Select-Object -ExpandProperty Name;
    $desc = $entries | Select-Object -ExpandProperty $names
    $nestedStats = $desc.nestedStats;
    Write-Host ("--------------------------------------");
    Write-Host ("NAME                : $(Get-StatsDescription $nestedStats 'tmName')");
    Write-Host ("--------------------------------------");
    Write-Host ("DESTINATION         : $(Get-StatsDescription $nestedStats 'destination')");
    Write-Host ("AVAILABILITY STATE  : $(Get-StatsDescription $nestedStats 'status.availabilityState')");
    Write-Host ("ENABLED STATE       : $(Get-StatsDescription $nestedStats 'status.enabledState')");
    Write-Host ("REASON              : $(Get-StatsDescription $nestedStats 'status.statusReason')");
    Write-Host ("CLIENT BITS IN      : $(Get-StatsValue $nestedStats 'clientside.bitsIn')");
    Write-Host ("CLIENT BITS OUT     : $(Get-StatsValue $nestedStats 'clientside.bitsOut')");
    Write-Host ("CLIENT PACKETS IN   : $(Get-StatsValue $nestedStats 'clientside.pktsIn')");
    Write-Host ("CLIENT PACKETS OUT  : $(Get-StatsValue $nestedStats 'clientside.pktsOut')");
    Write-Host ("CURRENT CONNECTIONS : $(Get-StatsValue $nestedStats 'clientside.curConns')");
    Write-Host ("MAXIMUM CONNECTIONS : $(Get-StatsValue $nestedStats 'clientside.maxConns')");
    Write-Host ("TOTAL CONNECTIONS   : $(Get-StatsValue $nestedStats 'clientside.totConns')");
    Write-Host ("TOTAL REQUESTS      : $(Get-StatsValue $nestedStats 'totRequests')");
}

Python Virtual Server Stats

def get_virtual_list(bigip, url):
    try:
        vips = bigip.get("%s/ltm/virtual" % url).json()
        print "VIRTUAL SERVER LIST"
        print "  -------------------"
        for vip in vips['items']:
            print "    /%s/%s" % (vip['partition'], vip['name'])
    except Exception, e:
        print e

def get_virtual_stats(bigip, url, vip):
    try:
        vip_stats = bigip.get("%s/ltm/virtual/%s/stats" % (url, vip)).json()
        selflink = "https://localhost/mgmt/tm/ltm/virtual/%s/~Common~%s/stats" % (vip, vip)
        nested_stats = vip_stats['entries'][selflink]['nestedStats']['entries']
        print ''
        print '  --------------------------------------'
        print '  NAME                : %s' % nested_stats['tmName']['description']
        print '  --------------------------------------'
        print '  AVAILABILITY STATE  : %s' % nested_stats['status.availabilityState']['description']
        print '  ENABLED STATE       : %s' % nested_stats['status.enabledState']['description']
        print '  REASON              : %s' % nested_stats['status.statusReason']['description']
        print '  CLIENT BITS IN      : %s' % nested_stats['clientside.bitsIn']['value']
        print '  CLIENT BITS OUT     : %s' % nested_stats['clientside.bitsOut']['value']
        print '  CLIENT PACKETS IN   : %s' % nested_stats['clientside.pktsIn']['value']
        print '  CLIENT PACKETS OUT  : %s' % nested_stats['clientside.pktsOut']['value']
        print '  CURRENT CONNECTIONS : %s' % nested_stats['clientside.curConns']['value']
        print '  MAXIMUM CONNECTIONS : %s' % nested_stats['clientside.maxConns']['value']
        print '  TOTAL CONNECTIONS   : %s' % nested_stats['clientside.totConns']['value']
        print '  TOTAL REQUESTS      : %s' % nested_stats['totRequests']['value']
    except Exception, e:
        print e

Going Further

So now that you have scripts that will pull stats from the pools and virtual servers, what might you do to extend these? As an exercise in skill development, I'd suggest trying the tasks below. Note that there are dozens of existing articles on iControl that you can glean details from to help you in your quest. (A minimal sketch for the first two tasks appears after the resources note below.)

- Given that you can pull back a list of pools or virtual servers by not providing one specifically, iterate through the appropriate list and print stats for all of them instead of just one. Extend that even further by building a table of virtual servers and their respective stats.
- These stats are a point in time, and not terribly useful as a single entity. Create a timer where the stats are pulled back every 10 seconds or so and display the delta. Extend that even further by narrowing down to, say, the bits in and out, and graph them over time.

Resources

All the scripts for this article are available in the codeshare under "Getting Started with iControl Code Samples."
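If you want a head start on those first two exercises, here is one possible approach, written in the same Python 2 style as the samples above. This is only a sketch, not the codeshare solution: it assumes, as those samples do, that bigip is an authenticated requests session, that url is the base https://<host>/mgmt/tm URL, and that the pools live in the /Common partition (the selflink lookup in get_pool_stats() above hardcodes ~Common~).

import time

def get_all_pool_stats(bigip, url):
    # First exercise: pull the full pool collection once, then reuse
    # get_pool_stats() from above to print stats for every pool.
    pools = bigip.get("%s/ltm/pool" % url).json()
    for pool in pools['items']:
        get_pool_stats(bigip, url, pool['name'])

def poll_pool_deltas(bigip, url, pool, interval=10):
    # Second exercise: poll every 'interval' seconds and display the
    # change in the bits in/out counters rather than the raw totals.
    last = None
    while True:
        stats = bigip.get("%s/ltm/pool/%s/stats" % (url, pool)).json()
        # A single pool's stats response has exactly one entry, so we can
        # grab it without rebuilding the selflink key.
        entries = stats['entries'].values()[0]['nestedStats']['entries']
        current = (entries['serverside.bitsIn']['value'],
                   entries['serverside.bitsOut']['value'])
        if last is not None:
            print '  BITS IN  (per second) : %d' % ((current[0] - last[0]) / interval)
            print '  BITS OUT (per second) : %d' % ((current[1] - last[1]) / interval)
        last = current
        time.sleep(interval)

From there, accumulating those per-interval deltas into a list gives you exactly the data you would need to graph bits in and out over time.

Microservices and HTTP/2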
It's all about that architecture.

There's a lot we do to improve the performance of web and mobile applications. We use caching. We use compression. We offload security (SSL and TLS) to a proxy with greater compute capacity. We apply image optimization and minification to content. We do all that because performance is king. Failure to perform can be, for many businesses, equivalent to an outage, with increased abandonment rates and angry customers taking to the Internet to express their extreme displeasure.

The recently official HTTP/2 specification takes performance very seriously, and it introduced a variety of key components designed specifically to address the need for speed. One of these was to base the newest version of the Internet's lingua franca on SPDY. One of the impacts of this decision is that connections between the client (whether tethered or mobile) and the app (whether in the cloud or on big-iron) are limited to just one. One TCP connection per app.

That's a huge divergence from HTTP/1, where it was typical to open 2, 4 or 6 TCP connections per site in order to take advantage of broadband. And it worked, for the most part, because, well, broadband. So it wouldn't be a surprise if someone interpreted that one-connection-per-app limitation as a negative in terms of app performance.

There are, of course, a number of changes in the way HTTP/2 communicates over that single connection that ultimately should counteract any potential negative impact on performance from the reduction in TCP connections. The elimination of the overhead of multiple DNS lookups (not insignificant, by the way) and of TCP-related impacts from slow start and session setup, along with a more forgiving exchange of frames under the covers, is certainly a boon for application performance. The ability to push multiple responses to the client without having to play the HTTP acknowledgement game is significant in that it eliminates one of the biggest performance inhibitors of the web: latency arising from too many round trips. We've (as in the corporate We) seen gains of 2-3 times the performance of HTTP/1 with HTTP/2 during testing. And we aren't alone; there's plenty of performance testing going on out there, on the Internets, that is showing similar improvements.

Which is why it's important (very important) that we not undo all the gains of HTTP/2 with an architecture that mimics the behavior (and performance) of HTTP/1.

Domain Sharding and Microservices

Before we jump into microservices, we should review domain sharding, because the concept is important when we look at how microservices are actually consumed and delivered from an HTTP point of view. Scalability patterns (i.e. architectures) include the notion of Y-axis scale, which is a sharding-based pattern. That is, it creates individual scalability domains (or clusters, if you prefer) based on some identifiable characteristic in the request. User identification (often extricated from an HTTP cookie) and URL are commonly used information upon which to shard requests and distribute them to achieve greater scalability.

An incarnation of the Y-axis scaling pattern is domain sharding. Domain sharding, for the uninitiated, is the practice of distributing content across a variety of different host names within a domain. This technique was (and probably still is) very common as a way to overcome connection limitations imposed by HTTP/1 and its supporting browsers.
You can see evidence of domain sharding when a web site uses images.example.com and scripts.example.com and static.example.com to optimize page or application load time. Connection limitations were by host (origin server), not domain, so this technique was invaluable in achieving greater parallelization of data transfers, making it appear, at least, that pages were loading more quickly. Which made everyone happy.

Until mobile came along. Then we suddenly began to realize the detrimental impact of introducing all that extra latency (every connection requires a DNS lookup, a TCP handshake, and suffers the performance impacts of TCP slow start) on a device with much more limited processing (and network) capability. I'm not going to detail the impact; if you want to read about it in more detail I recommend material from Steve Souders and Tom Daly or Mobify on the subject. Suffice to say, domain sharding has an impact on mobile performance, and it is rarely a positive one.

You might think, well, HTTP/2 is coming and all that's behind us now. Except it isn't. Microservice architectures, in theory if not in practice, are ultimately a sharding-based application architecture that, if we're not careful, can translate into a domain sharding-based network architecture, which ultimately negates any of the performance gains realized by adopting HTTP/2. That means the architectural approach you (that's you, ops) adopt to delivering microservices can have a profound impact on the performance of applications composed from those services.

The danger is not that each service will be its own (isolated and localized) "domain"; that's the whole point of microservices in the first place. The danger is that those isolated domains will be presented to the outside world as individual, isolated domains, each requiring its own personal, private connection by clients. Even if we assume there are load balancing services in front of each service (a good assumption at this point), that still means direct connections between the client and each of the services used by the client application, because the load balancing service acts as a virtual service but does not eliminate the isolation. Each one is still its own "domain" in the sense that it requires a separate, dedicated TCP connection. This is essentially the same thing as domain sharding, as each host requires its own IP address to which the client can connect, and its behavior is counterproductive to HTTP/2*.

What we need to do to retain the benefits of a single, optimized TCP connection while still sharding the back end is to architect a different solution in the "big black box" that is the network. To be precise, we need to take advantage of the advanced capabilities of a proxy-based load balancing service rather than a simple load balancer.

An HTTP/2 Enabling Network Architecture for Microservices

That means we need to enable a single connection between the client and the server and then utilize capabilities like Y-axis sharding (content switching, L7 load balancing, etc...) in "the network" to maintain the performance benefits of HTTP/2 for the client while enabling all the operational and development benefits of a microservices architecture. What we can do is insert a layer 7 load balancer between the client and the local microservice load balancers; a minimal sketch of the routing decision that proxy makes follows.
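To make that routing decision concrete, here is a minimal, hypothetical sketch in Python (to match the code samples earlier in this series); it is not an F5 product configuration, and the service prefixes and backend addresses are invented for illustration. In practice this logic lives in the proxy's content switching policy (an iRule, for example) rather than in application code.

import itertools

# Each pool stands in for a local, domain load balancing service
# fronting one microservice; the addresses are purely illustrative.
POOLS = {
    "/api/users":   itertools.cycle(["10.0.1.10:80", "10.0.1.11:80"]),
    "/api/catalog": itertools.cycle(["10.0.2.10:80"]),
    "/api/orders":  itertools.cycle(["10.0.3.10:80", "10.0.3.11:80"]),
}
DEFAULT_POOL = itertools.cycle(["10.0.9.10:80"])

def route(path):
    # Longest-prefix match on the URL (the "noun" of a RESTful API call),
    # then simple round-robin within the selected service pool.
    for prefix in sorted(POOLS, key=len, reverse=True):
        if path.startswith(prefix):
            return next(POOLS[prefix])
    return next(DEFAULT_POOL)

# Every request arrives over the client's single HTTP/2 connection;
# the layer 7 proxy fans them out to the appropriate service pool:
for path in ["/api/users/42", "/api/orders/7/items", "/static/app.js"]:
    print("%s -> %s" % (path, route(path)))

The same decision could key off a persistence cookie or a user ID extracted from the request just as easily as the URL.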
On the client side there is a single connection, in the manner specified (and preferred) by HTTP/2: only one DNS lookup, one TCP session setup, and the penalties of TCP slow start incurred only once. On the service side, the layer 7 load balancer also maintains persistent connections to the local, domain load balancing services, which further reduces the impact of session management on performance. Each of the local, domain load balancing services can be optimized to best distribute requests for its service; each maintains its own algorithm and monitoring configuration, unique to that service, to ensure optimal performance.

This architecture is only minimally different from the default, but the insertion of a layer 7 load balancer capable of routing application requests based on a variety of HTTP variables (such as the cookies used for persistence or to extract user IDs, or the unique verb or noun associated with a service from the URL of a RESTful API call) results in a network architecture that closely maintains the intention of HTTP/2 without requiring significant changes to a microservice-based application architecture.

Essentially, we're combining X- and Y-axis scalability patterns into a collaborative operational architecture capable of scaling and supporting microservices without compromising on the technical aspects of HTTP/2 that were introduced to improve performance, particularly for mobile applications. Technically speaking we're still doing sharding, but we're doing it inside the network and without breaking the one-TCP-connection-per-app model specified by HTTP/2. Which means you get the best of both worlds: performance and efficiency.

Why DevOps Matters

The impact of new architectures - like microservices - on the network and the resources (infrastructure) that deliver those services is not always evident to developers, or even to ops. That's one of the reasons DevOps as a cultural force within IT is critical: it encourages breaking down the isolated silos between the ops groups that exist (all four of them) and enables greater collaboration, which leads not only to more efficient deployment but to more efficient implementations. Implementations that don't cause the kind of performance problems that require disruptive modification to applications or services. Collaboration in the design and architectural phases will go a long way toward improving not only the efficacy of the deployment pipeline but also the performance and efficiency of applications across the entire operational spectrum.

* It's not good for HTTP/1, either, as in this scenario there is essentially no difference** between HTTP/1 and HTTP/2.

** In terms of network impact. HTTP/2 still receives benefits from its native header compression and other performance features.