In the last installment, we took a look at some programming fundamentals, with the hope of getting everyone on the same page so that we could start building towards some more advanced concepts. To do so, however, we would be remiss if we didn't look at both sides of the equation. You see, when looking to write iRules, the programming side of things is only half of the picture. This is one of the things that makes iRules so unique: it is truly a network-aware programming language, and as such you need to understand not only the fundamentals of programming, but also how F5 devices work, what options you have within the product, the terminology we use, and so on.
Let's face it, if I hand you a bucket full of iRules commands and tell you to go solve a problem, you aren't going to get very far if you don't know what a pool is, what to call a client IP address, or how to make use of a VIP. Consider the last article a starter's guide for programming basics to help get the non-programmers up to speed. This installment of #The101: iRules, then, is intended as a primer for non-networking (or at least non-F5-aware) users. iRules is an amazingly powerful language, but it is only as powerful as the abilities of the person doing the coding, and those abilities start squarely with an understanding of what the device you're programming is capable of.
Using the same format as before, let's take a look at a glossary of basic terms and technologies that come up often when talking about or working with F5 technology.
We'll take each of these concepts in turn and give a brief overview to illuminate what they mean in the context of F5 devices. For more detailed information as it relates to specific versions of TMOS, head over to MyF5.
A Virtual IP, or VIP, also known as a Virtual Server, is a key component in any BIG-IP configuration. It's usually the starting point when people are thinking about building a configuration for a given application. The VIP is the destination (a combination of IP and port) to which requests are sent when bound for whatever application lives behind the BIG-IP. For instance, if you have a server hosting your web application behind an F5 device, it would no longer have a public-facing internet address. Instead you would assign that public address to the BIG-IP in the form of a VIP, with whichever accompanying port you are expecting traffic on, likely 80, 443 or all (*) in this case. You would end up with a VIP on the front (or "client side") of your BIG-IP that directs traffic to the server(s) on the back end. The VIP is integral: it is where all traffic is directed from the outside, where profiles and other configuration options are defined, and much more.
It is not uncommon for application configurations to make use of multiple VIPs, especially when they receive traffic on multiple ports, or when they need multiple profiles for one reason or another (perhaps some requests use a client SSL certificate and others don't). So it is important to remember that one VIP does not necessarily mean one application. A VIP is a configuration object on the BIG-IP that ties together a destination IP:port combination and processes traffic for that combination. Whether it is to route to a back-end server, redirect elsewhere, deny, discard, inspect, or simply log information about said traffic, there is a near limitless number of things you can do with the traffic once it enters the F5 device. To get it there, however, you'll need a VIP.
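As a small taste of what "doing something with the traffic" looks like, here is a minimal iRule of the kind you might attach to a VIP. It simply logs each new client connection; the event and commands are standard iRules, though logging every connection is just for illustration:

```tcl
# Attached to a VIP; CLIENT_ACCEPTED fires once per new client connection.
when CLIENT_ACCEPTED {
    # local0. writes to the LTM log on the BIG-IP host OS
    log local0. "New connection from [IP::client_addr]:[TCP::client_port]"
}
```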
A pool, in the simplest terms I can muster, is a grouping of servers. Like a VIP, a pool is an integral BIG-IP configuration object. This one, however, sits effectively one step lower in the configuration stack: generally speaking, you must have a VIP in place to allow traffic into your F5 device, and only once traffic is there do pools become relevant. A pool is a collection of one or more servers, referred to as members, which we will get into in a moment. The pool is where the desired type of load balancing is chosen, where options such as rate limiting and the like are applied, and where one of the most important of the many layers of monitoring in an application's stack within a BIG-IP takes place.
Monitoring at the pool level is important because it gives you a clear representation of which groups of servers are or aren't available at a given time. Each VIP has the option of selecting a default pool, but it is also possible to direct traffic to another pool should the primary pool be unavailable. In some configurations there isn't a default pool at all; instead a pool is chosen based on criteria gleaned from the connection once it is in place. Regardless, pools are where the servers hosting the application live, and as such they are a crucial part of any deployment.
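Choosing a pool based on criteria from the connection is one of the most common things iRules are used for. A sketch of what that looks like, using the real `pool` command but with hypothetical pool and host names:

```tcl
# Route requests to a pool based on the Host header.
# "api_pool" and "api.example.com" are placeholders, not real config.
when HTTP_REQUEST {
    if { [string tolower [HTTP::host]] eq "api.example.com" } {
        pool api_pool
    }
    # Anything else falls through to the VIP's default pool, if one is set.
}
```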
A member is one of the servers associated with a given pool. You will hear the term "pool member" often, and this is why: it refers to one of the particular servers associated with the designated pool. Pool members play an important role, obviously, because they represent the actual servers themselves in your configuration. The combination of a VIP, pool and pool members makes up the overall, general structure of a basic application stack within a BIG-IP. There are thousands of permutations and possible options of course, but this is the most basic, generic view, and it is an important starting point. Pool members can have many options toggled on them as well, in addition to the configuration options already inherited from the pool or VIP upstream in the config hierarchy.
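iRules can even target a specific member inside a pool, bypassing the load balancing decision for that request. The `pool ... member` syntax is real; the pool name, path, and address here are hypothetical:

```tcl
# Pin one class of traffic to a specific member of web_pool.
# web_pool, /admin and 10.10.10.5:8080 are illustrative placeholders.
when HTTP_REQUEST {
    if { [HTTP::uri] starts_with "/admin" } {
        pool web_pool member 10.10.10.5 8080
    }
}
```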
So again, generally speaking, traffic comes in destined for a particular VIP. That VIP routes the traffic to a given pool, based on either the default pool selected for the VIP or some other criteria, perhaps an iRule. A load balancing decision is then made within the pool, based on the currently available members and the selected load balancing algorithm, and traffic is finally directed to a pool member, which is the final destination (i.e., the server) that will process the request and respond accordingly.
What if you have a VIP, but don't want to route traffic to a member of one of the pools on your BIG-IP? What if instead you would rather route traffic to a particular server via IP address, regardless of whether it is part of your configuration, lives behind your BIG-IP, or is a member of a pool? This is the concept of a node. A node is any destination IP to which you would like to direct traffic. It is not treated the same as a pool: no load balancing decision is made, no monitoring is done; it is rather a "fire and forget" type of action. It has many uses, but is not generally recommended for the majority of a normal application's traffic flow. Still, the term will come up in the iRules world, as there is a specific command for sending traffic to a node from within an iRule, so it's important to understand what one is.
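That specific command is, fittingly, `node`. A minimal sketch, with a hypothetical path and destination address:

```tcl
# Send certain requests straight to an IP:port -- no pool, no load
# balancing, no monitoring. /legacy and 10.20.30.40:8080 are placeholders.
when HTTP_REQUEST {
    if { [HTTP::uri] starts_with "/legacy" } {
        node 10.20.30.40 8080
    }
}
```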
A profile is the heart of much of the processing done for each session that is established and flows through the BIG-IP. A profile is applied to a VIP and dictates what type of traffic is expected (TCP or UDP), whether or not SSL offloading will be done on the client side and which SSL profile is used if so, whether a particular protocol such as HTTP or SIP will be used, whether OneConnect will be enabled, and much, much more. If the VIP is the destination for your application traffic, think of the profiles attached to that VIP as the control center that tags, inspects, interprets and categorizes the traffic once it arrives.
A profile is essential for many reasons, not the least of which is that, from an iRules perspective, several commands are only available on a VIP that has certain profiles applied. For instance, the popular and often-used HTTP commands are only available on VIPs that have an HTTP profile applied. This is because the profile applied to the VIP does a large amount of processing, as I've said, and the commands rely on the data that results from that processing and interrogation.
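To make that concrete, the commands below all depend on the HTTP profile having parsed the request; without an HTTP profile on the VIP, the HTTP_REQUEST event never fires and these commands aren't usable:

```tcl
# Requires an HTTP profile on the VIP: the profile parses the request
# so that HTTP::method, HTTP::host and HTTP::uri have data to return.
when HTTP_REQUEST {
    log local0. "Method: [HTTP::method] Host: [HTTP::host] URI: [HTTP::uri]"
}
```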
There are many different kinds of profiles in F5 technology: SSL profiles, authentication profiles, protocol profiles and more. The concept is similar for all of them, with different nuances and outcomes. Most often when dealing with iRules you'll need to be sure you're using the appropriate protocol profile (such as HTTP) and applying an SSL profile when required, if you're trying to inspect and process SSL traffic. To do the latter you must terminate SSL at the BIG-IP, which is done by applying a ClientSSL profile to the VIP in question.
Unlike the terms in this glossary up to this point, client side is not an actual configuration object. It is instead a concept that I feel is important to describe for those looking to gain a level of comfort with F5 technology, and iRules in particular. The BIG-IP is a full proxy architecture, which means there are separate IP stacks for each side of the proxy, client side and server side. As traffic progresses through the BIG-IP, at some point it is transferred from one stack to the other, and vice versa on the responses from the server to the client.
This is important because different profiles, configuration objects and iRules commands are only available, or function differently, in different contexts. It is important to know whether you need to affect client side traffic or server side traffic to accomplish what you're looking to do. This is a term you will hear thrown around quite a bit as you delve deeper into working with F5 technology and particularly iRules, and the simplest way to describe it is "anything that occurs on the client side of the proxy architecture is in the client side context".
Server side, as you can likely deduce, is the exact opposite of the above. It is a term used to describe something occurring on the server side of the proxy architecture. This is usually the responses from application servers, but keep in mind that client side and server side depend entirely on which side of the proxy initiates the transaction. If you have a server in a pool reaching out through your BIG-IP to perform an action of its own accord, that server is now the “client” for the purposes of the proxy discussion, and as such it is on the client side of the transaction, despite it being a server itself.
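In iRules terms, context usually shows up through which event you're in. HTTP_REQUEST fires as the request arrives on the client side of the proxy, while HTTP_RESPONSE fires as the response comes back on the server side, and some commands only make sense in one context or the other:

```tcl
# HTTP_REQUEST runs in the client side context...
when HTTP_REQUEST {
    log local0. "Client side: request from [IP::client_addr]"
}
# ...while HTTP_RESPONSE runs in the server side context.
when HTTP_RESPONSE {
    log local0. "Server side: response from [IP::server_addr] status [HTTP::status]"
}
```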
Someone could easily write a several-thousand-page paper on the TMM in and of itself, so I will not attempt to discuss it in detail; I only want to ensure that when the term is referenced it is not completely foreign. TMM is the Traffic Management Microkernel, the custom kernel that F5 developed specifically to handle traffic processing and routing. It is designed from the ground up to be high performance, reliable, and flexible. The TMM does all of the actual traffic processing on any BIG-IP. Whether it is an iRule being executed, a profile inspecting traffic as it comes through a VIP, or just about anything else that touches the traffic as it traverses an F5 device, it happens within the TMM. The TMM lives separately from the "host OS", which is likely the only other important thing to know about TMM at this point. The host OS handles things such as syslog, sshd, httpd, etc., leaving the TMM free to do what it is best at: processing traffic in absurdly high volumes.
Again, there are papers that already depict CMP in a far more thorough and articulate manner than I could hope to achieve, so I will give the primer version to elucidate the very basics of the concept. CMP is "Clustered Multi-Processing", F5's proprietary way of dealing with multi-core devices. In essence, in a very rough sense, each core in a device is assigned its own individual TMM (see above) to handle processing of traffic for that core. A custom disaggregator (DAG) built into the system then decides which TMM, and as such which core, to send traffic to for processing. In this way F5 is able to achieve massively linear scalability on multi-CPU, multi-core systems. That barely skims the surface of this technology, but when discussing iRules it is important to know that CMP is a good thing, and breaking or demoting from CMP is, generally speaking, bad. Hopefully this explains why that is, and what CMP is at a basic level.
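One classic example of CMP demotion from the iRules world, as I understand it on the TMOS versions that introduced the feature: global variables in the `::` namespace force all traffic for that VIP onto a single TMM, while `static::` variables set at load time remain CMP-compatible. A hedged sketch:

```tcl
when RULE_INIT {
    # CMP-compatible: static:: variables are set once at load time
    # and treated as read-only across TMMs.
    set static::maintenance_uri "/maint.html"

    # set ::request_counter 0   ;# a ::global like this would demote the
                                 # VIP from CMP on the relevant TMOS versions
}
```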
Armed with a basic understanding of these concepts, I am hopeful that the world of F5 and iRules will be far easier to understand for those who may not have been exposed to it before. Now that everyone has a solid basis in both programming and F5 concepts, we will move forward with discussing iRules in particular. In the next article we will delve into iRules as a technology: how it works, why it exists, how to make use of an iRule and more.