iRules Concepts: Considering Context part 1
Language is a funny thing. It's so easily misunderstood, despite the many rules in place to govern its use. This is partly due to dialects and slang, of course, but context shouldn't be underestimated as a piece of understanding linguistics. A perfect example of context confusing the intent of a statement is the ever-present blurbs we've all seen accompanying movie trailers. "Riveting!" claims one trailer; "a masterpiece of filmmaking..." boasts another, when in reality the full quotes may have been "This waste of space is far from riveting!" and "Not even a blind man could consider this a masterpiece of filmmaking." I think you'll agree that the quotes, taken out of context, have a quite different and unintended meaning compared to the originals. It's clear, then, that context can swiftly dictate the very meaning of a statement and determine whether the intended result is expressed.

When programming we have similar, if not more stringent, rules and guides for the way in which we express ourselves. Yet still, context plays a massive role in how we are able to achieve our goals. This is especially true on a system that has multiple contexts of its own, apart from the languages used to facilitate its functions. The BIG-IP, being a full proxy platform, inherently has two contexts: clientside and serverside. With two separate, fully functional TCP stacks, this context is very much enforced. As a matter of course your connections through the BIG-IP bounce from one stack to the other, and therefore from one context to the other, though you wouldn't know it if you didn't look closely. This duality plays into every piece of iRules code ever written, whether the author is conscious of it or not. The event in which you choose to execute your code, the particular commands you're running, even the results some of those commands may return, all are contextually influenced at some level. On top of clientside vs. serverside there are other, less obvious contexts to consider when iRuling, such as whether a piece of information you're collecting, or a logic structure you're trying to apply, is session or request based. It seems prudent, then, to discuss context a bit more and give some examples of how it works within iRules and how to bend it to your whim. So what are the different contexts that we will cover?

Clientside
Serverside
Modifiers & Special Cases
Connection based
Request based

What does being in one of these contexts really mean, though? For clientside and serverside, at least, it means that certain pieces of information have been interrogated from the session by the profile applied. For instance, the HTTP profile, which is a client profile, automatically gathers and stores the HTTP host and URI, along with a wealth of other information, for the related commands. This means that if you want access to the commands that make use of that data, in this case HTTP::host and HTTP::uri, you must have that profile applied and parsing the traffic in question. Since that is a client profile, it is obviously only going to be applied to the clientside traffic processed by the VIP. This makes those commands clientside commands by default, as they won't function without that client profile applied. Keep in mind that "clientside" and "serverside" don't necessarily indicate directional flow into or out of a DMZ, or even that a server exists at all. For all the BIG-IP cares, it might be forwarding information from one client to another.
For an internal VIP that a server initiates connections through, the "clientside" context may very well be in the DMZ, with a server initiating things. For our purposes, clientside simply means the side on which the connection was initiated by the "client", and serverside indicates the side on which the connection was initiated by the LTM, out of necessity, to pass the traffic to a "server". This diagram may help clarify: in the illustration above (big thanks to Jason Rahm for the Visio wizardry) you can get a clear picture of what I mean when describing the individual stacks and contexts.

Clientside

Given that it is, generally speaking, the most widely used context in iRules development, we'll start with clientside operations. These are actions that take place on the data passed between the client and the BIG-IP. Whenever a request inbound from the client is terminated on the BIG-IP, any iRules executed against the data in transit run in the clientside context until just after the load balancing decision is made, immediately before the data leaves the BIG-IP bound for the selected server. This includes things like URI manipulation, authentication requests, custom error page hosting, client request manipulation via the STREAM profile, and much more. A few common clientside events and commands:

CLIENT_ACCEPTED
HTTP_REQUEST
CLIENT_DATA
HTTP::host
HTTP::uri
TCP::client_port

As you can see, these events and commands are common, highly leveraged tools in the iRules world. Whenever commands are issued under these events they are assumed to be in the clientside context unless otherwise specified, and these commands either assume or require clientside context to function. You can't make decisions based on the URI in a serverside context because, as far as the serverside TCP stack is concerned, it has no idea what an HTTP::uri is.

Serverside

The opposite of clientside, serverside context should be largely self-explanatory if you're following along. Any connection initiated from the BIG-IP, bound for the destination server, is a serverside connection, with an appropriately matching context for any iRules commands issued there. Most profiles on the BIG-IP, such as HTTP, SMTP, and DNS, largely function on the client side of the connection. As such, many of the pieces of information retrieved from the client connection via those profiles will not be directly available in a serverside context. If you want to access things such as the HTTP::uri in the serverside context, you'll need to either set a local variable (sketched just below) or use a modifier command, which I'll explain next. A few common serverside events and commands:

SERVER_CLOSED
SERVER_CONNECTED
HTTP_RESPONSE
ONECONNECT::reuse

As you can see, these are all events and commands that execute on the serverside connection. They affect the data in flight between the BIG-IP and the server(s). There are fewer commands and events that are specifically serverside in context, as much of the inspection and altering of connection information happens on the clientside. This doesn't mean serverside events or commands aren't highly useful, and in many cases required. Just as you can't inspect the HTTP::uri directly in a serverside event, you also can't determine when a server's connection has closed from a clientside event. Depending on what you're doing, there is plenty to be done with serverside events and commands.
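Here's a minimal sketch of the variable approach mentioned above: capture a clientside value so a serverside event can use it. The log format is illustrative only, not a prescribed pattern.

when HTTP_REQUEST {
    # Clientside context: the HTTP profile has parsed the request,
    # so HTTP::host and HTTP::uri are available here.
    set uri [HTTP::uri]
}
when HTTP_RESPONSE {
    # Serverside context: HTTP::uri is not directly available, but the
    # variable captured on the clientside persists for this connection.
    log local0. "[IP::server_addr] answered $uri with status [HTTP::status]"
}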
Modifiers & Special Cases

In some cases there's a need for a particular piece of info that you just can't retrieve, at the time you want it, in the context you're in. One of the iRules Challenges had a requirement like this built in, wherein the challengees had to make use of the HTTP URI in a serverside context. That leaves one of two choices: either set a local variable (a perfectly viable and functional option, though best avoided when possible) or use a context modifier. The clientside and serverside commands within iRules are precisely that. The two modifiers are:

clientside
serverside

If you want to, as in this case, access the HTTP URI within a serverside context, you'll need to make use of the clientside command, along with a particularly tricky event, HTTP_REQUEST_SEND. This is the last event that occurs for an HTTP session on the BIG-IP before the data is sent from the BIG-IP to the server. That means the LB decision is made, a server is picked, etc. Why would you need to do this, you ask? Perhaps you want to require that certain URIs are only allowed to specific back-end servers. To get both of those pieces of data together in one iRule, you must combine a piece of clientside data and a piece of serverside data. So you're either heading to variable city, or you're making use of one of the modifier commands.

There are also a few special commands within iRules that have the context built into them, such as:

IP::client_addr
TCP::client_port
IP::server_addr
TCP::server_port

These commands have their context explicitly set. IP::client_addr, for instance, is equivalent to [clientside [IP::remote_addr]], whereas IP::server_addr is equivalent to [serverside [IP::remote_addr]]. They are statically tied to the context they were built for and can't be forced out of it, but as such they also don't require a modifier when used outside their native context. Meaning you could use IP::client_addr inside a serverside event, without specifying the clientside command along with it, and achieve the same result as you would from within a clientside event. There aren't many commands like this in iRules; they were built largely to simplify certain repeated tasks or to compensate for old v4.x commands. These are the most common examples. As you can see from my iRules Challenge solution, there are sometimes solid reasons for doing this:

when CLIENT_ACCEPTED {
    # Initialize first so $secure is always set when HTTP_REQUEST_SEND fires
    set secure 0
    if { ([IP::addr [IP::client_addr] equals 10.10.10.0/28]) || ([class match [RESOLV::lookup -ptr [IP::client_addr]] eq authed_hosts]) } {
        set secure 1
    }
}
when HTTP_REQUEST_SEND {
    if { !($secure) } {
        if { ([class match [IP::server_addr] eq secure_servers]) && ([class match [clientside {HTTP::uri}] eq uris]) } {
            drop
            log 172.27.10.10 local0.info "Bad request from [IP::client_addr] to [IP::server_addr] requesting [clientside {HTTP::uri}]"
        }
    }
}

That pretty much covers clientside vs. serverside. In Part 2 I'll cover connection vs. request based contexts and how to work with them within iRules to increase performance and scalability.
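As a quick postscript on the context-bound commands above, here's a minimal sketch (log lines are illustrative only) showing them alongside their modifier equivalents from within a serverside event:

when SERVER_CONNECTED {
    # Serverside event: IP::remote_addr refers to the server here.
    log local0. "server: [IP::server_addr], same as [IP::remote_addr] in this event"
    # IP::client_addr carries its clientside context with it, no modifier
    # needed; the explicit modifier form should return the same value.
    log local0. "client: [IP::client_addr], same as [clientside {IP::remote_addr}]"
}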
F5 Animated Whiteboards

F5 Animated Whiteboards offer a quick, visual story about some of today's technology challenges and how we can help solve those headaches. Our latest covers Next Generation IPS and how SSL encryption of applications is becoming the norm. SSL ensures the integrity and privacy of transactions, but poses a visibility problem for IPS systems. This lack of visibility means attackers can evade IPS by encrypting their transmissions within SSL. Learn how F5's intelligent ADC combined with next-generation IPS provides a solution that eliminates dangerous blind spots.

ps

Other F5 Animated Whiteboards:
F5 Enterprise Mobility Gateway Animated Whiteboard
Intelligent DNS Animated Whiteboard
F5 VoLTE Animated Whiteboard
F5 Secure Web Gateway Whiteboard
F5 DDoS Animated Whiteboard
SSL Animated Whiteboard
Application Availability Between Hybrid Data Centers
Multi-Tenancy for the High Performance Services Fabric Animated Whiteboard

Technorati Tags: f5,big-ip,security,whiteboard,video,synthesis,silva
Building an elastic environment requires elastic infrastructure

One of the reasons some folks push for infrastructure as virtual appliances is the on-demand nature of a virtualized environment. When network and application delivery infrastructure hits capacity in terms of throughput - regardless of the layer of the application stack at which it happens - it's frustrating to think you might need to upgrade the hardware rather than just add more compute power via a virtual image. The truth is that this makes sense. The infrastructure supporting a virtualized environment should be elastic. It should be able to dynamically expand without requiring a new network architecture, a higher performing platform, or new configuration. You should be able to just add more compute resources and walk away. The good news is that this is possible today. It just requires that you consider your choices in network and application network infrastructure carefully when you build out your virtualized infrastructure.

ELASTIC APPLICATION DELIVERY INFRASTRUCTURE

Last year F5 introduced VIPRION, an elastic, dynamic application delivery platform capable of expanding capacity without requiring any changes to the infrastructure. VIPRION is a chassis-based, bladed application delivery controller, and its bladed system behaves much the way a virtualized equivalent would. Say you start with one blade in the system, and soon after you discover you need more throughput and more processing power. Rather than bring online a new virtual image of such an appliance to increase capacity, you add a blade to the system and voila! VIPRION immediately recognizes the blade and simply adds it to its pools of processing power and capacity. There's no need to reconfigure anything; VIPRION essentially treats each blade like a virtual image and automatically distributes requests and traffic across the network and application delivery capacity available on the blade - just like a virtual appliance model would, but without concern for the reliability and security of the platform.

Traditional application delivery controllers can also be scaled out horizontally to provide similar functionality and behavior. By deploying additional application delivery controllers in what is often called an active-active model, you can rapidly deploy and synchronize the configuration of the master system to add more throughput and capacity. Meshed deployments comprising more than a pair of application delivery controllers can also provide additional network compute resources beyond what a single system offers. The latter option (the traditional scaling model) requires more work to deploy than the former (VIPRION), simply because it requires additional hardware and all the overhead of such a solution. The elastic option, with bladed, chassis-based hardware, is really the best option in terms of elasticity and the ability to grow on demand as your infrastructure needs increase over time.

ELASTIC STORAGE INFRASTRUCTURE

Often overlooked in the network diagrams detailing virtualized infrastructures is the storage layer. The increase in storage needs in a virtualized environment can be overwhelming, as there is a need to standardize the storage access layer so that virtual images of applications can be deployed in a common, unified way regardless of which server they need to execute on at any given time. This means a shared, unified storage layer on which to store images that are necessarily large. This unified storage layer must also be expandable.
As more applications and associated images are made available, storage needs increase. What's needed is a system in which additional storage can be added in a non-disruptive manner. If you have to modify the automation and orchestration systems driving your virtualized environment whenever storage is added, you've lost some of the benefits of a virtualized storage infrastructure. F5's ARX series of storage virtualization provides that layer of unified storage infrastructure. By normalizing the namespaces through which files (images) are accessed, the systems driving a virtualized environment can be assured that images are available via the same access method regardless of where the file or image is physically located. Virtualized storage infrastructure systems are dynamic; additional storage can be added to the infrastructure and "plugged in" to the global namespace to increase the storage available in a non-disruptive manner.

An intelligent virtualized storage infrastructure can further improve the efficiency of the available storage by tiering it. Images and files accessed more frequently can be stored on fast, tier-one storage so they load and execute more quickly, while less frequently accessed files and images can be moved to less expensive and perhaps less performant storage systems.

By deploying elastic application delivery network infrastructure instead of virtual appliances, you maintain stability, reliability, security, and performance across your virtualized environment. Elastic application delivery network infrastructure is already dynamic, and offers a variety of options for integration into automation and orchestration systems via standards-based control planes, many of which are nearly turn-key solutions. The reasons some folks desire a virtual appliance model for their application delivery network infrastructure are valid. But the reality is that the elasticity and on-demand capacity offered by a virtual appliance are already available today in proven, reliable hardware solutions that do not require sacrificing performance, security, or flexibility.

Related articles by Zemanta:
How to instrument your Java EE applications for a virtualized environment
Storage Virtualization Fundamentals
Automating scalability and high availability services
Building a Cloudbursting Capable Infrastructure
EMC unveils Atmos cloud offering
Are you (and your infrastructure) ready for virtualization?
In 5 Minutes or Less - PCoIP Proxy for VMware Horizon View

In this special Contestant Edition of In 5 Minutes or Less, I welcome Paul Pindell, F5 Solution Architect, as the first contestant to see if he can beat the clock. Paul shows how to configure BIG-IP APM to natively support VMware's PCoIP for the Horizon View Client. BIG-IP APM offers full proxy support for the PC-over-IP (PCoIP) protocol. F5 is the first to provide this functionality, which allows organizations to simplify their VMware Horizon View architectures. Combining PCoIP proxy with the power of the BIG-IP platform delivers hardened security and increased scalability for end-user computing. In addition to PCoIP, F5 supports a number of other VDI solutions, giving customers flexibility in designing and deploying their network infrastructure.

ps

Related:
F5 Friday: Simple, Scalable and Secure PCoIP for VMware Horizon View
Inside Look - PCoIP Proxy for VMware Horizon View
Solutions for VMware applications
F5 BIG-IQ Cloud integration with VMware Solutions
VMware Horizon View
F5 for VMware View
F5's YouTube Channel
In 5 Minutes or Less Series (24 videos - over 2 hours of In 5 Fun)
Inside Look Series
Life@F5 Series

Technorati Tags: vdi,PCoIP,VMware,Access,Applications,Infrastructure,Performance,Security,Virtualization,silva,video,inside look,big-ip,apm
Inside Look - SAML Federation with BIG-IP APM

I get an Inside Look at BIG-IP's new SAML federation functionality in v11.3 with Senior Security Solution Architect Gary Zaleski. We cover BIG-IP as a SAML Service Provider (SP) and as a SAML Identity Provider (IdP). Watch how users can easily connect to Salesforce, SharePoint, Office365, and Google. Solving substantiation with SAML.

ps

Related:
Solving Substantiation with SAML - White Paper (pdf)
In 5 Minutes or Less: BIG-IP Advanced Firewall Manager
Inside Look - BIG-IP Advanced Firewall Manager
BIG-IP Access Policy Manager Overview (pdf)
F5 Enhances Application Delivery Security with the World's Fastest Firewall
F5's YouTube Channel
In 5 Minutes or Less Series (23 videos - over 2 hours of In 5 Fun)
Inside Look Series
Life@F5

Technorati Tags: saml,authentication,f5,apm,cloud computing,big-ip,silva,video,inside look,federation
Cloud Computing: Vertical Scalability is Still Your Problem

Horizontal scalability achieved through the implementation of a load balancing solution is easy. It's vertical scalability that has always been, and remains, difficult to achieve, and it's even more important in a cloud computing or virtualized environment because now it can hurt you where it counts: the bottom line.

Horizontal scalability is the ability of an application to be scaled up to meet demand through replication and the distribution of requests across a pool or farm of servers. It's the traditional load balanced model, and it's an integral component of cloud computing environments. Vertical scalability is the ability of an application to scale under load: to maintain performance levels as the number of concurrent requests increases. While load balancing solutions can certainly assist in optimizing the environment in which an application needs to scale (by reducing overhead that can negatively impact performance, such as TCP session management, SSL operations, and compression/caching functionality), they can't solve the core problems that prevent vertical scalability.

The problem is that a single database table or SQL query that is poorly constructed can destroy vertical scalability and actually increase the cost of deploying in the cloud. Because you generally pay on a resource basis, if the application isn't scaling up well it will require more resources to maintain performance levels and thus cost a lot more. Cloud computing isn't going to magically optimize code or database queries, or design database tables with performance in mind; that's still squarely in the hands of the developers, regardless of whether cloud computing is used as the deployment model. The issue of vertical scalability is very important when considering the use of cloud computing because you're often charged based on compute resources used, much like the old mainframe model. If an application doesn't scale vertically, it's going to increase the costs of running in the cloud.

Cloud computing providers can't address vertical scalability issues, and probably wouldn't if they could (it makes them money, after all), because those issues are peculiar to the application. No external solution can optimize code such that the application will magically scale up vertically. External solutions can improve overall performance, certainly, by optimizing protocols, reducing protocol and application overhead, and reducing bandwidth requirements, but they can't dig into the application code and rearrange the order in which joins are performed inside an SQL query, rewrite a particularly poorly written loop, or refactor code to use a more efficient data structure. Vertical scalability, whether the application is deployed inside the local data center or out there in the cloud, is still the domain of the application developer. While developers can certainly take advantage of technologies like network-side scripting and inherent features in application delivery solutions to assist with vertical scalability, there is a limit to what external solutions can do to address the root cause of an application's failure to scale vertically.

Improving the vertical scalability of applications is important in achieving the cost-reduction benefits associated with cloud computing and virtualization. Applications that fail to scale well vertically may end up costing more when deployed in the cloud because of the additional compute resources required as demand increases.
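To make the cost argument concrete, here's a back-of-the-envelope model; the numbers are illustrative assumptions, not benchmarks:

\text{instances needed} = \left\lceil \frac{\text{peak concurrent requests}}{\text{per-instance sustainable requests at target response time}} \right\rceil

If poor vertical scalability cuts the load a single instance can sustain from 1,000 to 500 concurrent requests, a 10,000-request peak now requires 20 instances instead of 10, roughly doubling the hourly compute bill even though demand never changed.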
Things you can do to improve vertical scalability:

Optimize SQL / database queries
Take advantage of the offload capabilities of available application delivery solutions (see the sketch below)
Be aware of the performance and scalability impact of decomposing applications into too finely grained services
Remember that API usage will impact vertical scalability
Understand the bottlenecks associated with the programming language(s) used, and address them

Cloud computing and virtualization can certainly address vertical scalability limitations by using horizontal scaling techniques to ensure capacity meets demand and performance level agreements are met. But doing so may cost you dearly and eliminate many of the financial incentives that led you to adopt cloud computing or virtualization in the first place.
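One concrete way to act on the offload item above is to let the delivery tier handle caching and compression, so application instances spend their cycles on dynamic work. A minimal, hedged iRules sketch; the file extensions and policy are illustrative assumptions, not a definitive configuration:

when HTTP_REQUEST {
    # Offload static content handling to BIG-IP: cache and compress
    # style sheets and scripts, leaving dynamic responses untouched.
    if { [HTTP::uri] ends_with ".css" || [HTTP::uri] ends_with ".js" } {
        CACHE::enable
        COMPRESS::enable
    } else {
        CACHE::disable
    }
}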
Related articles by Zemanta:
Infrastructure 2.0: The Diseconomy of Scale Virus
Why you should not use clustering to scale an application
Top 10 Concepts That Every Software Engineer Should Know
Can Today's Hardware Handle the Cloud?
Vendors air the cloud's pros and cons
Twitter and the Architectural Challenges of Life Streaming Applications

Lightboard Lessons: SSO to Legacy Web Applications

IT organizations have a simple goal: make it easy for workers to access all their work applications from any device. But that simple goal becomes complicated when new apps and old, legacy applications do not authenticate in the same way. In this Lightboard Lesson, I draw out how VMware and F5 help remove these complexities and enable productive, any-device app access. By enabling secure SSO to Kerberos constrained delegation (KCD) and header-based authentication apps, VMware Workspace ONE and F5 BIG-IP APM help workers securely access all the apps they need (mobile, cloud, and legacy) on any device, anywhere.

ps

Related:
Single Sign-On (SSO) to Legacy Apps Using BIG-IP & VMware Workspace ONE
Simple & Secure Access to Legacy Enterprise Applications (pdf)
Lightboard Lessons Playlist
If Load Balancers Are Dead Why Do We Keep Talking About Them?

Commoditized from solution to feature, and from feature to function, load balancing is no longer a solution but rather a function of more advanced solutions, one that is still an integral component of highly available, fault-tolerant applications.

Unashamed parody of Monty Python and the Holy Grail:

Load balancers: I'm not dead.
The Market: 'Ere, it says it's not dead.
Analysts: Yes it is.
Load balancers: I'm not.
The Market: It isn't.
Analysts: Well, it will be soon, it's very ill.
Load balancers: I'm getting better.
Analysts: No you're not, you'll be stone dead in a moment.

Earlier this year, amidst all the other (perhaps exaggerated) technology deaths, Gartner declared that Load Balancers are Dead. It may come as a surprise, then, that application delivery network folks keep talking about them. As do users, customers, partners, and everyone else under the sun. In fact, with the increased interest in cloud computing, it seems that load balancers are enjoying a short reprieve from death.

LOAD BALANCERS REALLY ARE SO LAST CENTURY

They aren't. Trust me, load balancers aren't enjoying anything. Load balancing, on the other hand, is very much in the spotlight, as scalability, Infrastructure 2.0, and availability in the cloud are highlighted as issues today's IT staff must deal with. And if it seems that we keep mentioning load balancers despite their apparent demise, it's only because understanding what a load balancer does is useful in slowly moving people toward what is taking its place: application delivery.

Load balancing is an integral component of any high-availability and/or on-demand architecture. The ability to direct application requests across a (cluster|pool|farm|bank) of servers, physical or virtual, is an inherent property of cloud computing and on-demand architectures in general. But it is not the be-all and end-all of application delivery; it's just the point at which application delivery begins, and an integral function of application delivery controllers.

Load balancers, back in their day, were "teh bomb." These simple but powerful pieces of software (which later grew into appliances, and later still into full-fledged application switches) offered a way for companies to address the growing demand for Web-based access to everything from their news stories to their products to their services to their kids' pictures. But as traffic demands grew, so did the load on servers, and eventually new functionality began to be added to load balancers: caching, SSL offload and acceleration, and even security-focused functionality. From the core that was load balancing grew an entire catalog of application-rich features focused on keeping applications available while delivering them fast and securely. At that point we were no longer simply load balancing applications; we were delivering them. Optimizing them. Accelerating them. Securing them.

LET THEM REST IN PEACE...

So it made sense that, in order to encapsulate the concept of application delivery and move people away from focusing on load balancing, we'd give the product and market a new name. Thus arose the terms "application delivery network" and "application delivery controller." But at the core of both is still load balancing. Not load balancers, but load balancing: a function, if you will, of application delivery. But not the whole enchilada; not by a long shot.
If we're still mentioning load balancing (and even load balancers, as incorrect as that term may be today), it's because the function is very, very, very important (I could add a few more "verys" but I think you get the point) to so many different architectures, and to meeting business goals around availability, performance, and security, that it should be mentioned, if not centrally then at least peripherally.

So yes, load balancers are very much outdated and no longer able to provide the biggest bang for your buck. But load balancing, particularly when leveraged as a core component in an application delivery network, is very much in vogue (it's trendy, like iPhones) and very much a necessary part of a successfully implemented high-availability or on-demand architecture. Long live load balancing. (For a concrete look at the function in practice, see the sketch after the links below.)

The House that Load Balancing Built
A new era in application delivery
Infrastructure 2.0: The Diseconomy of Scale Virus
The Politics of Load Balancing
Don't just balance the load, distribute it
WILS: Network Load Balancing versus Application Load Balancing
Cloud computing is not Burger King. You can't have it your way. Yet.
The Revolution Continues: Let Them Eat Cloud
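As a closing illustration of load balancing as a function of application delivery rather than a standalone product, here's a minimal iRules sketch; the pool names are hypothetical and the routing policy is only an example:

when HTTP_REQUEST {
    # Application delivery: inspect the request, then hand it to the
    # appropriate pool; the pool's configured LB method (round robin,
    # least connections, etc.) distributes load across its members.
    if { [HTTP::uri] starts_with "/api" } {
        pool api_pool
    } else {
        pool web_pool
    }
}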
F5 Synthesis: The Reference Architectures

The next-generation, app-focused, solution-driven model for supporting all of your business applications. Your business uses countless applications in a given day. At F5, we built a reputation as an industry leader by helping organizations deliver the most secure, fast, and reliable applications to anyone, anywhere, at any time. Our pioneering focus on application services gives us a unique advantage in designing the solutions that drive business forward.

F5 Synthesis isn't built on new products and features. It's built on comprehensive solutions. We took a big-picture look at the trends affecting businesses today, from security to mobility to performance and beyond, and designed architectures that pull together specific device, network, and application scenarios to help you better identify and understand which solutions meet your network needs. The F5 Synthesis architectural vision helps customers improve service velocity and accelerate time to market through automated provisioning and intelligent service orchestration of application services. The F5 Synthesis elastic, high-performance services fabric reduces the cost and complexity of deploying software defined application services (SDAS) across all types of systems and environments, including software defined networks (SDN), virtual infrastructures, and cloud. F5's prescriptive reference architectures, optimized licensing models, and deployment options give organizations the tools to align services with user and business expectations for applications, overcoming persistent IT challenges around availability, optimization, security, and mobility.

Here are the architectural overview videos of the F5 Synthesis Reference Architectures.

ps

Related:
F5 Synthesis
F5 Introduces Synthesis Architecture
F5 Synthesis: Software Defined Application Services
F5 Synthesis: The Time is Right
F5's Partner Ecosystem Supports Synthesis
F5 and Cisco: Application-Centric from Top to Bottom and End to End
When Applications Drive the Network
F5 Synthesis Aims To Fill SDN Gap
F5 Introduces Synthesis App Delivery Architecture for Cloud, Data Center

Technorati Tags: synthesis,sdas,big-ip,f5,video,reference architecture