devcentral basics
What is HTTP?
tl;dr - The Hypertext Transfer Protocol, or HTTP, is the predominant tool for transferring resources on the web, and a "must-know" for many application delivery concepts utilized on BIG-IP.

HTTP defines the structure of messages between web components such as browser or command line clients, servers like Apache or Nginx, and proxies like the BIG-IP. As most of our customers manage, optimize, and secure at least some HTTP traffic on their BIG-IP devices, it's important to understand the protocol. This introductory article is the first of eleven parts on the HTTP protocol and how BIG-IP supports it. The series will take the following shape:

What is HTTP? (this article)
HTTP Series Part II - Underlying Protocols
HTTP Series Part III - Terminology
HTTP Series Part IV - Clients, Proxies, & Servers — Oh My!
HTTP Series Part V - Profile Basic Settings
HTTP Series Part VI - Profile Enforcement
HTTP Series Part VII - Oneconnect
HTTP Series Part VIII - Compression & Caching
HTTP Series Part IX - Policies & iRules
HTTP Series Part X - HTTP/2

A Little History

Before the World Wide Web of Hypertext Markup Language (HTML) was pioneered, the internet was alive and well with bulletin boards, ftp, and gopher, among other applications. In fact, by the early 1990s, ftp accounted for more than 50% of the internet traffic! But with the advent of HTML and HTTP, it only took a few years for the World Wide Web to completely rework the makeup of the internet. By the late 1990s, more than 75% of the internet traffic belonged to the web.

What makes up the web?

Well, get ready for a little acronym salad. There are three semantic components of the web: URIs, HTML, and HTTP. The URI is the Uniform Resource Identifier. Think of the URI as a pointer. The URI is a simple string and consists of three parts: the protocol, the server, and the resource. Consider https://devcentral.f5.com/s/articles/ . The protocol is https, the server is devcentral.f5.com, and the resource is /articles/.
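The three URI parts described above can be pulled apart with any standard URL parser. Here's a quick illustration (not part of the original article) using Python's standard library against the example URI:

```python
from urllib.parse import urlparse

# Split the example URI from the article into its component parts
uri = "https://devcentral.f5.com/s/articles/"
parts = urlparse(uri)

print(parts.scheme)  # the protocol: "https"
print(parts.netloc)  # the server:   "devcentral.f5.com"
print(parts.path)    # the resource: "/s/articles/"
```

Note that the parser reports the full path (/s/articles/) as the resource component.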
URL, which stands for Uniform Resource Locator, is actually a form of a URI, but for the most part they can be used interchangeably. I will clarify the difference in the terminology article.

HTML is short for the HyperText Markup Language. It's based on the more generic SGML, or Standard Generalized Markup Language. HTML allows content creators to provide structure, text, pictures, and links to documents. In our context, this is the HTTP payload that BIG-IP might inspect, block, update, etc.

HTTP, as declared earlier, is the most common way of transferring resources on the web. Its core functionality is a request/response relationship where messages are exchanged. An example of a GET message in the HTTP/1.1 version is shown in the image below. This is eye candy for now as we'll dig in to the underlying protocols and HTTP terminology shown here in the following two articles. But take notice of the components we talked about earlier defined there. The protocol is identified as HTTP. Following the method is our resource /home, and the server is identified in the Host header. Also take note of all those silly carriage returns and new lines. Oh, the CRLF!! If you've dealt with monitors, you can feel our collective pain!

HTTP Version History

Whereas HTTP/2 has been finalized for more than two years now, current usage is growing but still less than 20%, with HTTP/1.1 laboring along as the dominant player. We'll cover version-specific nuances later in this series, but the major releases throughout the history of the web are:

HTTP/0.9 - 1990
HTTP/1.0 - 1996
HTTP/1.1 - 1999
HTTP/2 - 2015

Given the advancements in technology in the last 18 years, the longevity of HTTP/1.1 is a testament to that committee (or an indictment of the HTTP/2 committee, you decide!). Needless to say, due to the longevity of HTTP/1.1, most of the industry expertise exists here.
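Going back to the GET message discussed above: since the request image may not render in this text version, here is a minimal sketch of what such an HTTP/1.1 GET looks like on the wire, with those CRLF line endings spelled out. The /home resource and the host name are illustrative values, not a real endpoint:

```python
# Build a minimal HTTP/1.1 GET request by hand to show the CRLF line endings.
# The resource (/home) and host are illustrative values, not a real endpoint.
CRLF = "\r\n"

request = CRLF.join([
    "GET /home HTTP/1.1",      # method, resource, protocol version
    "Host: www.example.com",   # the server, identified in the Host header
    "Connection: close",
    "",                        # the blank line (CRLF CRLF) ends the headers
    "",
])

print(repr(request))
```

Every header line ends in a carriage return plus newline, and an empty line marks the end of the header block: the collective CRLF pain mentioned above.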
We'll wrap this series with HTTP/2, but up front, know that it's a pretty major departure from HTTP/1.1, most notably in that it is a binary protocol, whereas earlier versions of HTTP were all textual.

What is the Edge?
Where oh where to begin? "The Edge" excitement today is reminiscent of "The Cloud" of many moons ago. Everyone, I mean EVERYONE, had a "to the cloud" product to advertise. C.S. Lewis (The Chronicles of Narnia) wrote an essay titled "The Death of Words" where he bemoaned the decay of words that transitioned from precise meanings to something far more vague. One example he used was gentleman, which had a clear objective meaning (a male above the station of yeoman whose family possessed a coat of arms) but had decayed (and is to this day) to a subjective state of referring to someone well-mannered. This is the case with industry shifts like cloud and edge, and totally works to the advantage of marketing/advertising. The result, however, is usually confusion. In this article, I'll briefly break down the edge in layman's terms, then link out to the additional reading you should do to familiarize yourself with the edge, why it's hot, and how F5 can help with your plans.

What is edge computing?

The edge, plainly, is all about distribution: taking services once available only in private datacenters and public clouds and shifting them out closer to where the requests are, whether those requests are coming from humans or machines. This shift of services is comprehensive, so while technologies from the infancy of the edge like CDNs are still in play, the new frontier of compute, security, apps, storage, etc, enhances the user experience and broadens the scope of real-time possibilities. CDNs were all about distributing content. The modern edge is all about application and data distribution.

But, you say, how is that not the cloud? Good question. Edge computing builds on the technology developed in the cloud era, where de-centralized compute and storage architectures were honed. But the clouds are still regional datacenters. A good example to bring clarity might be an industrial farm.
Historically, data from these locations would be sent to a centralized datacenter or cloud for processing, and depending on the workloads, tractors or combines might be idle (or worse: errant) while waiting for feedback. With edge computing, a local node (consider this an enterprise edge) would be gathering all that data, processing, analyzing, and responding in real-time to the equipment, and then sending up to the datacenter/cloud anything relevant for further processing or reporting. Another example would be self-driving car or gaming technology, where perhaps the heavy compute for these is at the telco edge instead of having to backhaul all of it to a centralized processing hub.

Where is the edge?

Here, there, and everywhere. The edge, conceptually, can be at any point in between the user (be it human, animal, or machine) and the datacenter/cloud. Physically, though, understand that just like "serverless" applications still have to run on an actual server somewhere, edge technology isn't magic; it has to be hosted somewhere as well. The point is that the host knows no borders; it can be in a provider, a telco, an enterprise, or even in your own home (see Lori's "Find My Cat" use case).

The edge is coming for you

The stats I've seen from Gartner and others are pretty shocking. 76% already have plans to deploy at the edge, and 75% of data will be processed at the edge by 2025? I'm no math major, but that sounds like one plus two, carry the three, uh, tomorrow! Are you ready for this? The good news is we are here to help. The best leaps forward in anything in our industry have always come from efforts bringing simplicity to the complexities. Abstraction is the key. Think of the progression of computer languages and how languages like C abstract the needs in Assembler, or how dynamically typed languages like Python even abstract away the need for types. Or how hypervisors abstract lower level resources and allow you to carve out compute.
Whether you're a netops persona thankful for tools that abstract BGP configurations from the differing syntax of various routers, or a developer thankful for libraries that abstract the nuances of different DNS providers so you can generate your SSL certificates with Let's Encrypt, all of that is abstraction. I like to know what's been abstracted. That's practical at times, but not often. Maybe in academia. Frankly, the cost associated with knowing "all the things" isn't one for which most orgs will pay. Volterra delivers that abstraction, to the compute stack and the infrastructure connective tissue, in spades, thus removing the tenuous manual stitching required to connect and secure your edge services.

General Edge Resources

Extending Adaptive Applications to the Edge
Edge 2.0 Manifesto: Redefining Edge Computing
Living on the Edge: How we got here
Increasing Diversity of Location and Users is Driving Business to the Edge
Application Edge Integration: A Study in Evolution
The role of cloud in edge-native applications
Edge Design & Data | The Edgevana Podcast (YouTube)

Volterra Specific Resources

Volterra and Power of the Distributed Cloud (YouTube)
Multi-Cloud Networking with Volterra (YouTube)
Network Edge App: Self-Service Demo (YouTube)
Volterra.io Videos

The BIG-IP Application Security Manager Part 1: What is the ASM?
tl;dr - BIG-IP Application Security Manager (ASM) is a layer 7 web application firewall (WAF) available on F5's BIG-IP platforms.

Introduction

This article series was written a while back, but we are re-introducing it as a part of our Security Month on DevCentral. I hope you enjoy all the features of this very powerful module on the BIG-IP!

This is the first of a 10-part series on the BIG-IP ASM. This module is a very powerful and effective tool for defending your applications and your peace of mind, but what is it really? And, how do you configure it correctly and efficiently? How can you take advantage of all the features it has to offer? Well, the purpose of this article series is to answer these fundamental questions. So, join me as we dive into this really cool technology called the BIG-IP ASM!

The Basics

The BIG-IP ASM is a Layer 7 ICSA-certified Web Application Firewall (WAF) that provides application security in traditional, virtual, and private cloud environments. It is built on TMOS...the universal product platform shared by all F5 BIG-IP products. It can run on any of the F5 Application Delivery Platforms...BIG-IP Virtual Edition, BIG-IP 2000 -> 11050, and all the VIPRION blades.

It protects your applications from a myriad of network attacks including the OWASP Top 10 most critical web application security risks
It is able to adapt to constantly-changing applications in very dynamic network environments
It can run standalone or integrated with other modules like BIG-IP LTM, BIG-IP DNS, BIG-IP APM, etc

Why A Layer 7 Firewall?

Traditional network firewalls (Layer 3-4) do a great job preventing outsiders from accessing internal networks. But, these firewalls offer little to no support in the protection of application layer traffic. As David Holmes points out in his article series on F5 firewalls, threat vectors today are being introduced at all layers of the network.
For example, the Slowloris and HTTP Flood attacks are Layer 7 attacks...a traditional network firewall would never stop these attacks. But, nonetheless, your application would still go down if/when it gets hit by one of these. So, it's important to defend your network with more than just a traditional Layer 3-4 firewall. That's where the ASM comes in...

Some Key Features

The ASM comes pre-loaded with over 2,200 attack signatures. These signatures form the foundation for the intelligence used to allow or block network traffic. If these 2,200+ signatures don't quite do the job for you, never fear...you can also build your own user-defined signatures. And, as we all know, network threats are always changing, so the ASM is configured to download updated attack signatures on a regular basis.

Also, the ASM offers several different policy building features. Policy building can be difficult and time consuming, especially for sites that have a large number of pages. For example, DevCentral has over 55,000 pages...who wants to hand-write the policy for that?!? No one has that kind of time. Instead, you can let the system automatically build your policy based on what it learns from your application traffic, you can manually build a policy based on what you know about your traffic, or you can use external security scanning tools (WhiteHat Sentinel, QualysGuard, IBM AppScan, Cenzic Hailstorm, etc) to build your policy. In addition, the ASM comes configured with pre-built policies for several popular applications (SharePoint, Exchange, Oracle Portal, Oracle Application, Lotus Domino, etc).

Did you know? The BIG-IP ASM was the first WAF to integrate with a scanner. WhiteHat approached all the WAFs and asked about the concept of building a security policy around known vulnerabilities in the apps. All the other WAFs said "no"...F5 said "of course!" and thus began the first WAF-scanner integration.
The ASM also utilizes Geolocation and IP address intelligence to allow for more sophisticated and targeted defense measures. You can allow/block users from specific locations around the world, and you can block IP addresses that have built a bad reputation on other sites around the Internet. If they were doing bad things on some other site, why let them access yours?

The ASM is also built for Payment Card Industry Data Security Standard (PCI DSS) compliance. In fact, you can generate a real-time PCI compliance report at the click of a button! The ASM also comes loaded with the DataGuard feature that automatically blocks sensitive data (Credit Card numbers, SSN, etc) from being displayed in a browser.

In addition to the PCI reports, you can generate on-demand charts and graphs that show just about every detail of traffic statistics that you need. The following screenshot is a representative sample of some real traffic that I pulled off a site that uses the ASM. Pretty powerful stuff!

I could go on for days here...and I know you probably want me to, but I'll wrap it up for this first article. I hope you can see the value of the ASM both as a technical solution in the defense of your network and also a critical asset in the long-term strategic vision of your company. So, if you already have an ASM and want to know more about it or if you don't have one yet and want to see what you're missing, come on back for the next article where I will talk about the cool features of policy building.

What is the BIG-IP ASM?
Policy Building
The Importance of File Types, Parameters, and URLs
Attack Signatures
XML Security
IP Address Intelligence and Whitelisting
Geolocation
Data Guard
Username and Session Awareness Tracking
Event Logging

What Is BIG-IP?
tl;dr - BIG-IP is a collection of hardware platforms and software solutions providing services focused on security, reliability, and performance.

F5's BIG-IP is a family of products covering software and hardware designed around application availability, access control, and security solutions. That's right, the BIG-IP name is interchangeable between F5's software and hardware application delivery controller and security products. This is different from BIG-IQ, a suite of management and orchestration tools, and F5 Silverline, F5's SaaS platform. When people refer to BIG-IP this can mean a single software module in BIG-IP's software family or it could mean a hardware chassis sitting in your datacenter. This can sometimes cause a lot of confusion when people say they have a question about "BIG-IP," but we'll break it down here to reduce the confusion.

BIG-IP Software

BIG-IP software products are licensed modules that run on top of F5's Traffic Management Operating System® (TMOS). This custom operating system is an event driven operating system designed specifically to inspect network and application traffic and make real-time decisions based on the configurations you provide. The BIG-IP software can run on hardware or can run in virtualized environments. Virtualized systems provide BIG-IP software functionality where hardware implementations are unavailable, including public clouds and various managed infrastructures where rack space is a critical commodity.

BIG-IP Primary Software Modules

BIG-IP Local Traffic Manager (LTM) - Central to F5's full traffic proxy functionality, LTM provides the platform for creating virtual servers, performance, service, protocol, authentication, and security profiles to define and shape your application traffic. Most other modules in the BIG-IP family use LTM as a foundation for enhanced services.
BIG-IP DNS - Formerly Global Traffic Manager, BIG-IP DNS provides similar security and load balancing features that LTM offers but at a global/multi-site scale. BIG-IP DNS offers services to distribute and secure DNS traffic advertising your application namespaces.

BIG-IP Access Policy Manager (APM) - Provides federation, SSO, application access policies, and secure web tunneling. Allow granular access to your various applications, virtualized desktop environments, or just go full VPN tunnel.

Secure Web Gateway Services (SWG) - Paired with APM, SWG enables access policy control for internet usage. You can allow, block, verify, and log traffic with APM's access policies, allowing flexibility around your acceptable internet and public web application use. You know... contractors and interns shouldn't use Facebook, but you're not going to be the one responsible when the CFO can't access their cat pics.

BIG-IP Application Security Manager (ASM) - This is F5's web application firewall (WAF) solution. Traditional firewalls and layer 3 protection don't understand the complexities of many web applications. ASM allows you to tailor acceptable and expected application behavior on a per application basis. Zero day, DoS, and click fraud all rely on traditional security devices' inability to protect unique application needs; ASM fills the gap between traditional firewall and tailored granular application protection.

BIG-IP Advanced Firewall Manager (AFM) - AFM is designed to reduce the hardware and extra hops required when ADCs are paired with traditional firewalls. Operating at L3/L4, AFM helps protect traffic destined for your data center. Paired with ASM, you can implement protection services at L3 - L7 for a full ADC and security solution in one box or virtual environment.

BIG-IP Hardware

BIG-IP hardware offers several types of purpose-built custom solutions, all designed in-house by our fantastic engineers; no white boxes here.
BIG-IP hardware is offered via series releases, each offering improvements for performance and features determined by customer requirements. These may include increased port capacity, traffic throughput, CPU performance, FPGA feature functionality for hardware-based scalability, and virtualization capabilities. There are two primary variations of BIG-IP hardware: single chassis designs, or VIPRION modular designs. Each offers unique advantages for internal and collocated infrastructures. Updates in processor architecture, FPGA, and interface performance gains are common, so we recommend referring to F5's hardware page for more information.

What is iCall?
tl;dr - iCall is BIG-IP's event-based granular automation system that enables comprehensive control over configuration and other system settings and objects.

The main programmability points of entrance for BIG-IP are the data plane, the control plane, and the management plane. My bare bones description of the three:

Data Plane - Client/server traffic on the wire and flowing through devices
Control Plane - Tactical control of local system resources
Management Plane - Strategic control of distributed system resources

You might think iControl (our SOAP and REST API interface) fits the description of both the control and management planes, and whereas you'd be technically correct, iControl is better utilized as an external service in management or orchestration tools. The beauty of iCall is that it's not an API at all—it's lightweight, it's built-in via tmsh, and it integrates seamlessly with the data plane where necessary (via iStats). It is what we like to call control plane scripting.

Do you remember relations and set theory from your early pre-algebra days? I thought so! Let me break it down in a helpful way:

P = {(data plane, iRules), (control plane, iCall), (management plane, iControl)}

iCall allows you to react dynamically to an event at a system level in real time. It can be as simple as generating a qkview in the event of a failover or executing a tcpdump on a server with too many failed logins. One use case I've considered from an operations perspective is, in the event of a core dump, to have iCall generate a qkview, take checksums of the qkview and the dump file, upload the qkview and generate a support case via the iHealth API, upload the core dumps to support via ftp with the case ID generated from iHealth, then notify the ops team with all the appropriate details. If I had a solution like that back in my customer days, it would have saved me 45 minutes easy each time this happened!
iCall Components

There are three components to iCall: events, handlers, and scripts.

Events

An event is really what drives the primary reason to use iCall over iControl. A local system event (whether it's a failover, excessive interface or application errors, too many failed logins) would ordinarily just be logged or, from a system perspective, ignored altogether. But with iCall, events can be configured to force an action. At a high level, an event is "the message," some named object that has context (key value pairs), scope (pool, virtual, etc), origin (daemon, iRules), and a timestamp. Events occur when specific, configurable, pre-defined conditions are met.

Example (placed in /config/user_alert.conf):

alert local-http-10-2-80-1-80-DOWN "Pool /Common/my_pool member /Common/10.2.80.1:80 monitor status down" {
    exec command="tmsh generate sys icall event tcpdump context { { name ip value 10.2.80.1 } { name port value 80 } { name vlan value internal } { name count value 20 } }"
}

Handlers

Within the iCall system, there are three types of handlers: triggered, periodic, and perpetual.

Triggered

A triggered handler is used to listen for and react to an event.

Example (goes with the event example from above):

sys icall handler triggered tcpdump {
    script tcpdump
    subscriptions {
        tcpdump {
            event-name tcpdump
        }
    }
}

Periodic

A periodic handler is used to react to an interval timer.

Example:

sys icall handler periodic poolcheck {
    first-occurrence 2017-07-14:11:00:00
    interval 60
    script poolcheck
}

Perpetual

A perpetual handler is used under the control of a daemon.

Example:

sys icall handler perpetual core_restart_watch {
    script core_restart_watch
}

Scripts

And finally, we have the script! This is simply a tmsh script moved under the /sys icall area of the configuration that will "do stuff" in response to the handlers.
Example (continuing the tcpdump event and triggered handler from above):

modify script tcpdump {
    app-service none
    definition {
        set date [clock format [clock seconds] -format "%Y%m%d%H%M%S"]
        foreach var { ip port count vlan } {
            set $var $EVENT::context($var)
        }
        exec tcpdump -ni $vlan -s0 -w /var/tmp/${ip}_${port}-${date}.pcap -c $count host $ip and port $port
    }
    description none
    events none
}

Resources

iCall Codeshare
Lightboard Lessons on iCall
Threshold violation article highlighting periodic handler

Getting Started with AFM
tl;dr - BIG-IP AFM is a stateful firewall solution available on BIG-IP infrastructure targeted for datacenter traffic protection.

The BIG-IP Advanced Firewall Manager (AFM) is a high-performance, stateful, full-proxy network security solution designed to guard against incoming threats that enter the network on the most widely deployed protocols. It's an industry leader in network protection, and one of its most impressive features is the scalability it can handle. It leverages the high performance and flexibility of F5's TMOS architecture in order to provide large data center scalability features that take second place to no one. In this article, we'll cover the nomenclature and architectural components of the AFM module.

A little history

The truth is, BIG-IP has always had firewall features built in. By nature, it is a default deny device. The only way to pass traffic through the BIG-IP is through a virtual server, which is an IP and a port. That IP could be 0.0.0.0/0 and that port could be 0, and thus you are allowing all IPs and all protocols, but that still is a configuration choice you made, not a default behavior of the BIG-IP. So given that a) the BIG-IP is already making the decision to allow or deny traffic based on virtual servers, and b) the capacity is far greater than most traditional network firewalls can handle, why not take the necessary steps to achieve certification as a firewall and give customers an opportunity to eliminate a layer of infrastructure for inbound application services? And thus AFM was born (David Holmes had a great story about the origins in our roundtable last year). But it was more than just slapping on a brand name and calling it a day. Some things had to happen to make this viable for the majority of customers. A couple of show stoppers that are obvious firewall functions are a solid logging solution and an adequate rule building interface.

Logging

Any good firewalling function needs logs.
What else will Mr. 1983 here to the right have to do all night in his mom's basement if he didn't have logs to parse? Seriously though, without logs, how could you determine what is being blocked, and more importantly, what isn't that should be? And in the event of a compromise, the information to properly handle the incident response. BIG-IP's high speed logging (HSL) functionality had been around for a while, but it was enhanced over time to have a robust interface from several of the system modules. All the sources, formats, publishers, and destinations are configurable, and what's cool about the interface is the pool functionality, so logs can be sprayed across a collection cluster so no one server is responsible for being 100% available.

Follow the rules! It's all about context.

For rule building, some underlying infrastructure had to change. The flexibility of BIG-IP allows for all services to be configured at a virtual server level, but that may not always be desired. So a global context was added to handle policy decisions at a system level. Just below the global context is the route domain. This level of separation allows administrators to have unique policies by route domain, segmenting strategically at routing boundaries for use cases like tenant deployments. Within the route domain, context rules can be applied to virtual servers and self IPs.

During packet processing, AFM attempts to match the packet contents to criteria specified in its active security rules. These rules are maintained in a hierarchical context. Rules can be global and apply to all addresses on the BIG-IP that match the rule, or they can be specific, applying only to a specific route domain, virtual server, self-IP, or the management port. The first context list of rules a packet passes through are the Global rules. If a packet matches a rule's criteria, then the system takes the action specified.
If a packet does not match a rule, then the system compares the packet against the next rule, continuing through the context hierarchy and checking, as appropriate, rules relating to route domains, virtual servers, self-IPs, and the management port. If no match is found, the packet is dropped by the default deny rule. This fall-through is shown well here:

The Object Model

Several sets of database tables have been created to support AFM rules. For each collection, a table set exists for each type of container. There is a table for global rules, virtual IP rules, self IP rules, and management IP rules. There is a table for source addresses for global rules as well as a table for source addresses for virtual server rules. But each table essentially contains the same information, with keys that point at different parent containers. Instead of jamming everything into one table, normalization is done based on the type of parent configuration object that contains rules.

The classification module has two components: a compiler that generates a classifier in compiled form from configuration directives, and a classification engine that uses the compiled classifier to determine the set of rules matching a packet based on the packet contents and other relevant inputs. The compiler resides in the control plane and the classification engine resides in the packet processing path, as part of the TMM process. Objects that use the classifier are:

Whitelists
Blacklists
SYN-cookies
Rate limiters
iRules
L4-7 signatures
ACLs

From a configuration perspective, the following containers of security rules are supported:

Global Rules: global rules affect all traffic except for traffic on the management interface. There is only one container object for global rules, and this is the first rule set that a packet is processed against.

Context Rules: context rules include rules for self-IP, virtual servers, route domains, SNATs and NATs, and management IP.
Note that the management interface is not controlled by TMM, therefore it is handled by iptables in the Linux kernel. There can be multiple container objects for context rules since these rules are applied to specific objects and not globally.

Rule Lists are collections of rules that can be referred to by any of the other rule containers listed here. Nesting of rule groups is not supported, and a rule group may not refer to another rule group. Also, a rule group is path/folder aware.

Packet flow

The packet flow is hinted at in the context section above, but for clarity this is a better visualization. There is another version of this drawing with even more details, which we based one of our Whiteboard Wednesdays on last year.

Deployment modes

The first mode is ADC mode. This enforcement mode implicitly allows configured virtual server traffic while all other traffic is blocked. In ADC mode the source and destination settings of each virtual server (and self IP) imply corresponding firewall rules. The second is Firewall mode. This enforcement mode is a strict default deny configuration. All traffic is blocked through BIG-IP AFM, and any traffic you want to allow through must be explicitly configured in the security rules. On top of these two operation modes there exists a global default. The purpose of the global default is to deny traffic which does not match a listener. The global default can not be changed. You must configure explicit rules to allow traffic.

But Wait, There's More!

We've covered some of the core functionality that needed to be enhanced, but what other surprises are up the AFM's sleeve? No rabbits, but the AFM is chock-full of useful features, including:

Bad actor blacklisting
IP reputation automation
iRules extensions
DNS firewall
DDoS capabilities
FQDN support in ACL rules
ACL flow idle timeout
UDP flood protection

We will be working on more articles in the near future to further flesh out the feature list in AFM.
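The first-match fall-through across rule contexts described earlier can be sketched in a few lines of Python. The context ordering and the implicit default deny follow the article; the rule and packet data structures here are purely illustrative, not AFM's actual internal representation:

```python
# Illustrative sketch of AFM's first-match rule processing across contexts.
# Context order and the implicit default deny follow the article; the rule
# representation itself is invented for this example.

CONTEXT_ORDER = ["global", "route_domain", "virtual_server", "self_ip", "management"]

def match(rule, packet):
    """A rule matches when every criterion it names equals the packet's value."""
    return all(packet.get(k) == v for k, v in rule["criteria"].items())

def process(packet, rules_by_context):
    for context in CONTEXT_ORDER:
        for rule in rules_by_context.get(context, []):
            if match(rule, packet):
                return rule["action"]  # first matching rule wins
    return "drop"  # no match in any context: the default deny rule applies

rules = {
    "global": [{"criteria": {"src": "10.0.0.1"}, "action": "drop"}],
    "virtual_server": [{"criteria": {"dst_port": 443}, "action": "accept"}],
}

print(process({"src": "10.0.0.1", "dst_port": 443}, rules))  # drop (global wins)
print(process({"src": "10.9.9.9", "dst_port": 443}, rules))  # accept
print(process({"src": "10.9.9.9", "dst_port": 22}, rules))   # drop (default deny)
```

Note how the global rule wins even though a virtual server rule would also match: the global context is always evaluated first.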
Conclusion

The AFM is quite a powerful security tool to wield for your inbound application services. Hopefully this article has been helpful in breaking down some nomenclature and architecture on the product, and has whet your appetite for more firewall goodness. There is a lot more to come in this series, which you can link to from the article listing below.

Getting Started with iControl: History
tl;dr - iControl provides access to BIG-IP management plane services through SOAP and REST API interfaces.

The Early Days

iControl started back in early 2000. F5 had 2 main products: BIG-IP and 3-DNS (later GTM, now BIG-IP DNS). BIG-IP managed the local datacenter's traffic, while 3-DNS was the DNS orchestrator for all the BIG-IPs in numerous data centers. The two products needed a way to communicate with each other to ensure they were making the right traffic management decisions with respect to all of the products in the system. At the time, the development team was focused on developing the fastest running code possible, and that idea found its way into the cross-product communication feature that was developed. The technology the team chose was the Common Object Request Broker Architecture (CORBA) as standardized by the Object Management Group (OMG). Coming hot on the heels of F5's first management product SEE-IT (which was another one of my babies), the dev team coined this internal feature "LINK-IT" since it "linked" the two products together. With the development of our management, monitoring, and visualization product SEE-IT, we needed a way to get the data off of the BIG-IP. SEE-IT was written for Windows Server and we really didn't want to go down the route of integrating with the CORBA interface due to several factors. So, we wrote a custom XML provider on BIG-IP and 3-DNS to allow configuration and statistics data to be retrieved and consumed by SEE-IT. It was becoming clear to me that automation and customization of our products would be beneficial to our customers, who had previously been relying on our SNMP offerings. We now had 2 interfaces for managing and monitoring our devices: one purely internal (LINK-IT) and the other partially public (the XML provider).
The XML provider was very specific to our SEE-IT product's use case and we didn't see a benefit in trying to morph that, so we looked back at LINK-IT to see what we could do to make it a publicly supported interface. We began work on documenting and packaging it into F5's first public SDK. About that time, a new standard was emerging for exchanging structured information. The Simple Object Access Protocol (SOAP), which allows for structured information exchange, was being developed but not fully ratified until version 1.2 in 2003. I had to choose between rolling our own XML implementation or making use of this new proposed specification. There was risk, as the specification was not a standard yet, but I made the choice to go the SOAP route as I felt that picking a standard format would give us the best 3rd party software compatibility down the road. Our CORBA interface was built on a nice class model, which I used as a basis for a SOAP/XML wrapper on top of that code. I even had a great code name for the interface: X-LINK-IT! For those who were around when I gave my "XML at F5" presentation to F5 Product Development, you may remember the snide comments going around afterwards about how XML was not a great technology and supporting it was a big mistake. Good thing I didn't listen to them... At this point in mid-2001, the LINK-IT SDK was ready to go and development of X-LINK-IT was well underway. Well, let's just say that Marketing didn't agree with our ingenious product naming and jumped in to VETO our internal code names for our public release. I'll give our Chief Marketer Jeff Pancottine credit for coining the term "iControl", which was explained to me as "Internet Control". This was the start of F5's whole Internet Controlled Architecture messaging, by the way. So, LINK-IT and X-LINK-IT were dead and iControl CORBA and iControl SOAP were born.
The Death of CORBA, All Hail SOAP

The first version of the iControl SDK for CORBA was released on 5/25/2001, with the SOAP version trailing it by a few months. This was around the BIG-IP version 3 time frame. We chugged along for a few years through the BIG-IP version 4 life, and then a big event occurred that spelled the demise of CORBA; well, it was actually 2 events. The first event was the full rewrite of the BIG-IP data plane when TMOS was introduced in BIG-IP version 9 (we skipped from version 4 to version 9 for some reason that slips my mind). Since virtually the entire product was rewritten, the interfaces that were tied to the product features would have to change drastically. We used this as an opportunity to look at the next evolution of iControl. Until this point, iControl SOAP was just a shim on top of CORBA and it had some performance issues, so we worked at splitting them apart and having SOAP talk directly to our configuration engine. Now we had 2 interface stacks side by side. The second event was learning we only had 1 confirmed customer using the CORBA interface compared to the hundreds using SOAP. Given that knowledge, and now that BIG-IP and 3-DNS no longer used iControl CORBA to talk to each other, the decision was made to End of Life iControl CORBA with version 9.0. But iControl SOAP still used the CORBA IDL files for its API definitions and documentation, so fun trivia note: the same CORBA tools that were there in version 3 of BIG-IP are still in place in today's iControl SOAP build tools. I'm fairly sure that is the longest running component in our build system.

The Birth of DevCentral

I can't speak about iControl without mentioning DevCentral. We had our iControl SDK out but nowhere to directly support the developers using it. At that time, F5 was a "hardware" company and product support wasn't ready to support application developers.
Not many know that DevCentral was created due to the popularity of iControl with our customer base and was born on a PC under my desk in 2003. I continued to help with DevCentral part time for a few years, but in 2007 I decided to work full time on building our community and focusing 100% on DevCentral. It was about this time that we were pushing the idea of merging the application and infrastructure teams together, or at least getting them to talk more frequently. This was a precursor to the whole DevOps mentality, so I'd like to think we were a bit ahead of the curve on that.

Enter iControl REST

In 2013, iControl was reaching its teenage years and starting to show its age a bit. While SOAP is still supported by all the major tool vendors, application development was shifting to richer browser-based apps. And with that, Representational State Transfer (REST) was gaining steam. REST defined a usage pattern for using browser-based mechanisms with HTTP to access objects across the network, with a JavaScript Object Notation (JSON) format for the content. To keep up with current technologies, the PD team at F5 developed the first REST/JSON interface in BIG-IP version 11.5 as Early Access, and it was made Generally Available in version 11.6. With the REST interface, more modern web paradigms could be supported and you could actually code to iControl directly from a browser! There were also additional interface-based tools for developing and debugging built directly into the service to give service listings and schema definitions. At the time of this writing, F5 supports both of the main iControl interfaces (SOAP and REST) but is focusing all new energy on our REST platform for the future. For those who have developed SOAP integrations, have no fear as that interface is not going away. It will just likely not get all the new feature support that will be added into the REST interfaces over time.
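As a small illustration of how approachable the REST interface is, here is a sketch that builds an authenticated iControl REST request using nothing but the Python standard library. The host and credentials are placeholders; /mgmt/tm/ltm/virtual is the standard collection endpoint for LTM virtual servers:

```python
import base64
import urllib.request


def build_icontrol_request(host, user, password, path):
    """Build an authenticated iControl REST request object (not yet sent)."""
    url = "https://{0}/mgmt/tm{1}".format(host, path)
    creds = "{0}:{1}".format(user, password).encode()
    return urllib.request.Request(url, headers={
        "Authorization": "Basic " + base64.b64encode(creds).decode(),
        "Content-Type": "application/json",
    })


# List LTM virtual servers; host and credentials below are placeholders.
req = build_icontrol_request("192.0.2.10", "admin", "admin", "/ltm/virtual")
print(req.full_url)  # https://192.0.2.10/mgmt/tm/ltm/virtual
# Actually sending it would then be: urllib.request.urlopen(req).read()
```

In practice you would add certificate handling and use token-based auth for production scripting, but the point stands: a GET against a self-describing JSON endpoint is all it takes.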
SDKs, Toolkits, and Libraries

Through the years, I've developed several client libraries (.NET, PowerShell, Java) for iControl SOAP to assist with ramp-up time for initial development. There have also been numerous other libraries for languages like Ruby, PHP, and Python developed by the community and other development teams. Most recently, F5 has published the iControl library for Python, which is available as part of our OpenStack integration. DevCentral is your place to go for our API documentation, where we update our wiki with API changes on each major release. And as time rolls on, we are adding REST support for new and existing products such as iWorkflow, BIG-IQ, and other products yet to be released that will include SDKs and other reference material. F5 has a strong commitment to our end users and their automation and integration projects with F5 technologies. Coming full circle, my current role is overseeing APIs and SDKs for standards, consistency, and completeness in our Programmability and Orchestration (P&O) team, so keep a lookout for future articles from me on our efforts on that front. And REST assured, we will continue to do all we can to help our customers move to new architectures and deployment models with programmability and automation with F5 technologies.

An Illustrated Hands-on Intro to AWS VPC Networking
Quick Intro

If you're one of those who knows a bit of networking but feels uncomfortable touching AWS networking resources, then this article is for you. We're going to go through real AWS configuration and you can follow along to solidify your understanding. I'm going through the process of what I personally do to create 2 simple virtual machines, one in a private subnet and another in a public subnet, each running an Amazon Linux AMI instance. I will assume you already have an AWS account and corresponding credentials. If not, please go ahead and create your free tier AWS account. Just keep in mind that Amazon's equivalent of a Virtual Machine (VM) is known as an EC2 instance.

VPC, Subnets, Route Tables and Internet Gateways

In short, we can think of a Virtual Private Cloud (VPC) as our personal Data Centre. Our little private space in the cloud. Because it's our personal Data Centre, networking-wise we should have our own CIDR block. When we first create our VPC, a CIDR block is a compulsory field. Think of a CIDR block as the major subnet that all the other smaller subnets will be derived from. When we create subnets, we create them as smaller chunks of the CIDR block. After we create subnets, there should be just a local route to access "objects" that belong to or are attached to the subnet. Other than that, if we need access to the Internet, we should create and attach an Internet Gateway (IGW) to our VPC and add a default route pointing to the IGW to the route table. That should take care of it all.

Our Topology for Reference

This summarises what we're going to do. It might be helpful to use it as a reference while you follow along: Don't worry if you don't understand everything in the diagram above. As you follow along this hands-on article, you can come back to it and everything should make sense.
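The idea of carving smaller subnets out of the VPC's CIDR block can be sanity-checked with Python's standard-library ipaddress module. A quick sketch using the same ranges we'll configure later:

```python
import ipaddress

# The VPC's CIDR block, i.e. our whole private address space
vpc = ipaddress.ip_network("192.168.0.0/16")

# Carve the /16 into /24 chunks, as we'll do for our two subnets
subnets = list(vpc.subnets(new_prefix=24))

print(len(subnets))               # 256 possible /24 subnets
print(subnets[1])                 # 192.168.1.0/24 -> Public WebServer Subnet
print(subnets[2])                 # 192.168.2.0/24 -> Private Database Subnet
print(subnets[1].subnet_of(vpc))  # True: it sits inside the VPC block
```

Nothing AWS-specific here; it's the same subnetting arithmetic you'd do on paper, which is exactly what the VPC console expects from you.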
What we'll do here

I'll explain the following VPC components as we go along configuring them:

Subnets
Route Tables
Internet Gateway
NAT Gateway
Egress-Only Gateway
Quick Recap (where I'll quickly summarise what we've done so far, because our little virtual DC should be ready to go!)

We'll then perform the tests:

Launching EC2 Instance from Amazon Marketplace (that's where we create a virtual machine)
First attempt to connect via SSH (that's where we try to connect to our instance via SSH but fail! Hold on, I'll fix it!)
Network ACLs and Security Groups (that's where I point out the features to blame for our previous failed attempt and fix what's wrong)
Connect via SSH again (now we're successful)

Note that we only test our Public instance, as configuring the Private instance would be very repetitive, so I added the Private instance config to the Appendix section: Spinning Up Private EC2 Instance.

VPC Components

The first logical question I get asked by those with little experience with AWS is: which basic components do we need to build our core VPC infrastructure? First we pick an AWS Region: this is the region where we are going to physically run our virtual infrastructure, i.e. our VPC. Even though your infrastructure is in the Cloud, Amazon has Data Centres (DCs) around the world in order to provide a first-class availability service for your resources if you need it. With that in mind, Amazon has many DCs located in many different Regions (EU, Asia Pacific, US East, US West, etc). The more specific locations of AWS DCs are called Availability Zones (AZs). That's where you'll find one (or more) DCs. So, we create a VPC within a Region, specify a CIDR block, and optionally request an Amazon-assigned /56 IPv6 CIDR block: If you're a Network Engineer, this should sound familiar, right? Except for the fact that we're configuring our virtual DC in the Cloud.

Subnets

Now that we've got our own VPC, we need to create subnets within the CIDR block we defined (192.168.0.0/16).
Notice that I also selected the option to retrieve an Amazon-provided IPv6 CIDR block above. That's because we can't choose an IPv6 CIDR block; we've got to stick to what Amazon automatically assigns to us if we want to use IPv6 addresses. For IPv6, Amazon always assigns a fixed /56 CIDR block and we can only create /64 subnets. Also, IPv6 addresses are always Public and there is no NAT by design. Our assigned CIDR block here was 2600:1f18:263e:4e00::/56. Let's imagine we're hosting webserver/database tiers in 2 separate subnets, but keep in mind this is for lab test purposes only. A real configuration would likely have instances in multiple AZs. For our Public WebServer Subnet, we'll use 192.168.1.0/24 and 2600:1f18:263e:4e00:01:/64. For our Private Database Subnet, we'll use 192.168.2.0/24 and 2600:1f18:263e:4e00:02:/64. Here's how we create our Public WebServer Subnet on Availability Zone us-east-1a: Here's how we configure our Private Database Subnet: Notice that I put the Private Database Subnet in a different Availability Zone. In real life, we'd likely create 1 public and 1 private subnet in one Availability Zone and another public and private subnet in a different Availability Zone for redundancy purposes, as mentioned before. For this article, I'll stick to our config above for simplicity's sake. This is just a learn-by-doing kind of article! :)

Route Tables

If we now look at the Route Table, we'll see that we now have 2 local routes, similar to what would appear if we had configured 2 interfaces on a physical router: However, that's the default/main route table that AWS automatically created for our DevCentral VPC. If we want our Private Subnet to be really private, i.e. no Internet access for example, we can create a separate route table for it. Let's create 2 route tables, one named Public RT and the other Private RT: Private RT should be created in the same way as above with a different name.
The last step is to associate our Public subnet with our Public RT and our Private subnet with our Private RT. The association binds the subnet to the route table, making them directly connected routes: Up to now, both tables look similar, but as we configure the Internet Gateway in the next section, they will look different.

Internet Gateway

Yes, we want to make them different because we want the Public RT to have direct access to the Internet. In order to accomplish that, we need to create an Internet Gateway and attach it to our VPC: And lastly, create default IPv4/IPv6 routes in the Public RT pointing to the Internet Gateway we've just created: So our Public route table will now look like this: EC2 instances created within the Public Subnet should now have Internet access using both IPv4 and IPv6.

NAT Gateway

Our database server in the Private subnet will likely need outbound Internet access to install updates or for ssh access, right? So, first let's create a Public Subnet where our NAT gateway should reside: We then create a NAT gateway in the above Public Subnet with an Elastic (Public) IPv4 address attached to it: Yes, NAT Gateways need a Public (Elastic) IPv4 address that is routable over the Internet. Next, we associate the NAT Public Subnet with our Private Route Table like this: Lastly, we create a default route in our Private RT pointing to the NAT gateway for IPv4 Internet traffic: We're pretty much done with IPv4. What about IPv6 Internet access in our Private subnet?

Egress-Only Gateway

As we know, IPv6 doesn't have NAT and all IPv6 addresses are Global, so the trick here to make an EC2 instance using IPv6 behave as if it were using a "private" IPv4 address behind NAT is to create an Egress-Only Gateway and point a default IPv6 route to it. As the name implies, an Egress-Only Gateway only allows outbound Internet traffic.
Here we create one and then add a default IPv6 route (::/0) pointing to it:

Quick Recap

What we've done so far:

Created a VPC
Created 2 Subnets (Private and Public)
Created 2 Route Tables (one for each Subnet)
Attached the Public Subnet to the Public RT and the Private Subnet to the Private RT
Created 1 Internet Gateway and added default routes (IPv4/IPv6) to our Public RT
Created 1 NAT Gateway and added a default IPv4 route to our Private RT
Created 1 Egress-Only Gateway and added a default IPv6 route to our Private RT

Are we ready to finally create an EC2 instance running Linux, for example, to test Internet connectivity from both Private and Public subnets?

Launching EC2 Instance from Amazon Marketplace

Before we get started, let's create a key-pair to access our EC2 instance via SSH: Our EC2 instances are accessed using a key-pair rather than a password. Notice that it automatically downloads the private key for us. Ok, let's create our EC2 instance. We need to click on Launch Instance and select an image from the AWS Marketplace: As seen above, I picked Amazon Linux 2 AMI for testing purposes. I selected the t2.micro type that only has 1 vCPU and 1 GB of memory. For the record, the AWS Marketplace is a repository of AWS official images and Community images. Images are known as Amazon Machine Images (AMIs). Amazon has many instance types based on the number of vCPUs available, memory, storage, etc. Think of it as how powerful you'd like your EC2 instance to be. We then configure our Instance Details by clicking on the Next: Configure Instance Details button. I'll sum up what I've selected above:

Network: we selected our VPC (DevCentral)
Subnet: Public WebServer Subnet
Auto-assign Public IP: Enabled
Auto-assign IPv6 IP: Enabled

The reason we selected "Enabled" for auto-assignment of IP addresses was because we want Amazon to automatically assign an Internet-routable Public IPv4 address to our instance.
IPv6 addresses are always Internet-routable, but I want Amazon to auto-assign an IPv6 address for me here, so I selected Enabled for Auto-assign IPv6 IP too. Notice that if we scrolled down in the same screen above, we could've also specified our private IPv4 address in the range of the Public WebServer Subnet (192.168.1.0/24): The Public IPv4 address is automatically assigned by Amazon, but once the instance is rebooted or terminated it goes back to the Amazon Public IPv4 pool. There is no guarantee that the same IPv4 address will be re-used. If we need a fixed, immutable Public IPv4 address, we would need to add an Elastic IPv4 address to our VPC instead and then attach it to our EC2 instance. The IPv6 address is greyed out because we opted for an auto-assigned IPv6 address, remember? We could've gone ahead and selected our storage type by clicking on Next: Add Storage, but I'll skip this. I'll add a Name tag of DevCentral-Public-Instance, select the default Security Group assigned to our VPC as well as our previously created key-pair, and lastly click on Launch to spin our instance up (Animation starts at Step 4): After that, if we click on Instances, we should see our instance is now assigned a Private as well as a Public IPv4 address: After a while, Instance State should change to Running:

First Attempt to Connect via SSH

If we click on the Connect button above, we will get the instructions on how to SSH to our Public instance: Let's give it a go then: It didn't work! That would make me crack up when I first got started with AWS, until I learned about Network ACLs and Security Groups!

Network ACLs and Security Groups

When we create a VPC, a default NACL and a default Security Group are also created. All EC2 instances' interfaces belong to a Security Group, and the subnet they belong to has an associated NACL protecting it. A NACL is a stateless firewall that protects traffic coming in/out to/from the Subnet.
A Security Group is a stateful firewall that protects traffic coming in/out to/from an EC2 instance, more specifically its vNIC. The following simplified diagram shows that: What's the difference between a stateful and a stateless firewall? A Security Group (stateful) rule that allows outbound HTTP traffic also allows the return traffic corresponding to that outbound request back in. This is why it's called stateful: it keeps track of session state. A NACL (stateless) rule that allows outbound HTTP traffic does not allow return traffic unless you create an inbound rule to allow it. This is why it's called stateless: it does not keep track of session state. Now let's try to work out why our SSH traffic was blocked. Is the problem in the default NACL? Let's have a look. This is what we see when we click on Subnets → Public WebServer Subnet: As we can see above, the default NACL is NOT blocking our SSH traffic, as it's allowing everything IN/OUT. Is the problem the default Security Group? This is what we see when we click on Security Groups → sg-01.db... → Inbound Rules: Yes! SSH traffic from my external client machine is being blocked by the above inbound rule. The rule says that our EC2 instance should allow ANY inbound traffic coming from other instances that also belong to the above Security Group. That means that our external client traffic will not be accepted. We don't need to check outbound rules here because we know that a stateful firewall would allow the outbound ssh return traffic back out.

Creating a new Security Group

To fix the above issue, let's do what we should've done while we were creating our EC2 instance. We first create a new Security Group: A newly created non-default SG comes with no inbound rules, i.e. nothing is allowed, not even traffic coming from other instances that belong to the security group itself. There's always an implicit deny-all rule in a security group, i.e. whatever is not explicitly allowed is denied.
For this reason, we'll explicitly allow SSH access like this: In the real world, you would specify a more specific source range for security purposes. And lastly, we change our EC2 instance's SG to the new one by going to EC2 → Instances → <Name of Instance> → Networking → Change Security Groups: Another window should appear and here's what we do:

Connecting via SSH again

Now let's try to connect via SSH again: It works! That's why it's always a good idea to create your own NACL and Security Group rules rather than sticking to the default ones.

Appendix - Spinning Up EC2 Instance in Private Subnet

Let's create our private EC2 instance to test Internet access using our NAT gateway and Egress-Only Gateway. Our Private RT has a NAT gateway for IPv4 Internet access and an Egress-Only Gateway for IPv6 Internet access, as shown below: When we create our private EC2 instance, we won't enable Auto-assign Public IP (for IPv4), as seen below: It's not shown here, but when I got to the Security Group configuration part, I selected the previous security group I created that allows SSH access from everyone for testing purposes. We could have created a new SG and added an SSH rule allowing access only from our local instances that belong to our 192.168.0.0/16 range to be more restrictive. Here's my Private Instance config if you'd like to replicate it: Here's the SSH info I got when I clicked on the Connect button: Here's my SSH test: All Internet tests passed and you should now have a good understanding of how to configure basic VPC components. I'd advise you to have a look at our full diagram again, and any feedback about the animated GIFs would be appreciated. Did you like them? I found them better than using static images.

Hands-on Intro to Infrastructure as Code using Terraform
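If you'd rather script the console clicks from this article, the same build maps onto AWS CLI calls roughly like the sketch below. The resource IDs (vpc-123, igw-123, and so on) are placeholders for the values each command actually returns to you, so this is the gist rather than a copy-paste recipe:

```sh
# Hedged AWS CLI sketch of the build above; IDs are placeholders
aws ec2 create-vpc --cidr-block 192.168.0.0/16
aws ec2 create-subnet --vpc-id vpc-123 --cidr-block 192.168.1.0/24 \
    --availability-zone us-east-1a
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --vpc-id vpc-123 --internet-gateway-id igw-123
aws ec2 create-route-table --vpc-id vpc-123
aws ec2 create-route --route-table-id rtb-123 \
    --destination-cidr-block 0.0.0.0/0 --gateway-id igw-123
aws ec2 associate-route-table --route-table-id rtb-123 --subnet-id subnet-123

# The security group fix: allow SSH in (tighten the source range in real life)
aws ec2 create-security-group --vpc-id vpc-123 \
    --group-name ssh-in --description "allow inbound ssh"
aws ec2 authorize-security-group-ingress --group-id sg-123 \
    --protocol tcp --port 22 --cidr 0.0.0.0/0
```

Once you're comfortable with the console flow, scripting it this way (or with an infrastructure-as-code tool like the one in the next article) makes the whole setup repeatable.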
Related Articles: Automate Application Delivery with F5 and HashiCorp Terraform and Consul

Quick Intro

Terraform is a way of declaring how to build your infrastructure in a centralised manner, using a single declarative language, in order to avoid the pain of having to manually configure each component in a different place. Funnily enough, Terraform not only configures your infrastructure but also boots up your environment. You can literally keep your whole infrastructure declared in a couple of files. Other configuration management tools like Ansible are imperative in nature, i.e. they focus on how the tool should configure something. Terraform is declarative, i.e. it focuses on what you want to do, and Terraform is supposed to work out how to do it. In terms of commands, the bulk of what you will be doing with Terraform is executing mainly 3 commands: terraform init, terraform plan and terraform apply. BIG-IP can be configured using Terraform and there are examples on clouddocs. For all configuration options you can go through the F5 BIG-IP Provider's page. In this article, I will walk you through how to spin up a Kubernetes cluster on Google Cloud using Terraform, just to get you started. We also have an article with an example of how to configure basic BIG-IP settings using Terraform: Automate Application Delivery with F5 and HashiCorp Terraform and Consul

How to Install it?

Download Terraform from https://www.terraform.io/downloads.html. After that, you should unpack the executable and, on Linux and Mac, place its full path into the $PATH environment variable to make it searchable. Here's what I did after I downloaded it to my Mac: It should be similar on Linux.

How to get started?

The best thing to do to get started with Terraform is to boot up a simple instance of an object from one of the providers you're already comfortable with. For example, if you know how to set up a Kubernetes cluster on GCP, then try to spin it up using Terraform.
I'll show you how to create a Kubernetes cluster on GCP with Terraform here.

Creating a Kubernetes Cluster on GCP with Terraform

Adding Provider Information

The first thing to do is to tell Terraform what kind of provider it is going to configure, i.e. GCP? AWS? BIG-IP? Terraform reads *.tf files from your Terraform directory. Let's create a file named providers.tf with Google Cloud provider information: Note that we used the keyword "provider" to add our provider's authentication information (GCP in this case). To access your GCP account via a third-party source, a different app or Terraform as is the case here, you'd normally create a Service Account. I created one specifically for Terraform, and that's what most people do. The rest is self-explanatory, but credentials is where you add the full path to the auth file that you download once you create your service account. Essentially, this will make Terraform authenticate to your GCP cloud account. If you've ever logged in to a Google Cloud account, you should know what projects and regions/zones are. I added the following permissions to my sample Terraform Service Account:

Adding Resource Configuration

Likewise, if we want to create resources, we simply add the "resource" keyword. In this case, google_container_cluster means we're spinning up a Kubernetes cluster on GCP. I'm going to create a separate file to keep our resources separate from the provider's information, just to keep it more tidy. There is no problem doing that at all, as Terraform reads through all *.tf files. Here's our file for our Kubernetes cluster: For Kubernetes clusters we use google_container_cluster followed by the name of the cluster (I named it rod-cluster in this case).
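As a hedged sketch, the two files might look roughly like this. The project ID, credentials path, zone, and OAuth scopes below are my assumptions for illustration, not the article's exact values:

```hcl
# providers.tf
provider "google" {
  credentials = file("~/gcp/terraform-service-account.json") # path to your auth file
  project     = "my-sample-project"                          # your GCP project ID
  region      = "us-east1"
}

# cluster.tf
resource "google_container_cluster" "rod-cluster" {
  name               = "rod-cluster"
  location           = "us-east1-b"
  initial_node_count = 2

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]
  }
}
```

Everything lives in plain text, which is the point: the whole cluster definition is reviewable, diffable, and version-controllable.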
Here's a breakdown of the information I added to the above file:

name: the name of the cluster
location: the GCP location where your Kubernetes project resides
initial_node_count: the number of worker nodes in your cluster
node_config/oauth_scopes: the components we need permission to use to run a Kubernetes cluster on GCP

Now we're ready to initialise Terraform to spin up our GCP Kubernetes cluster. However, before we jump into it, let's quickly answer one simple question: how do we know exactly what keywords to declare in a Terraform file? You can find the syntax in Terraform's documentation. Just pick the provider: https://www.terraform.io/docs/providers/index.html

Initialising Terraform

Now, back to business. The way we initialise Terraform is by executing the terraform init command in the same directory where your *.tf files reside: In the background, Terraform downloaded GCP's plugin and placed it into the .terraform folder:

Spinning up Execution Plan

The next step is to execute the terraform plan command so that Terraform can automatically check for errors in your *.tf files, connect to your provider using your credentials, and lastly confirm that what you've requested is doable: Our plan seems to be ok, so let's move to our last step.

Applying Terraform changes

This is usually reliable, but we never know until we apply it, right? The way we do it is with the terraform apply command: You will see the above prompt asking for your confirmation. Just answer yes and Terraform will connect to your provider and apply the changes automatically. Here's what follows the 'yes' answer: If we check my GCP account, we can confirm Terraform created the cluster for us:

How to apply changes or destroy?

You can use terraform apply again if you've modified something. To destroy everything you've done, you can use the terraform destroy command:

Next steps

There's much more to Terraform than what we've been through here.
We can use variables, and there are plenty of other commands. For example, we can create workspaces, have Terraform format our code for us, and do plenty of other things like listing the resources Terraform is currently managing, generating graphs, etc. This is just to give you an initial taste of how powerful Terraform is.

Getting Started with iApps: A Conceptual Overview
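Condensed, the whole workflow from this article is just a handful of commands, run from the directory holding your .tf files:

```sh
terraform init      # download provider plugins into the .terraform/ folder
terraform plan      # dry run: validate the *.tf files and show what would change
terraform apply     # build the real infrastructure (prompts for a 'yes')
terraform destroy   # tear everything down again
```

The init/plan/apply cycle is the same no matter which provider you target, which is what makes the tool so easy to carry between clouds.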
tl;dr - iApps provide admins and service desks a template solution for application deployment and management services.

Deploying and managing applications requires a lot of information across several disciplines. Architects have their holistic view of the application ecosystem and relevant lifecycles. Developers have their granular relationship with each application under their umbrella. Network admins make sure applications are behaving appropriately on the network instead of hijacking QoS classes or DNS. Then there are those missing details that no one wants to own until something breaks (looking at you, Java CA store). Originally, F5 introduced deployment guides to help administrators understand the requirements and configurations needed to deploy popular applications behind BIG-IP. However, after the deployment was complete, those configurations were still managed through object types alone (e.g. virtual servers, pools, profiles, iRules, monitors). That can get quite tedious when you have hundreds of applications on a single BIG-IP stack. Someone somewhere said, "Wouldn't it be nice if we could have an application-based view of all the different objects that help us deploy, manage, and secure each application?"

Enter iApps

Introduced in BIG-IP 11.0, iApps are a customizable framework for deploying and managing applications as a service. Using out-of-the-box templates, administrators can deploy commonly-used applications such as Oracle, SAP, or Exchange by completing a series of questions that relate to their management and infrastructure needs. Rather than create a bunch of virtual servers, followed by a handful of monitors, then a plethora of whatever, the responses to iApps questions create all of the BIG-IP objects needed to properly run your application. The iApps application service becomes the responsible manager of all the virtual servers, monitors, policies, profiles, and iRules the application requires.
Consolidating these into a single view makes management and troubleshooting much easier to handle.

iApps Framework

iApps consist of two main elements: the template, and the application services created by publishing a template. We'll dive into this in our next article, Getting Started With iApps: Components.

Templates: The base configuration object, which contains the layout and scripting used to configure and publish application instances. Some templates are prebuilt and included in BIG-IP, while others can be downloaded from DevCentral (not officially supported) or F5 Support (certified and supported). Developer-oriented teams can also build custom templates for frequently used configurations or services.

Application Service: An application service is the result of using an iApps template to drive the configuration process. The administrator uses the configuration utility to create a new application service from the selected iApps template. Created objects are grouped into components of the application service and are managed accordingly.

The iApps Advantage

iApps are not for everyone. If you like keeping tribal control over your BIG-IP ecosystem, or if you like naming virtual servers after your pets, iApps may not be for you. iApps do have an advantage if you want to templatize your deployment scenarios or wish to allow other administrators access to the services they manage. iApps remove a lot of the mystique and intimidation a lengthy set of profiles, policies, and pools can sometimes cause for the new or intermediate administrator. Above we show an example of building a highly available LDAP namespace for internal applications with the default built-in LDAP iApps template. By providing a certificate and answering a few questions, an LDAP environment is created for all of your internal directory authentication or lookup requirements. From there, modifying the configuration is as easy as selecting the Reconfigure tab in the existing application service.
Changing settings within iApps

Sometimes you just want a template to assist with application deployment, and from there you're perfectly fine managing the individual object types. The Component view will show you all objects affected by the application service, but if you try to apply a change, you'll receive an error similar to: This is by design, because the iApps application service is the rightful owner of the system object, and the object shouldn't be edited directly. However, in certain cases where you don't need the iApp anymore or want more granular control of some features the iApp may not expose, there is an option. Each application service published via iApps has a Properties tab which allows you to disable the Strict Updates method of management. Unchecked, each object is configurable on its own, but will deviate from the template's last known state. Some administrators prefer to operate this way, only using the iApp as a deployment method, and that's perfectly fine. We're leaving your application management style and method up to you. As BIG-IP expands to cover more of the application landscape, people are increasingly taking advantage of more programmatic features, and iApps are no exception. Allowing our administrators to improve their ease of deployment and use is why iApps exist, and we'll continue to develop and improve these features. Our next article, Getting Started with iApps: Components, will dive into more detail on the properties required to create and manage iApps. Take the time to get to know iApps; they're your ally for keeping your applications in order.
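For the record, the same Strict Updates toggle can also be flipped from tmsh. A hedged sketch with a hypothetical service name (the exact syntax can vary slightly by version):

```
# Allow the application service's component objects to be edited directly
tmsh modify sys application service my_ldap.app/my_ldap strict-updates disabled
```

Re-enabling it later (strict-updates enabled) hands ownership of the component objects back to the iApp, but remember that any out-of-band edits you made may be overwritten on the next reconfigure.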