DevCentral Basics
What Is BIG-IP?
tl;dr - BIG-IP is a collection of hardware platforms and software solutions providing services focused on security, reliability, and performance.

F5's BIG-IP is a family of products covering software and hardware designed around application availability, access control, and security solutions. That's right, the BIG-IP name is interchangeable between F5's software and hardware application delivery controller and security products. This is different from BIG-IQ, a suite of management and orchestration tools, and F5 Silverline, F5's SaaS platform. When people refer to BIG-IP, they can mean a single software module in BIG-IP's software family or a hardware chassis sitting in your datacenter. This can cause a lot of confusion when people say they have a question about "BIG-IP," so we'll break it down here to clear things up.

BIG-IP Software

BIG-IP software products are licensed modules that run on top of F5's Traffic Management Operating System (TMOS). This custom, event-driven operating system is designed specifically to inspect network and application traffic and make real-time decisions based on the configurations you provide. The BIG-IP software can run on hardware or in virtualized environments. Virtualized systems provide BIG-IP software functionality where hardware implementations are unavailable, including public clouds and various managed infrastructures where rack space is a critical commodity.

BIG-IP Primary Software Modules

BIG-IP Local Traffic Manager (LTM) - Central to F5's full traffic proxy functionality, LTM provides the platform for creating virtual servers and performance, service, protocol, authentication, and security profiles to define and shape your application traffic. Most other modules in the BIG-IP family use LTM as a foundation for enhanced services.

BIG-IP DNS - Formerly Global Traffic Manager, BIG-IP DNS provides security and load balancing features similar to those LTM offers, but at a global/multi-site scale. BIG-IP DNS offers services to distribute and secure the DNS traffic advertising your application namespaces.

BIG-IP Access Policy Manager (APM) - Provides federation, SSO, application access policies, and secure web tunneling. Allow granular access to your various applications and virtualized desktop environments, or just go full VPN tunnel.

Secure Web Gateway Services (SWG) - Paired with APM, SWG enables access policy control for internet usage. You can allow, block, verify, and log traffic with APM's access policies, giving you flexibility around acceptable internet and public web application use. You know... contractors and interns shouldn't use Facebook, but you don't want to be the reason the CFO can't access their cat pics.

BIG-IP Application Security Manager (ASM) - This is F5's web application firewall (WAF) solution. Traditional firewalls and Layer 3 protection don't understand the complexities of many web applications. ASM allows you to tailor acceptable and expected application behavior on a per-application basis. Zero-day attacks, DoS, and click fraud all rely on traditional security devices' inability to protect unique application needs; ASM fills the gap between the traditional firewall and tailored, granular application protection.

BIG-IP Advanced Firewall Manager (AFM) - AFM is designed to reduce the hardware and extra hops required when ADCs are paired with traditional firewalls. Operating at L3/L4, AFM helps protect traffic destined for your data center.
Paired with ASM, you can implement protection services at L3-L7 for a full ADC and security solution in one box or virtual environment.

BIG-IP Hardware

BIG-IP hardware offers several types of purpose-built custom solutions, all designed in-house by our fantastic engineers; no white boxes here. BIG-IP hardware is offered via series releases, each offering improvements in performance and features determined by customer requirements. These may include increased port capacity, traffic throughput, CPU performance, FPGA feature functionality for hardware-based scalability, and virtualization capabilities. There are two primary variations of BIG-IP hardware: single-chassis appliances and modular VIPRION chassis. Each offers unique advantages for internal and colocated infrastructures. Updates in processor architecture, FPGA, and interface performance gains are common, so we recommend referring to F5's hardware page for more information.
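If you inherit an existing deployment and aren't sure which platform you're running on, the tmsh shell can tell you. A minimal sketch (output will vary by platform and software version):

    # Show the platform, chassis/appliance details, and installed hardware versions
    tmsh show sys hardware

    # List which software modules are provisioned on this system
    tmsh list sys provision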
The BIG-IP Application Security Manager Part 1: What is the ASM?

tl;dr - BIG-IP Application Security Manager (ASM) is a Layer 7 web application firewall (WAF) available on F5's BIG-IP platforms.

Introduction

This article series was written a while back, but we are re-introducing it as a part of our Security Month on DevCentral. I hope you enjoy all the features of this very powerful module on the BIG-IP! This is the first of a 10-part series on the BIG-IP ASM. This module is a very powerful and effective tool for defending your applications and your peace of mind, but what is it really? How do you configure it correctly and efficiently? How can you take advantage of all the features it has to offer? The purpose of this article series is to answer these fundamental questions. So, join me as we dive into this really cool technology called the BIG-IP ASM!

The Basics

The BIG-IP ASM is a Layer 7, ICSA-certified Web Application Firewall (WAF) that provides application security in traditional, virtual, and private cloud environments. It is built on TMOS...the universal product platform shared by all F5 BIG-IP products. It can run on any of the F5 Application Delivery Platforms...BIG-IP Virtual Edition, BIG-IP 2000 -> 11050, and all the VIPRION blades.

It protects your applications from a myriad of network attacks, including the OWASP Top 10 most critical web application security risks.
It adapts to constantly-changing applications in very dynamic network environments.
It can run standalone or integrated with other modules like BIG-IP LTM, BIG-IP DNS, BIG-IP APM, etc.

Why A Layer 7 Firewall?

Traditional network firewalls (Layer 3-4) do a great job preventing outsiders from accessing internal networks. But these firewalls offer little to no protection for application-layer traffic. As David Holmes points out in his article series on F5 firewalls, threat vectors today are being introduced at all layers of the network. For example, the Slowloris and HTTP Flood attacks are Layer 7 attacks...a traditional network firewall would never stop these attacks. But, nonetheless, your application would still go down if/when it gets hit by one of them. So, it's important to defend your network with more than just a traditional Layer 3-4 firewall. That's where the ASM comes in...

Some Key Features

The ASM comes pre-loaded with over 2,200 attack signatures. These signatures form the foundation for the intelligence used to allow or block network traffic. If these 2,200+ signatures don't quite do the job for you, never fear...you can also build your own user-defined signatures. And, as we all know, network threats are always changing, so the ASM is configured to download updated attack signatures on a regular basis.

Also, the ASM offers several different policy building features. Policy building can be difficult and time consuming, especially for sites that have a large number of pages. For example, DevCentral has over 55,000 pages...who wants to hand-write the policy for that?!? No one has that kind of time. Instead, you can let the system automatically build your policy based on what it learns from your application traffic, you can manually build a policy based on what you know about your traffic, or you can use external security scanning tools (WhiteHat Sentinel, QualysGuard, IBM AppScan, Cenzic Hailstorm, etc.) to build your policy. In addition, the ASM comes configured with pre-built policies for several popular applications (SharePoint, Exchange, Oracle Portal, Oracle Application, Lotus Domino, etc.).

Did you know?
The BIG-IP ASM was the first WAF to integrate with a scanner. WhiteHat approached all the WAFs and asked about the concept of building a security policy around known vulnerabilities in the apps. All the other WAFs said "no"...F5 said "of course!" and thus began the first WAF-scanner integration.

The ASM also utilizes geolocation and IP address intelligence to allow for more sophisticated and targeted defense measures. You can allow/block users from specific locations around the world, and you can block IP addresses that have built a bad reputation on other sites around the Internet. If they were doing bad things on some other site, why let them access yours?

The ASM is also built for Payment Card Industry Data Security Standard (PCI DSS) compliance. In fact, you can generate a real-time PCI compliance report at the click of a button! The ASM also comes loaded with the DataGuard feature that automatically blocks sensitive data (credit card numbers, SSNs, etc.) from being displayed in a browser.

In addition to the PCI reports, you can generate on-demand charts and graphs that show just about every detail of traffic statistics that you need. The following screenshot is a representative sample of some real traffic that I pulled off a site that uses the ASM. Pretty powerful stuff!

I could go on for days here...and I know you probably want me to, but I'll wrap it up for this first article. I hope you can see the value of the ASM both as a technical solution in the defense of your network and also as a critical asset in the long-term strategic vision of your company. So, if you already have an ASM and want to know more about it, or if you don't have one yet and want to see what you're missing, come on back for the next article where I will talk about the cool features of policy building.

The complete series:
What is the BIG-IP ASM?
Policy Building
The Importance of File Types, Parameters, and URLs
Attack Signatures
XML Security
IP Address Intelligence and Whitelisting
Geolocation
Data Guard
Username and Session Awareness Tracking
Event Logging
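Before moving on, a quick hedged illustration of the kind of request those attack signatures are built to catch. The hostname and parameter here are hypothetical; the point is that this is perfectly ordinary HTTPS traffic to a Layer 3-4 firewall, but an obvious SQL injection probe to a Layer 7 WAF:

    # An obvious SQL injection attempt in a query string ("' OR '1'='1", URL-encoded)
    curl -k "https://app.example.com/search?q=%27%20OR%20%271%27%3D%271"

    # With an ASM blocking policy in front of the application you would expect a block
    # response page; without it, the request reaches the application untouched.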
What is Load Balancing?

tl;dr - Load balancing is the process of distributing data across disparate services to provide redundancy, reliability, and improved performance.

The entire intent of load balancing is to create a system that virtualizes the "service" from the physical servers that actually run that service. A more basic definition is to balance the load across a bunch of physical servers and make those servers look like one great big server to the outside world. There are many reasons to do this, but the primary drivers can be summarized as "scalability," "high availability," and "predictability."

Scalability is the capability of dynamically, or easily, adapting to increased load without impacting existing performance. Service virtualization presented an interesting opportunity for scalability; if the service, or the point of user contact, was separated from the actual servers, scaling of the application would simply mean adding more servers or cloud resources, which would not be visible to the end user.

High Availability (HA) is the capability of a site to remain available and accessible even during the failure of one or more systems. Service virtualization also presented an opportunity for HA; if the point of user contact was separated from the actual servers, the failure of an individual server would not render the entire application unavailable.

Predictability is a little less clear, as it represents pieces of HA as well as some lessons learned along the way. However, predictability can best be described as the capability of having confidence and control in how and when services are being delivered with regard to availability, performance, and so on.

A Little Background

Back in the early days of the commercial Internet, many would-be dot-com millionaires discovered a serious problem in their plans. Mainframes didn't have web server software (not until the AS/400e, anyway) and even if they did, they couldn't afford them on their start-up budgets. What they could afford was standard, off-the-shelf server hardware from one of the ubiquitous PC manufacturers. The problem for most of them? There was no way that a single PC-based server was ever going to handle the amount of traffic their idea would generate, and if it went down, they were offline and out of business. Fortunately, some of those folks actually had plans to make their millions by solving that particular problem; thus was born the load balancing market.

In the Beginning, There Was DNS

Before there were any commercially available, purpose-built load balancing devices, there were many attempts to utilize existing technology to achieve the goals of scalability and HA. The most prevalent, and still used, technology was DNS round-robin. The domain name system (DNS) is the service that translates human-readable names (www.example.com) into machine-recognized IP addresses. DNS also provided a way in which each request for name resolution could be answered with multiple IP addresses in a different order.

Figure 1: Basic DNS response for redundancy

The first time a user requested resolution for www.example.com, the DNS server would hand back multiple addresses (one for each server that hosted the application) in order, say 1, 2, and 3. The next time, the DNS server would give back the same addresses, but this time as 2, 3, and 1.
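You can still see this behavior today with nothing more than dig. A hedged example with hypothetical records: three A records published for one name, with the answer order rotating between queries so new clients land on different servers.

    $ dig +short www.example.com
    203.0.113.1
    203.0.113.2
    203.0.113.3

    # Ask again and a round-robin-capable DNS server rotates the order,
    # spreading new clients across the three servers
    $ dig +short www.example.com
    203.0.113.2
    203.0.113.3
    203.0.113.1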
This solution was simple and provided the basic characteristics of what customers were looking for by distributing users sequentially across multiple physical machines, using the name as the virtualization point. From a scalability standpoint, this solution worked remarkably well; that is probably why derivatives of this method are still in use today, particularly for global load balancing, or the distribution of load to different service points around the world. As the service needed to grow, all the business owner needed to do was add a new server, include its IP address in the DNS records, and voila, increased capacity. One note, however, is that DNS responses have a maximum allowed length, so there is a potential to outgrow or scale beyond this solution.

This solution did little to improve HA. First off, DNS has no way of knowing whether the servers listed are actually working, so if a server became unavailable and a user tried to access it before the DNS administrators knew of the failure and removed it from the DNS list, they might get an IP address for a server that didn't work.

Proprietary Load Balancing in Software

One of the first purpose-built solutions to the load balancing problem was the development of load balancing capabilities built directly into the application software or the operating system (OS) of the application server. While there were as many different implementations as there were companies who developed them, most of the solutions revolved around basic network trickery. For example, one such solution had all of the servers in a cluster listen to a "cluster IP" in addition to their own physical IP address.

Figure 2: Proprietary cluster IP load balancing

When the user attempted to connect to the service, they connected to the cluster IP instead of to the physical IP of the server. Whichever server in the cluster responded to the connection request first would redirect them to a physical IP address (either its own or another system in the cluster) and the service session would start. One of the key benefits of this solution is that the application developers could use a variety of information to determine which physical IP address the client should connect to. For instance, they could have each server in the cluster maintain a count of how many sessions each clustered member was already servicing and direct any new requests to the least utilized server.

Initially, the scalability of this solution was readily apparent. All you had to do was build a new server, add it to the cluster, and you grew the capacity of your application. Over time, however, the scalability of application-based load balancing came into question. Because the clustered members needed to stay in constant contact with each other concerning who the next connection should go to, the network traffic between the clustered members increased exponentially with each new server added to the cluster. The scalability was great as long as you didn't need to exceed a small number of servers.

HA was dramatically increased with these solutions. However, since each iteration of intelligence-enabling HA characteristics had a corresponding server and network utilization impact, this also limited scalability. The other negative HA impact was in the realm of reliability.

Network-Based Load Balancing Hardware

The second iteration of purpose-built load balancing came about as network-based appliances.
These are the true founding fathers of today's Application Delivery Controllers. Because these boxes were application-neutral and resided outside of the application servers themselves, they could achieve their load balancing using much more straightforward network techniques. In essence, these devices would present a virtual server address to the outside world, and when users attempted to connect, they would forward the connection on to the most appropriate real server, doing bi-directional network address translation (NAT).

Figure 3: Load balancing with network-based hardware

The load balancer could control exactly which server received which connection and employed "health monitors" of increasing complexity to ensure that the application server (a real, physical server) was responding as needed; if not, it would automatically stop sending traffic to that server until it produced the desired response (indicating that the server was functioning properly). Although the health monitors were rarely as comprehensive as the ones built by the application developers themselves, the network-based hardware approach could provide at least basic load balancing services to nearly every application in a uniform, consistent manner, finally creating a truly virtualized service entry point unique to the application servers serving it.

Scalability with this solution was limited only by the throughput of the load balancing equipment and the networks attached to it. It was not uncommon for organizations replacing software-based load balancing with a hardware-based solution to see a dramatic drop in the utilization of their servers. HA was also dramatically reinforced with a hardware-based solution. Predictability was a core component added by the network-based load balancing hardware, since it was much easier to predict where a new connection would be directed and much easier to manipulate.

The advent of the network-based load balancer ushered in a whole new era in the architecture of applications. HA discussions that once revolved around "uptime" quickly became arguments about the meaning of "available" (if a user has to wait 30 seconds for a response, is it available? What about one minute?). This is the basis from which Application Delivery Controllers (ADCs) originated.

The ADC

Simply put, ADCs are what all good load balancers grew up to be. While ADC conversations rarely mention load balancing, without the capabilities of the network-based hardware load balancer, they would be unable to affect application delivery at all. Today, we talk about security, availability, and performance, but the underlying load balancing technology is critical to the execution of all.
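To make the building blocks concrete, here is a hedged sketch of the two objects at the heart of this discussion on a BIG-IP: a pool of real servers and the virtual server that fronts them. The names and addresses are purely illustrative:

    # A pool of two real web servers with a basic HTTP health monitor
    tmsh create ltm pool web_pool members add { 10.1.20.11:80 10.1.20.12:80 } monitor http

    # The virtual server: the one address and port clients actually connect to
    tmsh create ltm virtual vs_web destination 192.0.2.100:80 ip-protocol tcp profiles add { http } pool web_pool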
Next Steps

Ready to plunge into the next level of load balancing? Take a peek at these resources:
Go Beyond POLB (Plain Old Load Balancing)
The Cloud-Ready ADC
BIG-IP Virtual Edition Products, The Virtual ADCs Your Application Delivery Network Has Been Missing
Cloud Balancing: The Evolution of Global Server Load Balancing

What is BIG-IP APM?

tl;dr - BIG-IP APM provides granular access controls to discrete applications and networks, supporting 2FA and federated identity management.

Providing application access is a complicated process. You have distributed users, insecure clients, and unknown devices all vying for connectivity to your trusted applications. What's an admin to do in order to protect investments and still provide easy access anywhere? F5's BIG-IP Access Policy Manager (APM) provides multiple services to protect and manage access to your applications. APM is available on hardware, in the cloud, or as a virtual appliance, and provides access control wherever your applications live. APM offers:

Identity Federation and SSO - Creates a single point of policy-based access for cloud and on-premises/private applications, with MFA support.
Client and Web-based SSL VPN Access - Policy-based access to network VPN services through web plug-ins or clients on mobile and desktop operating systems.
Web Portal Access to Applications - Open web applications to users instead of opening up your network. Great for contractors and remote workers who don't need full VPN tunnels.
Desktop Application and VDI Support - Policy-based access to virtualized applications through a single, consolidated gateway, along with native VDI support and a customizable web portal.
Access Policy Deployment and Management Solutions - Using the visual policy editor, administrators create highly customizable security policies allowing granular control over application and network access.
Secure Web Gateway Proxy Services - Provides web-based malware protection and URL filtering through Secure Web Gateway Services.

Policy Access Made Easy (or complex if you want)

I said policy-based a lot, didn't I? Well, I repeat myself because it's an important part of access management. You want the right users accessing the right apps... right? The visual policy editor allows administrators granular control over who has what access to individual applications, instead of full network access. As an example, a basic SAML access policy can use Active Directory not only to handle the allowed authentication but also to query AD and determine whether the user is allowed access to the SaaS resources assigned to that policy.

BIG-IP APM also integrates with other F5 solutions to aid in application and user security:

BIG-IP Application Security Manager (ASM) - Adds web application firewall functionality, giving your application security visibility into who's using it (and whether they should be).
Secure Web Gateway (SWG) - Combined with APM, you can create access-controlled URL categorization. Combining APM with SWG allows for greater transparency and control over your users' browsing and application access.
BIG-IQ - Centralize your policy management, distribution, and access monitoring in one location. BIG-IQ becomes your window into your vast BIG-IP APM network.
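A quick, hedged way to confirm an access policy is actually in the path: make an unauthenticated request to a protected virtual server and look for APM steering you into its logon flow. The hostname is hypothetical and the exact responses depend on your policy, but you will typically see a redirect into the access policy (commonly /my.policy) and an APM session cookie (MRHSession) rather than the application itself:

    # Inspect only the response headers of an unauthenticated request
    curl -kI https://apps.example.com/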
BIG-IP APM offers a lot of flexibility for user access and security control, but don't just take my word for it. This article gives you a very general overview of what APM is and what it can do for you. Follow the links below to see real scenarios of APM in use and learn more about why access control and security is a good thing. And as always, if you have questions or comments, drop us a line!

On DevCentral:
Strong Authentication
Two-Factor Authentication - Remote Desktop Gateway
Configuration Examples: BIG-IP APM as SAML IdP for AWS
Two-Factor Authentication: Captive Portal

On F5.com:
Getting Started with BIG-IP Access Policy Manager (APM)
What is a Proxy?

The term "proxy" is a contraction that comes from the Middle English word procuracy, a legal term meaning to act on behalf of another. You may have heard of a proxy vote, where you submit your choice and someone else casts the ballot on your behalf.

In networking and web traffic, a proxy is a device or server that acts on behalf of other devices. It sits between two entities and performs a service. Proxies are hardware or software solutions that sit between the client and the server and do something to requests and sometimes responses.

The first kind of proxy we'll discuss is a half-proxy. With a half-proxy, a client will connect to the proxy and the proxy will establish the session with the servers. The proxy will then respond back to the client with the information. After that initial connection is set up, the rest of the traffic will go right through the proxy to the back-end resources. The proxy may do things like L4 port switching, routing, or NATing, but at this point it is not doing anything intelligent other than passing traffic. Basically, the half-proxy sets up a call and then the client and server do their thing. Half-proxies are also good for Direct Server Return (DSR). For streaming protocols, you'll have the initial setup, but instead of going through the proxy for the rest of the connection, the server will bypass the proxy and go straight to the client. This is so you don't waste resources on the proxy for something that can be done directly server to client.

A full proxy, on the other hand, handles all the traffic. A full proxy creates a client connection along with a separate server connection, with a little gap in the middle. The client connects to the proxy on one end and the proxy establishes a separate, independent connection to the server. This happens bi-directionally on both sides. There is never any blending of connections from the client side to the server side; the connections are independent. This is what we mean when we say BIG-IP is a full proxy architecture.

The full proxy intelligence is in that gap. With a half-proxy, you mostly work on client-side traffic on the way in during a request and then let it do what it needs...with a full proxy you can manipulate, inspect, drop, and do whatever you need to the traffic on both sides and in both directions. Whether a request or a response, you can manipulate traffic on the client-side request, the server-side request, the server-side response, or the client-side response. You get a lot more power with a full proxy than you would with a half-proxy.

With BIG-IP (a full proxy) on the server side, it can be used as a reverse proxy. When clients make a request from the internet, they terminate on the reverse proxy sitting in front of application servers. Reverse proxies are good for traditional load balancing, optimization, SSL offloading, server-side caching, and security functionality. If you know certain clients or IP spaces are acceptable, you can whitelist them. Same with known malicious sources or bad ranges/clients: you can blacklist them. You can do it at the IP layer (L4) or you can go up the stack to Layer 7 and control an http/s request. Or add a BIG-IP ASM policy on there. As it inspects the protocol traffic, if it sees some anomaly that is not native to the application, like a SQL injection, you can block it.

On the client side, BIG-IP can also be a forward proxy. In this case, the client connects to the BIG-IP on an outbound request and the proxy acts on behalf of the client to the outside world.
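As a hedged illustration of the client side of that picture, this is what an explicit forward proxy looks like from a workstation. The proxy address and port are hypothetical, and note that forward proxy deployments can also be transparent, in which case the client needs no configuration at all:

    # Send an outbound request through an explicit forward proxy instead of directly to the site
    curl -x http://proxy.internal.example.com:3128 https://www.example.com/

    # Most operating systems and browsers can point at the same proxy globally,
    # for example via environment variables on Linux
    export https_proxy=http://proxy.internal.example.com:3128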
A forward proxy is perfect for things like client-side caching (grabbing a video and storing it locally), filtering (blocking certain time-wasting sites or malicious content), and privacy (masking internal resources), along with security. You can also have a services layer, like an ICAP server, where you can pass traffic to an inspection engine prior to hitting the internet. You can manipulate client-side traffic out to the internet, server-side traffic in from the internet, handle it locally on the platform, or pass it off to a third-party services entity. A full proxy is your friend in an application delivery environment.

If you'd like to learn more about proxies, check out the resources below, including the Lightboard Lesson: What is a Proxy?

ps

Related:
Lightboard Lessons: What is a Proxy?
Encrypted malware vs. F5's full proxy architecture
The Concise Guide to Proxies
The Full-Proxy Data Center Architecture
Three things your proxy can't do unless it's a full-proxy
Back to Basics: The Many Modes of Proxies
What is iCall?

tl;dr - iCall is BIG-IP's event-based, granular automation system that enables comprehensive control over configuration and other system settings and objects.

The main programmability points of entrance for BIG-IP are the data plane, the control plane, and the management plane. My bare-bones description of the three:

Data Plane - Client/server traffic on the wire and flowing through devices
Control Plane - Tactical control of local system resources
Management Plane - Strategic control of distributed system resources

You might think iControl (our SOAP and REST API interface) fits the description of both the control and management planes, and whereas you'd be technically correct, iControl is better utilized as an external service in management or orchestration tools. The beauty of iCall is that it's not an API at all: it's lightweight, it's built-in via tmsh, and it integrates seamlessly with the data plane where necessary (via iStats). It is what we like to call control plane scripting.

Do you remember relations and set theory from your early pre-algebra days? I thought so! Let me break it down in a helpful way:

P = {(data plane, iRules), (control plane, iCall), (management plane, iControl)}

iCall allows you to react dynamically to an event at a system level in real time. It can be as simple as generating a qkview in the event of a failover or executing a tcpdump on a server with too many failed logins. One use case I've considered from an operations perspective is, in the event of a core dump, to have iCall generate a qkview, take checksums of the qkview and the dump file, upload the qkview and generate a support case via the iHealth API, upload the core dumps to support via ftp with the case ID generated from iHealth, then notify the ops team with all the appropriate details. If I had a solution like that back in my customer days, it would have saved me 45 minutes easy each time this happened!

iCall Components

There are three components to iCall: events, handlers, and scripts.

Events

An event is really what drives the primary reason to use iCall over iControl. A local system event (whether it's a failover, excessive interface or application errors, or too many failed logins) would ordinarily just be logged or, from a system perspective, ignored altogether. But with iCall, events can be configured to force an action. At a high level, an event is "the message": some named object that has context (key/value pairs), scope (pool, virtual, etc.), origin (daemon, iRules), and a timestamp. Events occur when specific, configurable, pre-defined conditions are met.

Example (placed in /config/user_alert.conf):

    alert local-http-10-2-80-1-80-DOWN "Pool /Common/my_pool member /Common/10.2.80.1:80 monitor status down" {
        exec command="tmsh generate sys icall event tcpdump context { { name ip value 10.2.80.1 } { name port value 80 } { name vlan value internal } { name count value 20 } }"
    }

Handlers

Within the iCall system, there are three types of handlers: triggered, periodic, and perpetual.

Triggered

A triggered handler is used to listen for and react to an event.

Example (goes with the event example from above):

    sys icall handler triggered tcpdump {
        script tcpdump
        subscriptions {
            tcpdump {
                event-name tcpdump
            }
        }
    }

Periodic

A periodic handler is used to react to an interval timer.

Example:

    sys icall handler periodic poolcheck {
        first-occurrence 2017-07-14:11:00:00
        interval 60
        script poolcheck
    }

Perpetual

A perpetual handler is used under the control of a daemon.
Example:

    sys icall handler perpetual core_restart_watch {
        script core_restart_watch
    }

Scripts

And finally, we have the script! This is simply a tmsh script moved under the /sys icall area of the configuration that will "do stuff" in response to the handlers.

Example (continuing the tcpdump event and triggered handler from above):

    modify script tcpdump {
        app-service none
        definition {
            set date [clock format [clock seconds] -format "%Y%m%d%H%M%S"]
            foreach var { ip port count vlan } {
                set $var $EVENT::context($var)
            }
            exec tcpdump -ni $vlan -s0 -w /var/tmp/${ip}_${port}-${date}.pcap -c $count host $ip and port $port
        }
        description none
        events none
    }

Resources:
iCall Codeshare
Lightboard Lessons on iCall
Threshold violation article highlighting periodic handler
Troubleshooting BIG-IP - The Basics

Architecture 101 doesn't recommend going live with every feature and complicated requirement enabled at launch, and neither should your BIG-IP configuration. Yet in reviewing countless Q&A threads and support cases, a lot of basic steps turn out to be overlooked. You may be so focused on tricking out that sweet iRule that you forgot to enable a client SSL profile, or something as simple as forgetting to assign a SNAT pool. It happens to all of us, and the best way to fix these problems is to reduce the complexity and start with the basics!

Know Your Lingo

Just like every other vendor, BIG-IP does have some terminology unfamiliar to people outside of the network-speaking world. Once you have the common terms locked down, everything else will fall into place. Here are some of the terms used in this article that are handy to remember.

VIP - When we refer to a VIP, we're referring to the virtual IP assigned to a virtual server. Often we'll use the term VIP interchangeably when referring to the virtual server. The VIP is an object within BIG-IP that listens for address and service requests. A client sends traffic to the VIP, which routes it according to the virtual server's configuration.
Node - The node is the server and service assigned to receive traffic from a virtual IP/server. You will usually have more than one node defined to receive traffic from behind a virtual server.
Pool - The virtual server will have a pool defined to send traffic to. Server nodes are assigned to one or more pools, and the pool defines how to balance the traffic between them.
ADC - BIG-IP is an Application Delivery Controller. Load balancing, SSL offloading, compression, acceleration, and traffic management are all features that define how an application delivery controller operates.
SNAT - SNAT, or secure network address translation, translates the source IP address within a connection to a BIG-IP system IP address that you define. The destination node then uses that new source address as its destination address when responding to the request. SNAT ensures server nodes always send traffic back through the BIG-IP system. There are always one-off cases where you don't want this, but SNAT is your friend.

Test Offline

Hopefully you can test out your full application stack prior to going live. There are those times, though, when a go-live scenario is an application release nightmare and you're pushing out features left and right following cutover. That's no fun and it will make troubleshooting worse. If you have disaster recovery scenarios in place, you SHOULD have a redundant environment or something resembling one. You can test and troubleshoot against this offline "data center" or whatever you have running, so you're not causing constant resets to your live application. If you have n+1 redundant application stacks (in production or other environment levels), test against the one with the least traffic (no traffic is preferred). Some people run backup procedures against offline data centers, which is great if you're not troubleshooting a problem. Additional traffic will muddle the waters, especially if you're running vague tcpdumps.

Don't test half of the application stack. Are you testing via IP only instead of using DNS to resolve the application FQDN? Is the database in your offline instance synched? Make sure you're testing the full stack regardless of it being offline or not. If you had a DNS issue and were only using the IP, you'd never duplicate the problem.
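One hedged trick for exercising the full stack even while you are deliberately pointing at a specific address (hostnames and IPs below are placeholders):

    # Confirm what the name actually resolves to
    dig +short www.example.com

    # Test one specific VIP or server while still sending the real Host header and SNI,
    # so name-based virtual hosts and certificates behave as they would in production
    curl -v --resolve www.example.com:443:203.0.113.10 https://www.example.com/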
Whatever your offline instance is (test, stage, production, development), be wary of variables that will skew troubleshooting results; at best, note them down for later inspection if needed. Even offline, applications can be chatty if integrated with other systems: federation, other data integration systems, directory syncs. If possible, temporarily suspend these external influences. It could be as simple as pausing a script, or it could be suspending OLAP cube generation within SQL. Noise always introduces variation. Be aware of these outside influences and inspect accordingly.

Remember the Core Concepts

There are two core needs for any ADC to operate properly, and these need to work prior to dissecting your application. An ADC has to operate properly on your network and be able to speak to the client and server networks. These can be the same network and you're simply hair-pinning your ADC traffic, or you have segregated networking needs. Separate interfaces, properly configured trunks, tagged VLANs... you know the drill. Trying to figure out why your application doesn't work is going to take a long time if the BIG-IP stack isn't able to talk to your server network. Part two of this is remembering the core concepts of a virtual IP. You need a valid IP, you need a pool, you need a node, and you need a port to listen on. These things do get overlooked, so if you're surprised, don't be. It happens.

System Requirements

Can you reach the BIG-IP from the client network you're testing on? An admin can slide in a firewall change affecting application A and inadvertently break access to application B, C, D... Making sure you can reach your BIG-IP from all required networks is sometimes a good thing to check. Believe me, this is an issue more than we like to admit.

Is BIG-IP accepting and distributing traffic properly? If you're building your first application, this is a normal step. If this is your 30th application, you assume BIG-IP is behaving properly. There are cases where you'll need to step back and make sure BIG-IP is receiving traffic on listening interfaces and attempting to distribute traffic to your nodes. You can check BIG-IP statistics for some basic sanity checks, but it always helps to run a tcpdump or spin up Wireshark just to give you that warm and fuzzy feeling of self-assurance.

Virtual IP Requirements

Is your VIP on a valid client network and listening? It's easy to build a VIP for network X but select network Y for the VLAN and Tunnel Traffic options. Port scan from a client to validate!

Does your VIP have a valid pool and active pool members ready to receive client traffic? A surprising number of support calls are resolved because the admin, in haste, just threw on a tcp or tcp_half_open monitor to get the node available in the pool, and the service behind the required port was actually down. If you're hurrying the basics, you're going to have a bad time! Make sure those nodes are up, listening on proper monitors, and available for use in your intended application pool.

Is your traffic going to BIG-IP but you're not seeing anything come back? Are you running asymmetric routing? If so, did you remember to SNAT? A very common issue is misunderstanding when SNAT is needed. Many times we'll have a developer or admin state "but I need to see the source IP of the client traffic"... that's a separate problem; if the application isn't working, you're not going to see anything at all. Either SNAT your traffic or make BIG-IP your outbound application gateway. SNAT is not a discussion, it's a way of life!
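If SNAT turns out to be the missing piece, the most common quick fix is SNAT automap on the virtual server. A hedged sketch with a hypothetical virtual server name:

    # Translate client source addresses to a self IP so return traffic always comes back through BIG-IP
    tmsh modify ltm virtual vs_myapp source-address-translation { type automap }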
Read up heavily on this, hopefully PRIOR to implementation, but if you don't, you'll just have some additional clean-up down the road. Leave that for the intern.

Reduce Complexity

Overly complex installations require a lot of troubleshooting if something goes wrong during go-live. This is often why people do cutovers through staged releases; they're releasing smaller changes that can be easily managed. When a problem arises, it's very helpful to isolate the issue quickly, and reduced complexity, or starting with simple problem solving, is your best bet. Remember that firewall admin who slipped in an ACL change that broke your application? If you didn't start with the basics, you'd still be checking certificate dates, http profiles, and iRule syntax before you had the epiphany to see if ANY traffic was reaching your BIG-IP.

As in testing offline, if possible use an offline datacenter. This lowers the traffic significantly and can make tcpdumps quite manageable.

Disable all but one node. Reducing the client traffic to a single server node eases traffic inspection by an order of magnitude. If the problem you're solving is isolated to a single back-end server, this can also speed up the isolation process.

Drop out of SSL and go unencrypted. If you're having "weird" issues, are they reproducible with non-SSL traffic? This may not always be as easy as it sounds, but being able to determine whether encryption or security is playing a problematic role can speed up troubleshooting significantly.

Does the application work without BIG-IP involved? Sounds silly, but it's a valid question and where you should generally start. Make sure the application responds with basic functionality, because ADC stacks, for all their value, do add complexity to your environment. Being able to segregate the two for sanity checks is sometimes a good idea. Your vendor may also force you to do this if you call them with an application question. Or lie to them. That's cool too.

Additional Tools for Diagnosing Problems

I've run many applications behind BIG-IP and my toolset has remained mostly unchanged, mostly. Sometimes I start a little too deep for basic troubleshooting by diving into a packet capture from the get-go, but I've used Wireshark enough that it's second nature now. As an application owner, all of your tools for problem diagnosis should be second nature too.

Wireshark - I have to say I started out with Bloodhound (the Microsoft internal network monitor tool) way back in the NT 3.51 days. But when Wireshark released, it was a game changer. Being able to easily reassemble VoIP traffic into a listenable wav file to illustrate to a customer the jitter analysis in the tcp dump was amazing. Nowadays, there are plenty of players in the packet capture/analysis game, but there's a reason Wireshark is a verb; it's the standard... and we have an F5 Wireshark plugin for it too.

tcpdump/ssldump - Knowing how to run tcpdump and ssldump on your BIG-IP is a requirement when contacting support, so you might as well learn it. It'll end up coming in handy down the road when you also need to run ring dumps from a server looking for problematic traffic.

Nmap - Install it everywhere. It's available for every operating system so there's no reason not to have it installed. Quickly analyze system availability and determine if the application's even listening to your requests. Nmap can do a lot more, but as an advanced port scanner, it's all you'll ever need.
Openssl - It's good to run OpenSSL for many reasons, from certificate analysis and CSR creation to running your own CA for testing. The bonus for troubleshooting is the s_client SSL/TLS program. Connect and see what happens behind the SSL/TLS negotiation without needing a packet capture. Security professionals, network admins, and application owners rely on OpenSSL's s_client to validate their TLS configurations.

Curl - The website is down. Is it? Or is it your browser's inability to pass traffic due to the 30 extensions you have running? Curl is your site's sanity check to see exactly what's loading. It's quick and painless and can answer several initial troubleshooting questions right off the bat. And it does TLS, so you can even overlap your OpenSSL s_client tests if you need.

HttpWatch or Fiddler - These are the real winners here when troubleshooting an application response, especially when you don't own the entire application stack. Each has its strengths and weaknesses, but between the two you can diagnose almost any web application issue quickly. Is the web site responding? Are you receiving the correct certificate? Is the data loading after CSS? What's that weird 3rd party script running? All can be answered with either of these tools.
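To make that list a little more concrete, here are a few hedged sample invocations (hostnames and addresses are placeholders; adjust ports and options to your environment):

    # Is anything even listening on the service ports?
    nmap -p 80,443 app.example.com

    # Which certificate and TLS parameters is the virtual server actually presenting?
    openssl s_client -connect app.example.com:443 -servername app.example.com < /dev/null

    # What does the raw HTTP exchange look like without a browser in the way?
    curl -vk https://app.example.com/

    # On the BIG-IP itself, watch traffic for a single client on the relevant VLAN
    tcpdump -ni internal host 10.1.20.50 -c 50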
All of these recommendations were written up based on real support calls made by competent administrators who are new to BIG-IP or new to their role as application administrator. If you're a developer and are new to BIG-IP, welcome, and don't feel bad; we all started out making the exact same mistakes. Practice makes perfect, so dive into your BIG-IP environment or purchase a BIG-IP Developer Lab License for yourself just to play around with. Hopefully you're feeling only slightly frustrated, but just remember to break down your problems and take them one at a time. It's a good life lesson, but it's also how you're going to fix your BIG-IP too.

What is BIG-IQ?

tl;dr - BIG-IQ centralizes management, licensing, monitoring, and analytics for your dispersed BIG-IP infrastructure.

If you have more than a few F5 BIG-IPs within your organization, managing devices as separate entities will become an administrative bottleneck and slow application deployments. When deploying cloud applications, you're potentially managing thousands of systems, and having to deal with traditionally monolithic administrative functions is a simple no-go. Enter BIG-IQ.

BIG-IQ enables administrators to centrally manage BIG-IP infrastructure across the IT landscape. BIG-IQ discovers, tracks, manages, and monitors physical and virtual BIG-IP devices - in the cloud, on premises, or co-located at your preferred datacenter. BIG-IQ is a stand-alone product available from F5 partners, or through the AWS Marketplace. BIG-IQ consolidates common management requirements including, but not limited to:

Device discovery and monitoring: Discover, track, and monitor BIG-IP devices, including key metrics such as CPU/memory, disk usage, and availability status.
Centralized Software Upgrades: Centrally manage BIG-IP upgrades (TMOS v10.20 and up) by uploading the release images to BIG-IQ and orchestrating the process for managed BIG-IPs.
License Management: Manage BIG-IP Virtual Edition licenses, granting and revoking them as you spin up/down resources. You can create license pools for applications or tenants for provisioning.
BIG-IP Configuration Backup/Restore: Use BIG-IQ as a central repository of BIG-IP config files through ad-hoc or scheduled processes. Archive configs to long-term storage via automated SFTP/SCP.
BIG-IP Device Cluster Support: Monitor high availability statuses and BIG-IP device clusters.
Integration to F5 iHealth Support Features: Upload and read detailed health reports of your BIG-IPs under management.
Change Management: Evaluate, stage, and deploy configuration changes to BIG-IP. Create snapshots and config restore points and audit historical changes so you know who to blame. 😉
Certificate Management: Deploy, renew, or change SSL certs. Alerts allow you to plan ahead before certificates expire.
Role-Based Access Control (RBAC): BIG-IQ controls access to its managed services with role-based access controls (RBAC). You can create granular controls to view, edit, and deploy provisioned services. Prebuilt roles within BIG-IQ easily allow multiple IT disciplines access to the areas of expertise they need without over-provisioning permissions.

Fig. 1 BIG-IQ 5.2 - Device Health Management

BIG-IQ centralizes statistics and analytics visibility, extending BIG-IP's AVR engine. BIG-IQ collects and aggregates statistics from BIG-IP devices, locally and in the cloud. View metrics such as transactions per second, client latency, and response throughput. You can create RBAC roles so security teams have private access to view DDoS attack mitigations, firewall rules triggered, or WebSafe and MobileSafe management dashboards. The reporting extends across all modules BIG-IQ manages, delivering the single-pane-of-glass view we all appreciate from management applications.

For further reading on BIG-IQ please check out the following links:
BIG-IQ Centralized Management @ F5.com
Getting Started with BIG-IQ @ F5 University
DevCentral BIG-IQ
BIG-IQ @ Amazon Marketplace
What is BIG-IP DNS?

tl;dr - BIG-IP DNS provides global server load balancing (GSLB), DNS services, and basic DDoS protection features.

By now we all understand the concepts behind load balancing: creating a virtual access point to distribute traffic across multiple resources. Keeping that idea in mind, the next question is: how do we advertise our application as available across separate data centers? BIG-IP DNS (formerly Global Traffic Manager, or GTM) is first and foremost a global load balancer for DNS queries. Using load balancing algorithms similar to those behind the decisions made by BIG-IP Local Traffic Manager (LTM), BIG-IP DNS routes your DNS traffic to the best-suited datacenter, whether on premises, co-located, or in your preferred cloud provider. BIG-IP DNS also provides DNS resolution services, including caching and traffic throttling, to ensure queries made about your applications are always answered, and answered fast.

Vocabulary

To understand BIG-IP DNS, we'll first define a few product terms.

Wide IP - Owns your service's FQDN and responds to listener requests. The wide IP contains one or more pools, which in turn contain one or more virtual servers.
Server - In this case, the server defined in BIG-IP DNS is either a BIG-IP or other 3rd party system responsible for owning one or more virtual server services.
GSLB - Global Server Load Balancing. The GSLB section within the BIG-IP DNS configuration is the core of intelligent DNS resolution services.
Listener - BIG-IP uses TCP/UDP listeners to respond to DNS queries.
Pool - In BIG-IP DNS, a pool contains one or more virtual servers.

How BIG-IP DNS Works

BIG-IP DNS has grown over the years to incorporate many new features, but we'll stick to discussing the core global server load balancing (GSLB) functionality. Let's first take a look at a traditional DNS query (we're assuming nothing has the answer cached):

1. The client queries www.example.com against its local DNS (LDNS).
2. The LDNS queries the root servers.
3. The root servers refer the query to the .com TLD servers.
4. The TLD servers provide the name server IPs for example.com to the LDNS (glue records if you got 'em).
5. The example.com name servers look up the www entry and answer the LDNS request.
6. The LDNS returns the IP for www.example.com to the client.
7. The client is now browsing.

BIG-IP DNS enters the picture at step 5 and adds a few extra steps:

1. A BIG-IP DNS listener receives the query for www.example.com.
2. The wide IP associated with www.example.com makes a load balancing decision on which pool to send the request to.
3. The chosen pool makes a secondary load balancing decision on which virtual server to send the request to.
4. The virtual server IP is returned to the originating LDNS server.
5. The client is happier because they were routed to a regionally located server with faster response times.

In this scenario, BIG-IP DNS provided a faster application experience by determining the region where the user resides and returning the fastest-performing server's IP for the requested FQDN. BIG-IP DNS provides more features beyond GSLB, including accelerating DNS resolution and acting as a fast secondary DNS server.
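To see which answer a given client (or its LDNS) actually receives, you can query a BIG-IP DNS listener directly. A hedged example with placeholder addresses; the answer should be a virtual server IP in whichever datacenter the load balancing decision selected:

    dig @192.0.2.53 www.example.com +short
    203.0.113.100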
Below you can learn more about BIG-IP DNS, and as always, if you have questions or comments please let us know.

DevCentral Basics - What is DNS?
Lightboard Lessons: BIG-IP DNS Load Balancing Intro
Lightboard Lessons: DNS Scalability & Security
Getting Started with BIG-IP DNS (formerly GTM) @ F5 University

What are F5 Access and BIG-IP Edge Clients?

tl;dr - F5 Access and BIG-IP Edge are VPN clients that connect to APM access policies for L3 network connectivity.

Building on the DevCentral Basics article What is BIG-IP APM, a few questions remain. How do mobile clients access web application resources? How can I easily turn VPN connectivity on and off? The questions distill down to connectivity options for clients connecting to BIG-IP APM infrastructure. Users are otherwise limited to web client connectivity, which may not always be a preferred or allowed option. F5 provides several client-based options for connectivity to BIG-IP APM.

F5 Access

When used in conjunction with BIG-IP APM access policies, F5 Access provides traditional L3 VPN connectivity to your corporate resources. F5 Access is supported on Windows 10, Windows 10 Mobile, iOS, and Android. Currently the client features do not have parity across the different operating systems, for various reasons. For a complete supported version matrix please see the F5 Apps Compatibility Matrix.

F5 Edge Client

As of version 3.0, the F5 Edge Client is renamed to F5 Access (described above). Prior to version 3.0, F5's Edge Client was the preferred client solution for L3 VPN access. This client is still supported through BIG-IP version 13 but will eventually be deprecated as the F5 Access client matures toward full feature parity.

F5 Edge Portal

We previously discussed BIG-IP APM's web portal gateway, allowing policy-based, granular access to web applications directly instead of requiring a full VPN. The F5 Edge Portal offers a client version of the web portal for easier mobile access to web portal applications. The F5 Edge Portal will not continue support into iOS 11 or Android 8. Please see the EOL plans: F5 BIG-IP Edge Portal - End of Support and End of Availability Announcement.

As the BIG-IP APM product evolves and customer security requirements and requests change, we'll keep updating our client functionality to anticipate those requirements. The F5 Access client is the future of BIG-IP client connectivity for those who don't wish to use the web client offered with BIG-IP APM. We'll keep you updated here and through AskF5, our authoritative support resource. Please check out the links below for more information on F5 client functionality, supportability, and how to configure your client access policies. As always, please let us know any content you would like to see expanded. Happy Networking!

On DevCentral:
F5 BIG-IP Edge Portal - End of Support and End of Availability Announcement
F5 Access for Your Chromebook
F5 Access for Windows 10/Windows 10 Mobile Now Available

On F5.com:
F5 Apps Compatibility Matrix
F5 Access & BIG-IP Edge Apps Documentation
University.f5.com - Search for F5 Access AND/OR Edge (requires F5 Support login)