What is BIG-IP Next?
BIG-IP Next LTM and BIG-IP Next WAF hit general availability back in October, and we hit the road for a tour around North America for its arrival party! Those who attended one of our F5 Academy sessions got a deep-dive presentation into BIG-IP Next conceptually, and then a lab session to work through migrating workloads and deploying them. I got to attend four of the events and discuss with so many fantastic community members what's old, what's new, what's borrowed, what's blue...no wait--this is no wedding! But for those of us who've been around the block with BIG-IP for a while, if not married to the tech, we definitely have a relationship with it, for better and worse, right? And that's earned. So any time something new, or in our case "Next," comes around, there's risk and fear involved personally. But don't fret. Seriously. It's going to be different in a lot of ways, but it's going to be great. And there are a crap-ton (thank you Mark Rober!) of improvements that once we all make it through the early stages, we'll embrace and wonder why we were even scared in the first place. So with all that said, will you come on the journey with me? In this first of many articles to come from me this year, I'll cover the high-level basics of what is so next about BIG-IP Next, and in future entries we'll be digging into the tech and learning together.

BIG-IP and BIG-IP Next Conceptually - A Comparison

BIG-IP has been around since before the turn of the century (which is almost old enough to rent a car here in the United States) and this year marks the 20-year anniversary of TMOS. That the traffic management microkernel (TMM) is still grokking like a boss all these years later is a testament to that early innovation! So whereas TMOS as a system is winding down, its heart, TMM, will go on (cue sappy Celine Dion ditty in 3, 2, 1...). Let's take a look at what was and what is. With TMOS, the data plane and control plane compete for resources as it's one big system. With BIG-IP Next, the separation of duties is more explicit and intentionally designed to scale on the control plane. Also, the product modules are no longer either completely integrated in TMM or plugins to TMM, but rather, isolated to their own container structures. The image above might convey the idea that LTM or WAF or any of the other modules are single containers, but that's just shown that way for brevity. Each module is an array of containers. But don't let that scare you. The underlying kubernetes architecture is an abstraction that you may--but certainly are not required to--care about. TMM continues to be its awesome TMM self.

The significant change operationally is how you interact with BIG-IP. With TMOS, historically you engage directly with each device, even if you have some other tools like BIG-IQ or third-party administration/automation platforms. With BIG-IP Next, everything is centralized on Central Manager, and the BIG-IP Next instances, whether they are running on rSeries, VELOS, or Virtual Edition, are just destinations for your workloads. In fact, outside of sidecar proxies for troubleshooting, instance logins won't even be supported! Yes, this is a paradigm shift. With BIG-IP Next, you will no longer be configuration-object focused. You will be application-focused. You'll still have the nerd-knobs to tweak and turn, but they'll be done within the context of an application declaration.
If you haven't started your automation journey yet, you might not be familiar with AS3. It's been out now for years and works with BIG-IP to deploy applications declaratively. Instead of following a long pre-flight checklist with 87 steps to go from nothing to a working application, you simply define the parameters of your application in a blob of JSON data and click the easy button. For BIG-IP Next, this is the way. Now, in the Central Manager GUI, you might interact with FAST templates that deliver a more traditional view into configuring applications, but the underlying configuration engine is all AS3. For more, I hosted a series of streams in December to introduce AS3 Foundations; I highly recommend you take the time to digest the basics.
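To make that concrete, here's a minimal sketch of what such a declaration might look like. The tenant, application, and address values below are invented for illustration; real declarations are typically much richer:

```
# Save a minimal AS3 declaration to a file (all names/addresses are hypothetical):
# one tenant, one application, one HTTP virtual server with a two-member pool.
cat > myapp.as3.json <<'EOF'
{
  "class": "ADC",
  "schemaVersion": "3.0.0",
  "id": "myapp-example",
  "Example_Tenant": {
    "class": "Tenant",
    "Example_App": {
      "class": "Application",
      "Example_Service": {
        "class": "Service_HTTP",
        "virtualAddresses": ["192.0.2.10"],
        "virtualPort": 80,
        "pool": "Example_Pool"
      },
      "Example_Pool": {
        "class": "Pool",
        "members": [
          {
            "servicePort": 80,
            "serverAddresses": ["198.51.100.10", "198.51.100.11"]
          }
        ]
      }
    }
  }
}
EOF
```

That one blob stands in for the entire node/pool/profile/virtual-server checklist: everything about the application lives in a single reviewable document.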
Benefits I'm Excited About

There are many and you can read about them on the product page on F5.com. But here's my short list:

API-first. Period. BIG-IP had APIs with iControl from the era before APIs were even cool, but they were not first-class citizens. The resulting performance at scale requires effort to manage effectively. Not only performance, but feature parity among iControl REST, iControl SOAP, tmsh, and the GUI has been a challenge because of the way development occurred over time. Not so with BIG-IP Next. Everything is API-first, so all tooling is able to consume everything. This is huge!

Migration assistance. Central Manager has the JOURNEYS tool on steroids built in to the experience. Upload your UCS, evaluate your applications to see what can be migrated without updates, and deploy! It really is that easy. Sure, there's work to be done for applications that aren't fully compatible yet, but it's a great start. You can do this piece (and I recommend that you do) before you even think about deploying a single instance, just to learn what work you have ahead of you and what solutions you might need to adapt to be ready.

Simplified patch/upgrade process. If you know, you know...patches are upgrades with BIG-IP, and not in place at that. This is drastically improved with BIG-IP Next! Because of the containerized nature of the system, individual containers can be targeted for patching, and depending on the container, may not even require a downtime consideration.

Release cycle. A more frequent release cadence might terrify the customers among us who like to space out their upgrades to once every three years or so, but for the rest of us, feature delivery to the tune of weeks instead of twice per year is an exciting development (pun intended!)

Features I'm Excited About

Versioning for iRules and policies. For those of us who write/manage these things, this is huge! Typically I'd version by including it in the title, and I know some who set release tags in repos. With Central Manager, it's built in and you can deploy iRules and policies by version and do diffs in place. I'm super excited about this!

Did I mention the API? On the API front...it's one API, for all functionality. No digging and scraping through the GUI, tmsh, iControl REST, iControl SOAP, or building out a node.js app to deploy a custom API endpoint with iControl LX, if even possible with some of the modules like APM or ASM. Nope, it's all there in one API. Glorious.

Centralized dashboards. This one is for the Ops teams! Who among us has spent many a day building custom dashboards to consume stats from BIG-IPs across your org to have a single pane of glass to manage? I for one, and I'm thrilled to see system, application, and security data centralized for analysis and alerting.

Log/metric streaming. And finally, logs and metrics! Telemetry Streaming from the F5 Automation Toolchain doesn't come forward in BIG-IP Next, but the ideas behind it do. If you need your data elsewhere from Central Manager, you can set up remote logging with OpenTelemetry (see the link in the resources listed below for a first published example of this.)

There are some great features coming with DNS, Access, and all the other modules when they are released as well. I'll cover those when they hit general availability.

Let's Go!

In the coming weeks, I'll be releasing articles on installation and licensing walk-throughs for Central Manager and the instances, and content from our awesome group of authors is already starting to flow as well. Here are a few entries you can feast your eyes on, including an instance Proxmox installation:

- For the kubernetes crowd, BIG-IP Next CNF Solutions for RedHat Openshift
- Installing BIG-IP Next Instance on Proxmox
- Remote Logging with BIG-IP Next and OpenTelemetry

Are you ready? Grab a trial license from your MyF5 dashboard and get going! And make sure to join us in the BIG-IP Next Academy group here on DevCentral. The launch team is actively engaged there for Next-related questions/issues, so that's the place to be in your early journey! Also...if you want the ultimate jump-start for all things BIG-IP Next, join us at AppWorld 2024 in San Jose next month!
Embracing AS3: Foundations

(updated to remove the event-nature of this post) Last fall, a host of teams took to the road to support the launch of BIG-IP Next in the form of F5 Academy roadshows, where we shared the BIG-IP story: where we started, where we are, and where we're going with it; complete with hands-on LTM and WAF labs with the attendees. For this spring's roadshows, we added SSLO and Access labs. Across the fall and spring legs, I attended in person in Kansas City (twice!), St. Louis, Cincinnati, Columbus, Omaha, and (soon) Chicago and talked with customers at all stages of the automation journey. Some haven't automated much of anything. Some have been using a variety of on-/off-box scripts, and some are all-in, baby! That said, when I ask about AS3 as a tool in their tool belt, not that many have adopted or even investigated it yet. For classic BIG-IP, AS3 is not a requirement, but in BIG-IP Next, AS3 is a critical component as it's THE underlying configuration language for all applications. Because of this, I did a five-part live stream series in December to get you started with AS3. Details below.

Beyond Imperatives—What the heck is AS3?
In this first episode, I covered the history of automation on BIG-IP, the differences between imperative and declarative models, and the basics of AS3 from data structure and systems architecture perspectives.

Top 10 Features to Know in the VSCode F5 Extension
Special guest and friend of the show Ben Novak, author of the F5 Extension for VSCode, joined me to dig into the features most important to know and learn to use for understanding how to take stock BIG-IP configurations and turn applications into declarations.

Migrating and Deploying Applications in VSCode
In this episode, armed with the knowledge I learned from Ben in the last episode, I dug into the brass tacks of AS3! I reviewed diagnostics and migrated applications from standard configuration to AS3 declarations and deployed as well. For migrating active workloads, I discussed the steps necessary to reduce transition impact.

Creating New Apps and Using Shared Objects
I've always tried to alter existing things before creating new things with tech, and this is no different. Now that I had a few migrations under my belt, I attacked a net-new application and looked at shared objects and how and when to use them.

Best Practices
Finally, I closed this series with a look at several best practices when working with AS3. The conclusion to this series, though, is hopefully just the stepping-off point for everyone new to AS3, and we can continue the conversation right here on DevCentral.
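If the shared-objects episode piqued your interest, the pattern it covers looks roughly like this sketch: an object declared once under /Common/Shared and then referenced by path from any application. Treat the names and addresses as placeholders, and verify the details against the AS3 schema reference:

```
# Sketch of the AS3 shared-object pattern (names/addresses hypothetical):
# a pool declared once in /Common/Shared, referenced by path from an app.
cat > shared-objects.as3.json <<'EOF'
{
  "class": "ADC",
  "schemaVersion": "3.0.0",
  "id": "shared-objects-example",
  "Common": {
    "class": "Tenant",
    "Shared": {
      "class": "Application",
      "template": "shared",
      "shared_web_pool": {
        "class": "Pool",
        "members": [
          { "servicePort": 80, "serverAddresses": ["198.51.100.21"] }
        ]
      }
    }
  },
  "Team_A": {
    "class": "Tenant",
    "Storefront": {
      "class": "Application",
      "service": {
        "class": "Service_HTTP",
        "virtualAddresses": ["192.0.2.20"],
        "virtualPort": 80,
        "pool": { "use": "/Common/Shared/shared_web_pool" }
      }
    }
  }
}
EOF
```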
Create F5 BIG-IP Next Instance on Proxmox Virtual Environment

If you are looking to deploy a F5 BIG-IP Next instance on Proxmox Virtual Environment (henceforth referred to as Proxmox for the sake of brevity), perhaps in your home lab, here's how:

First, download the BIG-IP Next Central Manager and BIG-IP Next QCOW files from MyF5 Downloads. Click on "Copy Download Link". Copy the QCOW file to your Proxmox host. I am using the download links from above in the example below.

```
proxmox $ curl -O -L -J [link for Central Manager from F5 downloads]
proxmox $ curl -O -L -J [link for Next from F5 downloads]
```

On the Proxmox host, extract the contents of the QCOW files. You will need to rename the Central Manager file from .qcow to .qcow2.

```
proxmox $ cd ~/
proxmox $ mv BIG-IP-Next-CentralManager-20.2.1-0.3.25.qcow BIG-IP-Next-CentralManager-20.2.1-0.3.25.qcow2
proxmox $ tar -zxvf BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2.tar.gz
BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2
BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2.sha512
BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2.sha512.sig
BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2.sha512sum.txt.asc
BIG-IP-Next-20.2.1-F5-ca-bundle.cert
BIG-IP-Next-20.2.1-F5-certificate.cert
```

Then, run the commands below to create a virtual machine (VM) from the extracted QCOW files. Replace the values to match your environment.

```
#
# Central Manager
#
# use either the DHCP or static IP example
#
# using DHCP (change values to match your environment)
proxmox $ qm create 105 --memory 16384 --sockets 1 --cores 8 --net0 virtio,bridge=vmbr0 --name my-central-manager --scsihw=virtio-scsi-single --ostype=l26 --cpu=x86-64-v2-AES --citype nocloud --ipconfig0 ip=dhcp --ciupgrade=0 --ide2=local-lvm:cloudinit

# static IP (change values to match your environment)
# proxmox $ qm create 105 --memory 16384 --sockets 1 --cores 8 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 --name my-central-manager --scsihw=virtio-scsi-single --ostype=l26 --cpu=x86-64-v2-AES --citype nocloud --ipconfig0 ip=192.168.1.5/24,gw=192.168.1.1 --nameserver 192.168.1.1 --ciupgrade=0 --ide2=local-lvm:cloudinit

# import disk
proxmox $ qm set 105 --virtio0 local-lvm:0,import-from=/root/BIG-IP-Next-CentralManager-20.2.1-0.3.25.qcow2 --boot order=virtio0
```

```
#
# Next instance
#
# Note that you need at least two interfaces, one for management and one for data-plane
#
# use either the DHCP or static IP example
#
# DHCP
proxmox $ qm create 107 --memory 16384 --sockets 1 --cores 8 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 --name my-next-instance --scsihw=virtio-scsi-single --ostype=l26 --cpu=x86-64-v2-AES --citype nocloud --ipconfig0 ip=dhcp --ciupgrade=0 --ciuser=admin --cipassword=admin --ide2=local-lvm:cloudinit

# static IP
# proxmox $ qm create 107 --memory 16384 --sockets 1 --cores 8 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 --name my-next-instance --scsihw=virtio-scsi-single --ostype=l26 --cpu=x86-64-v2-AES --citype nocloud --ipconfig0 ip=192.168.1.7/24,gw=192.168.1.1 --nameserver 192.168.1.1 --ciupgrade=0 --ciuser=admin --cipassword=admin --ide2=local-lvm:cloudinit

# import disk
proxmox $ qm set 107 --virtio0 local-lvm:0,import-from=/root/BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2 --boot order=virtio0
```

You should now see a new VM created on the Proxmox GUI. Finally, start the VM. This will take a few minutes. The BIG-IP Next VM is now ready to be onboarded per instructions found here.
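As a convenience, the "start the VM" step is scriptable too; assuming the VM IDs from the examples above, something like:

```
# Start the Central Manager and instance VMs, then verify their state.
# VM IDs 105 and 107 match the qm create examples above.
proxmox $ qm start 105
proxmox $ qm start 107
proxmox $ qm status 105
proxmox $ qm status 107
```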
Introducing F5 BIG-IP Next CNF Solutions for Red Hat OpenShift

5G and Red Hat OpenShift

5G standards have embraced Cloud-Native Network Functions (CNFs) for implementing network services in software as containers. This is a big change from previous Virtual Network Functions (VNFs) or Physical Network Functions (PNFs). The main characteristics of Cloud-Native Functions are:

- Implementation as containerized microservices
- Small performance footprint, with the ability to scale horizontally
- Independence of guest operating system, since CNFs operate as containers
- Lifecycle manageable by Kubernetes

Overall, these provide a huge improvement in terms of flexibility, faster service delivery, resiliency, and crucially using Kubernetes as a unified orchestration layer. The latter is a drastic change from previous standards, where each vendor had its own orchestration. This unification around Kubernetes greatly simplifies network functions for operators, reducing the cost of deploying and maintaining networks. Additionally, embracing the container form factor allows Network Functions (NFs) to be deployed in new use cases like far edge, thanks to the smaller footprint, while at the same time they can also be deployed at large scale in a central data center because of the horizontal scalability. In this article we focus on Red Hat OpenShift, which is the market-leading and industry reference implementation of Kubernetes for IT and Telco workloads.

Introduction to F5 BIG-IP Next CNF Solutions

F5 BIG-IP Next CNF Solutions is a suite of Kubernetes-native 5G Network Functions, implemented as microservices. It shares the same Cloud Native Engine (CNE) as F5 BIG-IP Next SPK, introduced last year. The functionalities implemented by the CNF Solutions deal mainly with user plane data. User plane data has the particularity that the final destination of the traffic is not the Kubernetes cluster but rather an external end-point, typically the Internet. In other words, the traffic gets into the Kubernetes cluster and is forwarded out of the cluster again. This is done using dedicated interfaces that are not used for the regular ingress and egress paths of the regular traffic of a Kubernetes cluster. In this case, the main purpose of using Kubernetes is to make use of its orchestration, flexibility, and scalability.

The main functionalities implemented at the initial GA release of the CNF Solutions are:

- F5 Next Edge Firewall CNF, an IPv4/IPv6 firewall with the main focus on protecting the 5G core networks from external threats, including DDoS flood protection and IPS DNS protocol inspection.
- F5 Next CGNAT CNF, which offers large-scale NAT with the following features: NAPT, Port Block Allocation, Static NAT, Address Pooling Paired, and Endpoint Independent mapping modes. Inbound NAT and hairpinning. Egress path filtering and address exclusions. ALG support: FTP/FTPS, TFTP, RTSP and PPTP.
- F5 Next DNS CNF, which offers a transparent DNS resolver and caching services. Other remarkable features are zero rating and DNS64, which allows IPv6-only clients to connect to IPv4-only services via synthetic IPv6 addresses.
- F5 Next Policy Enforcer CNF, which provides traffic classification, steering and shaping, and TCP and video optimization. This product was launched as Early Access in February 2023 with basic functionalities. Static TCP optimization is now GA in the initial release.

Although the CGNAT (Carrier Grade NAT) and the Policy Enforcer functionalities are specific to user plane use cases, the Edge Firewall and DNS functionalities have additional uses in other places of the network.
F5 and OpenShift

BIG-IP Next CNF Solutions fully supports Red Hat OpenShift Container Platform, which allows deployment in edge or core locations with unified management across the multiple deployments. OpenShift operators greatly facilitate the setup and tuning of telco-grade applications. These are:

- Node Tuning Operator, used to set up Hugepages.
- CPU Manager and Topology Manager with NUMA awareness, which allows scheduling the data plane PODs within a NUMA domain that is aligned with the SR-IOV NICs they are attached to.

In an OpenShift platform all these are set up transparently to the applications, and BIG-IP Next CNF Solutions only requires an appropriate runtimeClass to be configured.

F5 BIG-IP Next CNF Solutions architecture

F5 BIG-IP Next CNF Solutions makes use of the widely trusted F5 BIG-IP Traffic Management Microkernel (TMM) data plane. This allows for a high-performance, dependable product from the start. The CNF functionalities come from a microservices re-architecture of the broadly used F5 BIG-IP VNFs. The below diagram illustrates how a microservices architecture is used. The data plane POD scales vertically from 1 to 16 cores and scales horizontally from 1 to 32 PODs, enabling it to handle millions of subscribers. NUMA nodes are supported.

The next diagram focuses on the data plane handling, which is the most relevant aspect for this CNF suite. Typically, each data plane POD has two IP addresses, one for each side of the N6 reference point. These could be named the radio and Internet sides as shown in the diagram above. The left-side L3 hop must distribute the traffic amongst the left-side addresses of the CNF data plane. This left-side L3 hop can be a router with BGP ECMP (Equal Cost Multi Path), an SDN or any other mechanism which is able to:

- Distribute the subscribers across the data plane PODs, shown in [1] of the figure above.
- Keep these subscribers in the same PODs when there is a change in the number of active data plane PODs (scale-in, scale-out, maintenance, etc...) as shown in [2] in the figure above. This minimizes service disruption.

On the right side of the CNFs, the path towards the Internet, it is typical to implement NAT functionality to transform the telco's private addresses to public addresses. This is done with the BIG-IP Next CG-NAT CNF. This NAT makes the return traffic symmetrical by reaching the same POD which processed the outbound traffic. This is thanks to each POD owning part of this NAT space, as shown in [3] of the above figure. Each POD's NAT address space can be advertised via BGP. When not using NAT on the right side of the CNFs, it is required that the network is able to send the return traffic back to the same POD which is processing the same connection. The traffic must be kept symmetrical at all times; this is typically done with an SDN.

Using F5 BIG-IP Next CNF Solutions

As expected in a fully integrated Kubernetes solution, both the installation and configuration are done using the Kubernetes APIs. The installation is performed using helm charts, and the configuration using Custom Resource Definitions (CRDs). Unlike ConfigMaps, CRDs allow for schema validation of the configurations before these are applied. Details of the CRDs can be found in this clouddocs site. Below is an overview of the most relevant CRDs.

General network configuration

Deploying in Kubernetes automatically configures and assigns IP addresses to the CNF PODs. The data plane interfaces will require specific configuration.
The required steps are:

- Create Kubernetes NetworkNodePolicies and NetworkAttachment definitions which allow exposing SR-IOV VFs to the CNF data plane PODs (TMM). To make use of these SR-IOV VFs, they are referenced in the BIG-IP controller's Helm chart values file. This is described in the Networking Overview page.
- Define the L2 and L3 configuration of the exposed SR-IOV interfaces using the F5BigNetVlan CRD.
- If static routes need to be configured, these can be added using the F5BigNetStaticroute CRD.
- If BGP configuration needs to be added, this is configured in the BIG-IP controller's Helm chart values file. This is described in the BGP Overview page. It is expected this will be configured using a CRD in the future.

Traffic management listener configuration

As with classic BIG-IP, once the CNFs are running and plumbed into the network, no traffic is processed by default. The traffic management functionalities implemented by BIG-IP Next CNF Solutions are the same as those of the analogous modules in classic BIG-IP, and the CRDs in BIG-IP Next to configure these functionalities are conceptually similar too. Analogous to Virtual Servers in classic BIG-IP, BIG-IP Next CNF Solutions has a set of CRDs that create listeners of traffic where traffic management policies are applied. This is mainly the F5BigContextSecure CRD, which allows specifying traffic selectors indicating VLANs, source and destination prefixes, and ports where we want the policies to be applied.

There are specific CRDs for listeners of Application Level Gateways (ALGs) and protocol-specific solutions. These required several steps in classic BIG-IP: first creating the Virtual Service, then creating the profile, and finally applying it to the Virtual Server. In BIG-IP Next this is done in a single CRD. At the time of this writing, these CRDs are:

- F5BigZeroratingPolicy - Part of the Zero-Rating DNS solution; enables subscribers to bypass rate limits.
- F5BigDnsApp - High-performance DNS resolution, caching, and DNS64 translations.
- F5BigAlgFtp - File Transfer Protocol (FTP) application layer gateway services.
- F5BigAlgTftp - Trivial File Transfer Protocol (TFTP) application layer gateway services.
- F5BigAlgPptp - Point-to-Point Tunnelling Protocol (PPTP) application layer gateway services.
- F5BigAlgRtsp - Real Time Streaming Protocol (RTSP) application layer gateway services.

Traffic management profiles and policies configuration

Depending on the type of listener created, different types of profiles and policies can be attached. In the case of F5BigContextSecure, the following CRDs can be attached to define how traffic is processed:

- F5BigTcpSetting - TCP options to fine-tune how application traffic is managed.
- F5BigUdpSetting - UDP options to fine-tune how application traffic is managed.
- F5BigFastl4Setting - FastL4 options to fine-tune how application traffic is managed.

and the following policies for security and NAT:

- F5BigDdosPolicy - Denial of Service (DoS/DDoS) event detection and mitigation.
- F5BigFwPolicy - Granular stateful-flow filtering based on access control list (ACL) policies.
- F5BigIpsPolicy - Intelligent packet inspection protects applications from malignant network traffic.
- F5BigNatPolicy - Carrier-grade NAT (CG-NAT) using large-scale NAT (LSN) pools.

The ALG listeners require the use of F5BigNatPolicy and might make use of the F5BigFwPolicy CRD. These CRDs also have traffic selectors to allow further control over which traffic these policies should be applied to.
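To give a feel for the declarative workflow, the sketch below shows how such a CRD would be applied with kubectl. Note this is illustrative only: the apiVersion and field names here are assumptions, not the documented schema, so consult the CRD reference on clouddocs before using anything like it:

```
# Hypothetical sketch only: apiVersion and spec fields are assumptions for
# illustration; see the official CRD reference for the real schema.
cat > secure-context.yaml <<'EOF'
apiVersion: "k8s.f5net.com/v1"        # assumed API group/version
kind: F5BigContextSecure
metadata:
  name: n6-listener
  namespace: cnf-gateway              # assumed namespace
spec:
  destinationAddress: "0.0.0.0/0"     # illustrative traffic selector
  ipProtocol: any
  firewallPolicy: "subscriber-fw"     # assumed reference to an F5BigFwPolicy
EOF
kubectl apply -f secure-context.yaml
```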
Firewall Contexts

Firewall policies are applied to the listener with the best match. In addition to the F5BigFwPolicy that might be attached, a global firewall policy (hence effective in all listeners) can be configured before the listener-specific firewall policy is evaluated. This is done with the F5BigContextGlobal CRD, which can have an F5BigFwPolicy attached. F5BigContextGlobal also contains the default action to apply to traffic not matching any firewall rule in any context (e.g. Global Context or Secure Context or another listener). This default action can be set to accept, reject or drop, along with whether to log this default action. In summary, within a listener match, the firewall contexts are processed in this order:

1. ContextGlobal
2. Matching ContextSecure or another listener context.
3. Default action as defined by ContextGlobal's default action.

Event Logging

Event logging at high speed is critical to provide visibility of what the CNFs are doing. For this, the following CRDs are implemented:

- F5BigLogProfile - Specifies subscriber connection information sent to remote logging servers.
- F5BigLogHslpub - Defines remote logging server endpoints for the F5BigLogProfile.

Demo

F5 BIG-IP Next CNF Solutions roadmap

What is being exposed here is just the beginning of a journey. Telcos have embraced Kubernetes as the compute and orchestration layer. Because of this, BIG-IP Next CNF Solutions will eventually replace the analogous classic BIG-IP VNFs. Expect in the upcoming months that BIG-IP Next CNF Solutions will match and eventually surpass the features currently offered by the analogous VNFs.

Conclusion

This article introduces a fully re-architected, scalable solution for Red Hat OpenShift mainly focused on the telco user plane. This new microservices architecture offers flexibility, faster service delivery, resiliency and, crucially, the use of Kubernetes. Kubernetes is becoming the unified orchestration layer for telcos, simplifying infrastructure lifecycle and reducing costs. OpenShift represents the best-in-class Kubernetes platform thanks to its enterprise readiness and telco-specific features. The architecture of this solution, alongside the use of OpenShift, also extends network services use cases to the edge by allowing the deployment of Network Functions in a smaller footprint. Please check the official BIG-IP Next CNF Solutions documentation for more technical details and check www.f5.com for a high-level overview.
Getting Started with BIG-IP Next: Fundamentals

In the first article in this series, I introduced BIG-IP Next at the 50,000-foot (or meter for the saner parts of the world...) level. In this article, I will get closer to the brass tacks of tackling some technical tasks, but still hover over the trenches so I can lay a little more groundwork into the components of BIG-IP Next: Central Manager and instances.

Central Manager

The Central Manager is the brains of the operation, and aptly named since it is the centralized location where most management tasks regarding BIG-IP Next instances will coalesce. Gone are the days of logging into BIG-IP devices. It won't be supported! Also gone are the days of creating a node to create a pool and creating some profiles and iRules and snat pools and then slapping all that together on a virtual server. That's not to say that some shared objects won't exist--they will, or at least they can. In classic BIG-IP, the virtual server was the "top dog" from an object perspective unless you already have used iApps or AS3 declarations, in which case those options are similar to what we have with BIG-IP Next, where the application service wears the crown. Everything about that application service is defined within that context, including multiple virtual servers where necessary. That will be done in the GUI via application templates, or via the API with AS3 directly or via FAST templates. The included http application template in the Central Manager GUI allows for a lot of checkbox functionality, but accessing some of the functionality you may be used to will require additional or edited templates. Beyond managing the instances and the application services, you'll also be able to manage your security policies, attack and bot signature security services updates, and monitor/report on deployed policies. And of course, you'll be able to manage users and perform maintenance on the Central Manager system itself. There is no license required for Central Manager; you can download it now and get started with your discovery as soon as you're ready! I have it installed on my iMac in VMware Fusion currently, and I'll be writing articles in the next couple of weeks on installation for Fusion and ESXi.

Instances

Whereas Central Manager is the brain of the BIG-IP Next operation, the instances are the brawn. They can take the form of a tenant on F5 VELOS or rSeries hardware, a KVM and/or VMware Virtual Edition for private clouds and, coming soon, a Virtual Edition on select public clouds. (Note: Instances can also take the form of CNFs in headless kubernetes deployments, but that won't be addressed in this series.) Onboarding instances is not as complex a process as setting up classic BIG-IP because day one operations are not intermingled with day two and beyond. You define the CPU, memory, disk, and network resources you need depending on what modules you're licensing for use and fire it up. Once that candle is lit, you run through a few onboarding steps with either a postman collection or an onboarding script that walks through those steps for you. That's it for setup on the instances; the rest of the process is managed on Central Manager. Limited access will be available on instances for troubleshooting through a sidecar proxy, but even that is configured and managed through Central Manager.
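As a sense of what such a script amounts to, the sketch below shows the general shape: authenticate, then drive the documented endpoints with the returned token. The endpoint paths and payload fields here are assumptions for illustration; treat the published Postman collection and API specification as the authoritative reference:

```
# Sketch only: endpoint paths and payload fields are assumptions; the
# published Postman collection / API spec is the authoritative source.
CM=https://cm.example.com   # hypothetical Central Manager address

# Authenticate and capture a bearer token (jq required).
TOKEN=$(curl -sk -X POST "$CM/api/login" \
  -H 'Content-Type: application/json' \
  -d '{"username":"admin","password":"admin-password"}' | jq -r .access_token)

# Subsequent onboarding/management calls carry the token, e.g.:
curl -sk "$CM/api/v1/spaces/default/instances" \
  -H "Authorization: Bearer $TOKEN"
```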
Instances are licensed. Make sure to check with your account team; you might already be entitled to BIG-IP Next licensing, but a conversion transaction will be necessary. For lab discovery, you can generate a trial license on MyF5 to get started! I'll cover installation on KVM, Fusion, and ESXi in the next couple of weeks. Leon Seng has already written up installing a BIG-IP Next instance on Proxmox!

"Next" Up

Alrighty then! Enough talk, Jason, let's do something! I hear you, I hear you...starting next week, I'll be releasing incremental steps into the installation, onboarding, licensing, upgrading, backup/restore, etc, of both the Central Manager and the instances. Here's the general workflow I'll follow: ignore the platform. I'll step through all the supported versions I have access to and keep placeholders to circle back as more platforms are supported. I hope to see you all at AppWorld, but if not, don't be a stranger here on DevCentral, reach out any time!
How I did it - "Remote logging with F5 BIG-IP Next and OpenTelemetry"

In this installment of "How I Did it" we'll be diving into the next generation of BIG-IP software, F5 BIG-IP Next, and how, by using OpenTelemetry, organizations can now effortlessly integrate BIG-IP Next with an extensive array of third-party analytics and visualization tools.

A little background

While at its core BIG-IP Next is still the same BIG-IP that F5 customers know and trust, the platform has been modernized and optimized for the future. With respect to the topic at hand, it's worth noting that Central Manager itself provides robust native application/security reporting and visibility (see below). With that said, many enterprises prefer to aggregate their various telemetry streams onto a single platform. To that end, BIG-IP Next uses OpenTelemetry for telemetry streaming within the Next infrastructure as well as for data exportation to third-party analytics tools.

OpenTelemetry (OTEL) is a set of APIs, libraries, agents, instrumentation, and instrumentation standards designed to provide observability for applications through the generation, collection, and description of telemetry data such as traces, metrics, and logs. The OTEL collector, a component of the OTEL toolkit, provides a proxy to receive, process and export telemetry data, including traces, metrics, and logs, to a variety of third-party analytics platforms. While many analytics and observability vendors have their own flavor of the collector available, I used the community version, 'collector-contrib', running on a Linux instance for this demonstration. The contrib version affords me the ability to specify a variety of third-party vendor exporters.

According to James McQuivey, PhD, of Forrester, a 1-minute video is equivalent to 1.8 million words. With that in mind, rather than filling up this page with a little more than 12.6 million words (minus a thousand for each picture) documenting each step here, I've included a short (a little over 7 minutes) video walkthrough of the configuration process. For a deeper look at the technologies involved, check out the links below. Alright, let's grab some popcorn and watch a movie.😀

Additional links

- F5 BIG-IP Next Introduction
- F5 BIG-IP Next Documentation
- F5 BIG-IP Next API Specification
- F5 BIG-IP Next Remote Logging API
- Introducing OpenTelemetry
- OpenTelemetry Collector
- OpenTelemetry Collector Contrib repo
- Analytics Vendors Dashboard repo
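For reference, a minimal collector-contrib configuration that receives the OTLP stream and prints it for inspection might look like the sketch below; the ports and pipelines shown are common defaults, and in practice you'd swap the debug exporter for your analytics vendor's exporter:

```
# Minimal OpenTelemetry collector-contrib config: receive OTLP, print to console.
# (Older collector builds name the debug exporter 'logging'.)
cat > otel-config.yaml <<'EOF'
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      exporters: [debug]
EOF

./otelcol-contrib --config otel-config.yaml
```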
Getting Started with BIG-IP Next: Migrating an Application Workload

So far in this article series, the focus has been completely on the operational readiness of BIG-IP Next as a system. In this article, I'll walk through migrating an application currently supported by my classic BIG-IP running TMOS version 15.1.x. The application is just a simple instance of an NGINX web server fronted on LTM with basic load balancing, TLS offloading, and a basic WAF policy. There are a lot of screenshots in this article, which might seem overwhelming. Doing your own walkthrough, however, will put your mind at ease; it actually moves pretty quickly in realtime.

Existing Application Workload on TMOS

We'll start with the GUI representation of the application workload. It is secured with TLS, which is offloaded at the BIG-IP with a clientssl profile and not re-encrypted to the server. There are custom TCP and HTTP profiles defined as well as the aforementioned custom clientssl profile. Snat automap is enabled, and a specific VLAN is configured to allow connections. On the security tab, an application security policy is enabled, and the log illegal requests log profile is enabled as well. Finally, under resources, the default pool is defined and a policy is in place to map requests to the applied security policy. On the CLI, that virtual server along with all the other referenced BIG-IP objects are defined in the tmsh version of that configuration.

```
ltm virtual nginx-vip-tls {
    destination 172.16.101.50:https
    ip-protocol tcp
    mask 255.255.255.255
    policies {
        asm_auto_l7_policy__nginx-vip-tls { }
    }
    pool nginx-pool
    profiles {
        ASM_testpol { }
        cssl.TestSuite {
            context clientside
        }
        customHTTP { }
        customTCP { }
        websecurity { }
    }
    security-log-profiles {
        "Log illegal requests"
    }
    source-address-translation {
        type automap
    }
    vlans {
        vlan.br1
    }
    vlans-enabled
}
ltm policy asm_auto_l7_policy__nginx-vip-tls {
    controls { asm }
    last-modified 2024-03-20:13:25:13
    requires { http }
    rules {
        default {
            actions {
                1 {
                    asm
                    enable
                    policy /Common/testpol
                }
            }
            ordinal 1
        }
    }
    status legacy
    strategy first-match
}
ltm pool nginx-pool {
    members {
        172.16.102.5:http {
            address 172.16.102.5
            session monitor-enabled
            state up
        }
    }
    monitor http
}
security bot-defense asm-profile ASM_testpol {
    app-service none
    clientside-in-use disabled
    flags 0
    inject-javascript disabled
    persistent-data-validity-period 0
    send-brute-force-challenge disabled
    send-javascript-challenge disabled
    send-javascript-efoxy disabled
    send-javascript-fingerprint disabled
}
ltm profile client-ssl cssl.TestSuite {
    app-service none
    cert-key-chain {
        default {
            cert default.crt
            key default.key
        }
    }
    cipher-group cg_TLSv1.3
    ciphers none
    defaults-from clientssl
    inherit-ca-certkeychain true
    inherit-certkeychain true
    options {
        dont-insert-empty-fragments
    }
}
ltm cipher group cg_TLSv1.3 {
    allow {
        cr_TLSv1.3 { }
    }
}
ltm cipher rule cr_TLSv1.3 {
    cipher TLSv1_3
    dh-groups DEFAULT
    signature-algorithms DEFAULT
}
ltm profile http customHTTP {
    app-service none
    defaults-from http
    enforcement {
        known-methods { PATCH DELETE GET POST PUT }
        max-header-count 32
        max-header-size 16384
        rfc-compliance enabled
    }
    hsts {
        mode enabled
    }
    insert-xforwarded-for enabled
    proxy-type reverse
}
ltm profile tcp customTCP {
    app-service none
    congestion-control bbr
    defaults-from f5-tcp-progressive
    idle-timeout 600
    ip-tos-to-client pass-through
    keep-alive-interval 2100
    pkt-loss-ignore-burst 3
    pkt-loss-ignore-rate 10
    proxy-options enabled
}
ltm profile web-security websecurity { }
```
You can see that I have some non-standard options in some of that configuration, such as specifying the congestion-control algorithm in the TCP profile, enabling HSTS in the HTTP profile, and setting cipher rules and groups for use in my SSL profile. Now that we have an idea of the workload we're going to migrate, let's create a UCS of the system for use in the migration. If you are already comfortable with this part on classic BIG-IP systems, you can skip down to the next section header. First, login to your classic BIG-IP and navigate to System->Archives and click Create. Give it a name and click Finished. I named mine next-migration. Click OK after the UCS has been generated and saved. In the archive list, click the name of the UCS you created. Click the Download button.

Migrating the Workload in Central Manager

Upload UCS and Analyze the Workloads

Armed with your UCS, login to Central Manager and on the welcome screen, click Go to Application Workspace. If you have not added any applications yet, you'll see a screen like this with a Start Adding Apps button. If you already have something defined, you'll see a list of applications. Click the + Add Application button instead. On this screen, we'll bypass creating a new application service and select New Migration. Name your session; you'll be able to come back to it to migrate other applications later if your intent is to just migrate a single application for now (as is the case with this walkthrough.) I added a description but it is not necessary. Click Next. Here you'll select your UCS archive and group your application services by IP addresses OR by virtual server. I stuck with the recommended default. Click Next. Your UCS will now upload and then Central Manager will analyze and group the package. An enhanced version of the JOURNEYS tool available in the f5devcentral organization on GitHub is used here. Select Add Application. Application 5 is the one we are interested in analyzing and migrating for this walkthrough, so I selected that one. Notice in the status column the applications that have warnings, and that ours is one of them. Hovering over the triangle icon, it indicates the app can be migrated, but without some of the functionality from our classic iteration of this workload. Next, click Analyze at the top right so we can see what can't be migrated. In the Configuration Analyzer screen, there are 3 files with areas of concern. First, the websecurity profile is not supported. This is ok; the mechanisms for attaching policies in Next are slightly different. Next, from what was the bigip_base.conf file, it's not supporting the vlan as defined. This is included in the migration analysis as the vlans are specified in my virtual server, but the mechanisms for doing so are different in Next. (Note: I don't fully grok this change yet. This article will be updated once I have confidence I'm communicating the functionality accurately.) And finally, from the bigip.conf file, there are a few areas of concern, shown in the animated gif below. Standalone bot-defense is not a thing in BIG-IP Next, it's part of the overall policy, so that object is not supported. Also not supported yet are local traffic policies and cipher groups. Note that even though these objects aren't supported, I can still migrate the application, and it should "just work." I guess we'll see later in this article, right? :) At this point, select the </> Preview AS3 button and copy that to a file. We'll compare that to the classic BIG-IP version of AS3 in a later section.
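Before moving on, it's worth a quick sanity check that whatever you saved parses as valid JSON; any JSON tool will do, for example:

```
# Sanity-check the exported AS3 preview (file name is whatever you saved it as).
python3 -m json.tool application_5.as3.json > /dev/null && echo "valid JSON"
```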
Add an Application Service

After closing the AS3 preview, select the application again and click Add, then click Next. For this particular application, we need a couple of shared objects: the certificate/key pair for the SSL profile and the WAF policy. Click Import. After those are imported, click the numbered icon (2 in my case) under the Shared Objects column, which will open a listing of those objects that you imported. Review the objects (optional) and click Exit. At this step, if your existing application migration is accurate to the object level, you can deploy to an instance directly. But I have some changes to make to the IPs, so I'm going to deploy as a draft instead. After seeing that my deployment was successfully deployed as a draft in Central Manager, I click Finish.

Update the Draft and Deploy

In My Application Services, click the application we just migrated. Here we can tweak the AS3 declaration. I need to update the vlan as my vlan.br1 from my TMOS BIG-IP system is not defined on my Next instance. I also have different client/server address ranges, so I updated the virtual server and pool member addresses as well. You will likely want to change your application name from the generically-migrated "application_5" but I left it as is for this exercise. Once I completed those changes, I clicked Save & Deploy. I was then asked to select an instance to deploy the application service to. I only have one currently, so I selected that. This failed due to my vlan configuration. As I mentioned during the migration process, I don't yet fully grok the vlan referencing requirements in Next, so this is a point for me to be educated on and follow up with updates here in this article. Instead, I removed the allowVlans attribute altogether (after another attempt) and then clicked Save & Deploy again and (after re-selecting the deploy location as shown above) found success. Clicking on the application, you get a visual representation of the application objects.

Testing and Observing the Migrated Application

Now that we have an honest-to-goodness deployed application on BIG-IP Next (WOO HOO!!) let's test it to make sure things are working as expected. I have an Ubuntu test server with connections into my external and internal traffic networks for my Next instance so it can be the client (curl) and the server (NGINX). First, a request that should work:

```
curl -sk https://10.0.2.50/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
```

Huzzah! That's a successful test. I ran a simple bash script with repetitive wget calls to push just a little load to populate the instance traffic graph. Now let's test the WAF policy by sending some nefarious traffic:

```
curl -sk --config requests.txt https://10.0.2.50/
<html>
  <head>
    <title>Request Rejected</title>
  </head>
  <body>The requested URL was rejected. Please consult with your administrator.<br><br>
Your support ID is: 16177875355615369771<br><br>
    <a href='javascript:history.back();'>[Go Back]</a>
  </body>
</html>
```

Sweet! Exactly what we wanted to see. Now let's take a look at the WAF Dashboard for blocks.
Ok, that's a wrap on migrating the application. Functionally, it is a success!

Comparing BIG-IP classic AS3 with BIG-IP Next AS3

If you are moving from classic BIG-IP configuration to BIG-IP Next, you likely will not have any context for comparing AS3, and so you might miss that some of the features you configured in classic are not present in Next. Some of those features aren't there at all yet, and some of them are just not exposed yet. Under the hood, TMM is still TMM with BIG-IP Next, and all of that core functionality is there; it's just a matter of prioritizing what gets exposed and tested and ready to support. Despite a myriad of features in classic BIG-IP, a surprising number of features went either unused or under-used, and maintaining support for those will depend on future use requirements. Anyway, one way to build context for AS3 is to use Visual Studio Code and the F5 Extension to take your classic configuration and convert that to AS3 declarations with the AS3 configuration converter. In this section, I'm going to look at a few snippets to compare between classic and Next.

Declaration Header

The header for classic is essentially a wrapper (lines 2-4) that isn't necessary in Next at all. That's because in classic, AS3 is not the only declaration class; you also have declarative onboarding and telemetry streaming.

Classic:

```
{
  "$schema": "https://raw.githubusercontent.com/F5Networks/f5-appsvcs-extension/master/schema/latest/as3-schema.json",
  "class": "AS3",
  "declaration": {
    "class": "ADC",
    "schemaVersion": "3.37.0",
    "id": "urn:uuid:4339ea7d-094b-4950-b029-ac6344b03a2b",
    "label": "Converted Declaration",
  }
}
```

Next:

```
{
  "class": "ADC",
  "schemaVersion": "3.0.0",
  "id": "urn:uuid:715aa8d8-c2b0-4890-9e77-5f6131ee9efd",
  "label": "Converted Declaration",
}
```

Profiles

One thing to keep in mind with migration is that the migration assistant currently provides detailed analysis to the class level, not the class attribute level. This means that some of the attributes that are supported in classic but not supported in Next will fly under the radar and be removed with no notification. There is work underway in this regard, but you'll need to evaluate each of your applications as you migrate and plan accordingly. For the app I migrated here, this was evident in the following profiles.

ClientSSL

Here, the cipher groups and rules from classic are not yet available, and the ability to establish only TLSv1.3 seems to not be configurable at this time.

Classic:

```
"cssl.TestSuite": {
  "certificates": [
    {
      "certificate": "foo.acmelabs.com"
    }
  ],
  "cipherGroup": {
    "use": "cg_TLSv1.3"
  },
  "class": "TLS_Server",
  "tls1_0Enabled": true,
  "tls1_1Enabled": true,
  "tls1_2Enabled": true,
  "tls1_3Enabled": true,
  "singleUseDhEnabled": false,
  "insertEmptyFragmentsEnabled": false
},
```

Next:

```
"cssl.TestSuite": {
  "authenticationFrequency": "one-time",
  "certificates": [
    {
      "certificate": "/tenant87f7bd9913a51/application_5/foo.acmelabs.com"
    }
  ],
  "class": "TLS_Server"
},
```

TCP

In the TCP profile, the most notable changes are the loss of QoS settings and the ability to select the congestion control algorithm.
Classic:

```
"customTCP": {
  "congestionControl": "bbr",
  "idleTimeout": 600,
  "ipTosToClient": "pass-through",
  "keepAliveInterval": 2100,
  "pktLossIgnoreBurst": 3,
  "pktLossIgnoreRate": 10,
  "proxyOptions": true,
  "class": "TCP_Profile"
}
```

Next:

```
"customTCP": {
  "idleTimeout": 600,
  "pktLossIgnoreBurst": 3,
  "pktLossIgnoreRate": 10,
  "proxyBufferHigh": 262144,
  "proxyBufferLow": 196608,
  "proxyOptions": true,
  "sendBufferSize": 262144,
  "class": "TCP_Profile"
},
```

HTTP

In my HTTP profile, it seems I lost all my personally-selected options, such that I'd likely be fine with the default profile. Also, since I'm using the WAF, I can manage the allowed request methods there, and whereas I can't auto-insert strict transport security in the profile directly yet, I can manage that in an iRule as well, so I do have a path to workarounds for both cases.

Classic:

```
"customHTTP": {
  "knownMethods": [
    "PATCH",
    "DELETE",
    "GET",
    "POST",
    "PUT"
  ],
  "maxHeaderCount": 32,
  "maxHeaderSize": 16384,
  "hstsInsert": true,
  "xForwardedFor": true,
  "proxyType": "reverse",
  "class": "HTTP_Profile"
},
```

Next:

```
"customHTTP": {
  "requestChunking": "sustain",
  "responseChunking": "sustain",
  "class": "HTTP_Profile"
}
```
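If you want to run this comparison against your own workloads, convert the classic config with the VS Code F5 extension as described above, save both declarations to files, and let diff highlight what moved or disappeared:

```
# Compare converter output from classic against the AS3 draft exported from
# Central Manager (file names are illustrative).
diff -u classic-converted.as3.json next-migrated.as3.json | less
```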
Final Thoughts

I point out the differences in my before and after to show a complete picture of the migration process. Some things changed, some went away, but the bottom line is I have a working application service. Before working on this article, I'd done a migration in a couple of step-by-step controlled labs and had played with, but not finished deploying, a working, tested, functional application in my own lab. Don't make that same mistake. Get your classic configurations migrated ASAP, even if only as a draft in Central Manager, so you can start to evaluate and analyze what work you have on your end to tweak and tune where features have changed, and where you need to start engaging your account team to inquire about your MUST HAVE features that may or may not be scoped currently. Next time out, we'll take a look at creating a net-new application service. Until then, stay active out there community, and start digging into BIG-IP Next!

BIG-IP Next Automation: AS3 Basics

I need a little Mr. Miyagi right now to grab my face and intently look me in the eye and give me a "Concentrate! Focus power!" For those of you youngins' who don't know who that is, he's the OG Karate Kid mentor. Anyway, I have a thousand things I want to say about AS3 but in this article, I'll attempt to cut this down to a narrow BIG-IP Next-specific context to get you started. It helps that last December I did a five-part streaming series on AS3 in the BIG-IP classic context. If you haven't seen that, you have my blessing to stop right now, take some time to digest AS3 conceptually and practice against workloads and configurations in BIG-IP classic that you know and understand, before returning here to embrace all the newness of BIG-IP Next.

AS3 is FOUNDATIONAL in BIG-IP Next

In classic BIG-IP, you could edit the bigip.conf file directly, use tmsh commands, or iControl REST commands to imperatively create/modify/delete BIG-IP objects. With the exception of system configuration and shared configuration objects, this is not the case with BIG-IP Next. All application configuration is AS3 at its lowest state level. This doesn't mean you have to work primarily in AS3 configuration. If you utilize the migration utility in Central Manager, it will generate the AS3 necessary to get your apps up and running. Another option is to use the built-in http FAST template (we'll cover FAST in later articles) to build out an application from scratch in the GUI. But if you use features outside the purview of that template, or you need to edit your migration output, you'll need to work in the AS3 configuration declaration, even if just a little bit.

Apples to Apples

It's a fun card game, no? My family takes it to snarky absurd levels of sarcasm, to the point that when we play with "outsiders" we get lots of blank looks and stares as we're all rolling on the floor laughing. Oh well, to each his own. But we're here to talk about AS3, right? Well, in BIG-IP Next, there is a compatibility API for AS3, such that you can take a declaration from BIG-IP classic and, as long as the features within that declaration are supported, it should "just work" via the Central Manager API. That's pretty cool, right? Let's start with a basic application declaration from the recent video posted by Mark_Dittmer exploring the API differences between classic and Next.

```
{
    "class": "ADC",
    "schemaVersion": "3.0.0",
    "id": "generated-for-testing",
    "Tenant_1": {
        "class": "Tenant",
        "App_1": {
            "class": "Application",
            "Service_1": {
                "class": "Service_HTTP",
                "virtualAddresses": [
                    "10.0.0.1"
                ],
                "virtualPort": 80,
                "pool": "Pool_1"
            },
            "Pool_1": {
                "class": "Pool",
                "members": [
                    {
                        "servicePort": 80,
                        "serverAddresses": [
                            "10.1.0.1",
                            "10.1.0.2"
                        ]
                    }
                ]
            }
        }
    }
}
```

A simple VIP with a pool with two pool members. A toy config to be sure, but it is useful here to show the format (JSON) of an AS3 declaration and some of the schema as well. With the compatibility API, this same declaration can be posted to a classic BIG-IP like this:

POST https://<BIG-IP IP Address>/mgmt/shared/appsvcs/declare

Or a BIG-IP Next instance like this:

POST https://<Central Manager IP Address>/api/v1/spaces/default/appsvcs/declare?target_address=<BIG-IP Next instance IP Address>

For those already embracing AS3, this compatibility API in BIG-IP Next should make the transition easier.
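In curl terms, pushing that same declaration through both endpoints looks something like the sketch below. Addresses and credentials are placeholders, and the bearer-token auth shown for Central Manager is an assumption; check the API documentation for the current auth flow:

```
# Classic BIG-IP: POST the declaration to the AS3 package endpoint
# (basic auth shown; adjust to your environment).
curl -sk -u admin:password -X POST \
  https://bigip.example.com/mgmt/shared/appsvcs/declare \
  -H 'Content-Type: application/json' \
  -d @declaration.json

# BIG-IP Next: the same declaration via Central Manager's compatibility API,
# with the target instance passed as a query parameter (bearer auth assumed).
curl -sk -X POST \
  "https://cm.example.com/api/v1/spaces/default/appsvcs/declare?target_address=203.0.113.10" \
  -H "Authorization: Bearer $TOKEN" \
  -H 'Content-Type: application/json' \
  -d @declaration.json
```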
AS3 Workflow in BIG-IP Next

With BIG-IP classic, you had to install the AS3 package (technically an iControl LX, or sometimes referenced as an iApps v2 package) onto each BIG-IP system you wanted to use the AS3 declarative configuration model on. Each BIG-IP was an island, and the configuration management of the overall system of BIG-IPs was reliant on an external system for source of truth. With BIG-IP Next, the Central Manager API has native AS3 support, so there are no packages to install to prepare the environment. Also, Central Manager is the centralized AS3 interface for all Next instances. This has several benefits:

- A singular and centralized source of truth for your configuration management
- No external package management requirements
- Tremendous improvement in API performance management, since most of the heavy lifting is offloaded from the instances and onto Central Manager, and the control-plane functionality that remains on the instance is intentionally designed for API-first operations

The general application deployment workflow introduced exclusively for Next, which I'll reference as the documents API, is twofold:

Create an application service

First, you create the application service on Central Manager. You can use the same JSON declaration from the section above here; only the API endpoint is different:

POST https://<Central Manager IP Address>/api/v1/spaces/default/appsvcs/documents

A successful transaction will result in an application service document on Central Manager. A couple of notes on this at time of writing:

- Documents created through the API are not validated against the journeys migration tool that is available for use in the Central Manager GUI.
- Documents are not schema validated at the attribute level of classes, so whereas a class used in classic might be supported in Next, some of the attributes might not be. This means that whereas the document creation process can appear successful, the deployment will fail if classes and/or class attributes supported in classic BIG-IP are present in the AS3 declaration when an attempt to apply it to an instance occurs.

Deploy the application service

Assuming, however, all your AS3 work is accurate to the Next-supported schema, you post the specified document by ID to the target BIG-IP Next instance, here as a JSON payload versus a query parameter on the compatibility API shown earlier.

POST https://<Central Manager IP Address>/api/v1/spaces/default/appsvcs/documents/<Document ID>/deployments

```
{
  "target": "<BIG-IP Next Instance IP Address>"
}
```

At this point, your service should be available to receive traffic on the instance it was deployed on.
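Stitched together with curl, the two-step documents workflow looks roughly like this (addresses are placeholders, auth is the same assumption as before, and the document ID comes back in the create response):

```
# Step 1: create the application service document on Central Manager.
curl -sk -X POST \
  https://cm.example.com/api/v1/spaces/default/appsvcs/documents \
  -H "Authorization: Bearer $TOKEN" \
  -H 'Content-Type: application/json' \
  -d @declaration.json

# Step 2: deploy the stored document (ID from step 1's response) to an instance.
curl -sk -X POST \
  "https://cm.example.com/api/v1/spaces/default/appsvcs/documents/<Document ID>/deployments" \
  -H "Authorization: Bearer $TOKEN" \
  -H 'Content-Type: application/json' \
  -d '{"target": "203.0.113.10"}'
```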
Next Up...

Now that we have the theory in place, join me next time where we'll take a look at working with a couple of application services through both approaches.

Resources

- CM App Services Management
- AS3 Schema
- AS3 User Guide (classic, but useful)
- AS3 Reference Guide (classic, but useful)
- AS3 Foundations (streaming series)

Getting Started with BIG-IP Next: Configuring Instance High Availability

With BIG-IP classic, there are a lot of design choices to make and steps on both systems to arrive at an HA pair. With BIG-IP Next, this is simplified quite a bit. Once configured, the highly available pair is treated by Central Manager as a single entity. There might be alternative options in the future, but as of version 20.1, HA for instances is active/standby only. In this article, I'll walk you through the steps to configure HA for instances in the Central Manager GUI.

Background and Prep Work

I set up two HA systems in my preparation for this article. The first had dedicated interfaces for the management interface, the external and internal traffic interfaces, and the HA interface. So when configuring the virtual machine, I made sure each system had four NICs. For the second, I merged all the non-management interfaces onto a single NIC and used vlan tagging, so those systems had two NICs. In my lab that looks like this:

The IP addressing scheme in my lab is shown below. First the 4-NIC system:

| 4-NIC System | next-4nic-a | next-4nic-b | floating |
| --- | --- | --- | --- |
| mgmt | 172.16.2.152/24 | 172.16.2.153/24 | 172.16.2.151/24 |
| cntrlplane ha (vlan 245) | 10.10.245.1/30 | 10.10.245.2/30 | NA |
| dataplane ha (int 1.3) | 10.0.5.1/30 | 10.0.5.2/30 | NA |
| dataplane ext (int 1.1) | 10.0.2.152/24 | 10.0.2.153/24 | 10.0.2.151/24 |
| dataplane int (int 1.2) | 10.0.3.152/24 | 10.0.3.153/24 | 10.0.3.151/24 |

And now the 2-NIC system:

| 2-NIC System | next-2nic-a | next-2nic-b | floating |
| --- | --- | --- | --- |
| mgmt | 172.16.2.162/24 | 172.16.2.163/24 | 172.16.2.161/24 |
| cntrlplane ha (vlan 245) | 10.10.245.5/30 | 10.10.245.6/30 | NA |
| dataplane ha (vlan 50) | 10.0.5.5/30 | 10.0.5.6/30 | NA |
| dataplane ext (vlan 30) | 10.0.2.162/24 | 10.0.2.163/24 | 10.0.2.161/24 |
| dataplane int (vlan 40) | 10.0.3.162/24 | 10.0.3.163/24 | 10.0.3.161/24 |

Beyond the self IP addresses for your traffic interfaces, you'll need additional IP addresses for the floating address, the control-plane HA sub-interfaces (which are created for you), and the data-plane HA interfaces. Before proceeding, make sure you have a plan for network segmentation and addressing similar to the above, that you've installed two like instances, and that one (and only one) of them is licensed.

Configuration

This walkthrough is for the 2-NIC system shown above, but the steps are mostly the same. First, login to Central Manager, and click on Manage Instances. Click on the standalone mode for the system you want to be active initially in your HA pair. For me, that's next-2nic-a. (You can also just click on the system name and then select HA in the menu, but this saves a click.) In the pop-up dialog, select Enable HA. Read the notes below to make sure your systems are ready to be paired. On this screen, a list of available standalone systems will populate. Click the down arrow and select your second system, next-2nic-b in my case. Then click Next. On this next prompt, you'll need to create two vlans, one for the control plane and one for the data plane. The control plane mechanics are taken care of for you, and you don't need to plan connectivity other than to select an available vlan that won't conflict with anything else in your system. For the data plane, you need to have a dedicated vlan and/or interface set aside. Click Create VLAN for the control plane. Name and tag your vlan. In my case I used cp-ha as my vlan name and tag 245. Click Done. Now click Create VLAN for the data plane. Because I'm tagging all networks on the 2-NIC system, my only traffic interface is 1.1. So I named my data plane vlan dp-ha, set the tag to 50, selected interface 1.1, and clicked Done.
Now that both HA VLANs have been created, click Next. On this screen, you'll name your HA pair system. This name needs to be unique among your HA pairs, so plan accordingly. I named mine next-ha-1, but that's generic and unlikely to be helpful in your environment. Then set your HA management IP; this is how Central Manager will connect to the HA pair. You can enable auto-failback if desired, but I left that unchecked. For the HA Nodes Addresses, I referenced the addressing table posted at the top of this article and filled those in as appropriate. Once those are filled out, click Next.

Now you'll be presented with a list of your traffic VLANs. On my system I have v102-ext and v103-int for my external and internal networks. First, I clicked v102-ext. On this screen you'll need to add a couple of rows so you can populate the active node IP, the standby node IP, and the floating IP. The order doesn't matter, but I ordered them as shown, again referencing my addressing table. Once populated, click Save. That returns you to the VLAN list, where you'll notice that v102-ext now has a green checkbox where the yellow warning was. Now click into your other traffic VLAN (v103-int in my case) if applicable to your environment, or skip this next step. Repeat the same process for the internal traffic network: I referenced my addressing table one more time, filled out the details as appropriate, then clicked Save. Make sure you have green checkboxes on all the traffic VLANs, then click Next.

Review the summary of the HA settings you've configured, and if everything looks right, click Deploy to HA. On the "are you sure?" dialog where you're prompted to confirm your deployment, click Yes, Deploy. You'll then see messaging at the top of the HA configuration page for the instance indicating that HA is being created. Also note that the Mode on this page during creation still indicates standalone. Once the deployment is complete, you'll see the mode has changed to HA, and the details for your active and standby nodes are provided.

Also present here is the Enable automatic failover option, which is enabled by default. This applies to software upgrades: if left enabled, the standby unit is upgraded first, a failover is executed, and then the remaining system is upgraded. If you specified auto-failback in your HA configuration, another failover is executed after the second system is upgraded to complete the process.

And finally, as seen in the list of instances, there are now three instead of four, with next-ha-1 taking the place of next-2nic-a and next-2nic-b from where we started. Huzzah! You now have a functioning BIG-IP Next HA pair. After we conclude the "Getting Started" series, we'll start to look at the benefits of automation around all the tasks we've covered so far, including HA. The click-ops capabilities are nice to have, but I think you'll find that the ability to automate all of this from a script or something like an Ansible playbook really starts to drive home the API-first aspects of Next.
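As a small preview of that automation discussion, the sketch below shows the general shape a scripted HA build might take. To be clear: the endpoint path and payload keys here are placeholders I invented to mirror the GUI fields from this walkthrough, not documented Central Manager API resources, so treat this strictly as a thought experiment until you've consulted the API reference.

```python
# Thought-experiment sketch only: the "/instances/ha" path and the payload
# keys below are invented placeholders mirroring the GUI fields above, not
# documented Central Manager API resources.
import requests

CM = "https://cm.example.com"  # hypothetical Central Manager address

session = requests.Session()
session.verify = False  # lab convenience only
session.headers["Authorization"] = "Bearer <token>"  # auth per your setup

ha_spec = {  # hypothetical payload mirroring the GUI walkthrough
    "name": "next-ha-1",
    "haManagementAddress": "172.16.2.161",
    "autoFailback": False,
    "controlPlaneVlan": {"name": "cp-ha", "tag": 245},
    "dataPlaneVlan": {"name": "dp-ha", "tag": 50, "interface": "1.1"},
    "nodes": [
        {"managementAddress": "172.16.2.162"},  # next-2nic-a, active
        {"managementAddress": "172.16.2.163"},  # next-2nic-b, standby
    ],
}

resp = session.post(f"{CM}/api/v1/spaces/default/instances/ha", json=ha_spec)
resp.raise_for_status()
print("HA creation request accepted:", resp.status_code)
```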
Getting Started with BIG-IP Next: Licensing Instances in Central Manager

This article assumes that the license was not applied during the initial instance setup, and it covers only the GUI process. For the API process or for disconnected mode, please reference the instructions for licensing on Clouddocs.

Download the JSON Web Token from MyF5

I don't have a paid license, so I'm going to use my trial license available at MyF5. Your mileage may vary here. Go to My Products & Plans, then Trials, and in the My Trials listing (assuming you've requested/received one) click BIG-IP Next. Click Downloads and Licenses (note, however, the helpful list of resources down in Guides and References). You can just copy your JSON web token, but I chose to download it.

Install the Token

Log in to Central Manager and click Manage Instances. Click on your new unlicensed instance, then in the left-hand menu at the bottom, click License. Click Activate License. We already downloaded our token, so after reviewing the information, click Next. Note that I made sure my Central Manager has access to the licensing server, and the steps covered in this article assume the same. If you've managed classic BIG-IP licenses, copying and pasting dossiers to get licenses should be a well-understood process. On this screen, paste your token into the box, give it a name, and click Activate. After a brief interrogation of the licensing server, you should now have a healthy, licensed BIG-IP Next instance!

Resources

How to: Manage BIG-IP Next instance licenses
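One parting tip: the license really is a standard JSON Web Token, so you can sanity-check what you downloaded before pasting it into Central Manager by decoding its payload locally. The sketch below assumes the downloaded file contains the raw token string; it only base64-decodes the claims for inspection and never contacts F5 or Central Manager.

```python
# Decode the payload of a downloaded license JWT for local inspection.
# Assumes the file holds the raw token string (header.payload.signature).
import base64
import json
import sys

def jwt_claims(token: str) -> dict:
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

with open(sys.argv[1]) as f:  # e.g., python inspect_token.py my-token.txt
    token = f.read().strip()

for claim, value in jwt_claims(token).items():
    print(f"{claim}: {value}")
```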