F5 and Cisco ACI Essentials - Design guide for a Single Pod APIC cluster
Deployment considerations

It is usually an easy decision to have BIG-IP as part of your ACI deployment, as BIG-IP is a mature, feature-rich ADC solution. Most of the effort goes into nailing down the design and the deployment options for the BIG-IP in the environment. Below we will discuss a few of the most commonly asked questions:

SNAT or no SNAT
There are various options you can use to insert the BIG-IP into the ACI environment. One way is to use the BIG-IP as a gateway for servers or as a routing next hop for routing instances. Another option is to use Source Network Address Translation (SNAT) on the BIG-IP; however, with SNAT enabled the visibility into the real source IP address is lost. If preserving the source IP is a requirement, then ACI's Policy-Based Redirect (PBR) can be used to make sure the return traffic goes back to the BIG-IP.

BIG-IP redundancy
F5 BIG-IP can be deployed in different high-availability modes. The two common BIG-IP deployment modes are active-active and active-standby. Various design considerations, such as endpoint movement during failovers, MAC masquerade, source MAC-based forwarding, Link Layer Discovery Protocol (LLDP), and IP aging should also be taken into account for each of the deployment modes.

Multi-tenancy
Multi-tenancy is supported by both Cisco ACI and F5 BIG-IP in different ways. There are a few ways that multi-tenancy constructs on ACI can be mapped to multi-tenancy on BIG-IP. The constructs revolve around tenants, virtual routing and forwarding (VRF), route domains, and partitions. Multi-tenancy can also be based on the BIG-IP form factor (appliance, virtual edition and/or virtual clustered multiprocessor (vCMP)).

Tighter integration
Once a design option is selected, the next question is what more can be done from an operational or automation perspective now that we have a BIG-IP and ACI deployment. The F5 ACI ServiceCenter is an application developed on the Cisco ACI App Center platform built for exactly that purpose. It is an integration point between the F5 BIG-IP and Cisco ACI. The application provides an APIC administrator a unified way to manage both L2-L3 and L4-L7 infrastructure. Once day-0 activities are performed and BIG-IP is deployed within the ACI fabric using any of the design options selected for your environment, the F5 ACI ServiceCenter can be used to handle day-1 and day-2 operations. The day-1 and day-2 operations provided by the application are well suited for both new/greenfield and existing/brownfield BIG-IP and ACI deployments. The integration is loosely coupled, which allows the F5 ACI ServiceCenter to be installed or uninstalled with no disruption to traffic flow, as well as no effect on the F5 BIG-IP and Cisco ACI configuration. Check here to find out more. All of the above topics and more are discussed in detail here in the single pod white paper.

Orchestrated Infrastructure Security - Change at the Speed of Business - Cisco Firepower
Editor's Note:The F5 Beacon capabilities referenced in this article hosted on F5 Cloud Services are planning a migration to a new SaaS Platform - Check out the latesthere Introduction This article is part of a series on implementing Orchestrated Infrastructure Security. It includes High Availability, Central Management with BIG-IQ, Application Visibility with Beacon and the protection of critical assets using F5 Advanced WAF, Protocol Inspection (IPS) with AFM as well as leading Security Solutions like Cisco Firepower and WSA.It is assumed that SSL Orchestrator is already deployed, and basic network connectivity is working. If you need help setting up SSL Orchestrator for the first time, refer to the Dev/Central article series on Implementing SSL Orchestrator here or the CloudDocs Deployment Guide here. This article focuses on using SSL Orchestrator as a tool to assist with simplifying Change Management processes, procedures and shortening the duration of the entire process. Configuration files of Cisco Firepower can be downloaded fromherefrom GitLab. Please forgive me for using SSL and TLS interchangeably in this article. Click here for a demo video of this Dev/Central article This article is divided into the following high level sections: ·Create a new Topology to perform testing ·Monitor Firepower statistics – change the weight ratio – check Firepower stats again ·Remove a single Firepower device from the Service ·Perform maintenance on the Firepower device ·Add the Firepower device to the new Topology ·Test functionality with a single client ·Add the Firepower device back to the original Topology ·Test functionality again ·Repeat to perform maintenance on the other Firepower device Create a new Topology to perform testing A new Topology will be used to safely test the Service after maintenance is performed.The Topology should be similar to the one used for production traffic.This Topology can be re-used in the future. From the BIG-IP Configuration Utility select SSL Orchestrator > Configuration.Click Add under Topologies. Scroll to the bottom of the next screen and click Next. Give it a name, Topology_Staging in this example. Select L2 Inbound as the Topology type then click Save & Next. For the SSL Configurations you can leave the default settings.Click Save & Next at the bottom. Click Save & Next at the bottom of the Services List. Click the Add button under Services Chain List.A new Service Chain is needed so we can remove Firepower1 from the Production Service and add it here. Give the Service Chain a name, Staging_Chain in this example.Click Save at the bottom. Note: The Service will be added to this Service Chain later. Click Save & Next. Click the Add button on the right to add a new rule. For Conditions select Client IP Subnet Match. Enter the Client IP and mask, 10.1.11.52/32 in this example.Click New to add the IP/Subnet. Set the SSL Proxy Action to Intercept. Set the Service Chain to the one created previously. Click OK. Note: This rule is written so that a single client computer (10.1.11.52) will match and can be used for testing. Select Save & Next at the bottom. For the Interception Rule set the Source Address to 10.1.11.52/32.Set the Destination Address/Mask to 10.4.11.0/24.Set the port to 443. Select the VLAN for your Ingress Network and move it to Selected. Set the L7 Profile to Common/http. Click Save & Next. For Log Settings, scroll to the bottom and select Save & Next. Click Deploy. 
Monitor Firepower statistics – change the weight ratio – check Firepower statistics again Check the statistics on the Firepower device we will be performing maintenance on.It’s “Firepower1” in this example. Connect to the CLI via SSH.At the prompt enter ‘capture-traffic’.Select the correct ‘inlineset’ (2 in this example) and hit Enter for no tcpdump options: > capture-traffic Please choose domain to capture traffic from: 0 - management0 1 - inlineset1 inline set 2 - inlineset2 inline set Selection? 2 Please specify tcpdump options desired. (or enter '?' for a list of supported options) Options: You should see an output similar to the following: This Firepower device is actively processing connections. Change the Weight Ratio Back to the SSL Orchestrator Configuration Utility.Click SSL Orchestrator > Configuration > Services > then the Service name, ssloS_Firepower in this example. Click the pencil icon to edit the Service. Click the pencil icon to edit the Network Configuration for Firepower2 Set the ratio to 65535 and click Done. Click Save & Next at the bottom. Click OK if presented with the following warning. Click Deploy. Click OK when presented with the Success message. Check Firepower Statistics Again Check the statistics on “Firepower1” again.With the Weight Ratio change there should be little to no active connections. It should look like the following: Note: The connections above represent the health checks from SSL Orchestrator to the inline Service. Remove a single Firepower device from the Service Back to the SSL Orchestrator Configuration Utility.Click SSL Orchestrator > Configuration > Services > then the Service name, ssloS_Firepower in this example. Click the pencil icon to edit the Service. Under Network Configuration, delete Firepower1. Click Save & Next at the bottom. Click OK if presented with the following warning. Click Deploy. Click OK when presented with the Success message. Perform maintenance on the Firepower device At this point Fireower1 has been removed from the Incoming_Security Topology and is no longer handling production traffic.Firepower2 is now handling all of the production traffic. We can now perform a variety of maintenance tasks on Firepower1 without disrupting production traffic.When done with the task(s) we can then safely test/verify the health of Firepower1 prior to moving it back into production. Some examples of maintenance tasks: ·Perform a software upgrade to a newer version. ·Make policy changes and verify they work as expected. ·Physically move the device. ·Replace a hard drive, fan, and/or power supply. Add the Firepower device to the new Topology This will allow us to test its functionality with a single client computer, prior to moving it back to production. From the SSL Orchestrator Configuration Utility click SSL Orchestrator > Configuration > Topologies > sslo_Topology_Staging. Click the pencil icon on the right to edit the Service. Click Add Service. Select the Cisco Firepower Threat Defense Inline Layer 2 Service and click Add. Give it a name or leave the default.Click Add under Network Configuration. Set the FROM and TO VLANS to the following and click Done. Click Save at the bottom. Click the Service Chain icon. Click the Staging_Chain. Move the CSCO Service from Available to Selected and click Save. Click OK. Click Deploy. Click OK. Test functionality with a single client We created a policy with source IP = 10.1.11.52 to use the new Firepower Service that we just performed maintenance on. 
Go to that client computer and verify that everything is still working as expected. As you can see this is the test client with IP 10.1.11.52. The page still loads for one of the web servers. You can view the Certificate and see that it is not the same as the Production Certificate.

Add the Firepower device back to the original Topology

From the SSL Orchestrator GUI select SSL Orchestrator > Configuration > Service Chains. Select the Staging_Chain. Select ssloS_CSCO on the right and click the left arrow to remove it from Selected. Click Deploy when done. Click OK. Click OK to the Success message. From the SSL Orchestrator Guided Configuration select SSL Orchestrator > Configuration > Services. Select the CSCO Service and click Delete. Click OK to the Warning. When that is done click the ssloS_Firepower Service. Click the Pencil icon to edit the Service. Under Network Configuration click Add. Set the Ratio to the same value as Firepower2, 65535 in this example. Set the From and To VLAN to the following and click Done. Click Save & Next at the bottom. Click OK. Click Deploy. Click OK.

Test functionality again

Make sure Firepower1 is working properly. To ensure that everything is working as expected you can view the Statistics on Firepower1 again. This Firepower device is actively processing connections.

Repeat these steps to perform maintenance on the other Firepower device (not covered in this guide):
·Create a new Topology to perform testing
·Monitor Firepower statistics – change the weight ratio – check Firepower stats again
·Remove a single Firepower device from the Service
·Perform maintenance on the Firepower device
·Add the Firepower device to the new Topology
·Test functionality with a single client
·Add the Firepower device back to the original Topology
·Test functionality again
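The "test functionality" checks above can also be scripted from the test client (10.1.11.52) so they are easy to repeat before and after each change. This is only a sketch: the address 10.4.11.50 is a placeholder for whichever web server you browse to through the staging topology, and the issuer you expect to see depends on the re-signing CA your SSL Orchestrator configuration actually uses.

# Run from the test client (10.1.11.52) that matches the Staging_Chain policy
VIP=10.4.11.50   # placeholder - a server in the 10.4.11.0/24 range used by the interception rule

# 1. Confirm the page still loads through the staging path
curl -sk -o /dev/null -w "HTTP %{http_code}\n" https://$VIP/

# 2. Check which certificate the client is presented; with interception in place
#    the issuer should be your re-signing CA rather than the production certificate
echo | openssl s_client -connect $VIP:443 2>/dev/null | openssl x509 -noout -issuer -subject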
F5 & Cisco ACI Essentials - ServiceCenter: One stop shop for IP address facts

Having to debug a network issue can be a daunting task, involving hundreds of IP addresses spread across your entire deployment. Imagine having a tool which can help with getting all the information you want about an IP address with the click of a button. The F5 ACI ServiceCenter is one such tool designed to do exactly that and more. To refresh your knowledge on the tool:
Lightboard Video
Troubleshooting tips
User and deployment guide
Telemetry streaming

Let's get into the nuts and bolts of getting the full visibility benefit from the ServiceCenter. BIG-IP has a great component as part of its automation toolchain called Telemetry Streaming. The ServiceCenter takes advantage of Telemetry Streaming (TS) to grab traffic statistics from the BIG-IP. Integrating TS with the ServiceCenter is a two step process:

Step 1: Download and install the TS RPM package on the BIG-IP (no cost, no license).
Step 2: Configure the TS consumer on the BIG-IP which the ServiceCenter will poll. Use a tool like Postman or curl to POST the below API call to the BIG-IP.

URI: https://<BIG-IP MGMT IP address>/mgmt/shared/telemetry/declare
Payload:
{
    "class": "Telemetry",
    "My_Poller": {
        "class": "Telemetry_System_Poller",
        "interval": 0,
        "actions": [
            {
                "includeData": {},
                "locations": {
                    "virtualServers": {
                        ".*": {}
                    },
                    "pool": {
                        ".*": {}
                    }
                }
            }
        ]
    },
    "My_System": {
        "class": "Telemetry_System",
        "enable": "true",
        "systemPoller": [
            "My_Poller"
        ]
    },
    "My_Pull_Consumer": {
        "class": "Telemetry_Pull_Consumer",
        "type": "default",
        "systemPoller": [
            "My_Poller"
        ]
    }
}

IP Address facts
Once logged into the ServiceCenter from the APIC controller, choose the telemetry consumer to retrieve the IP address statistics.

Conclusion
Besides the statistics, the ServiceCenter will give much more information, like the active connections on the BIG-IP, the physical port on which the IP is active (Virtual IP or node IP) on both the BIG-IP and the Cisco ACI, filtered logs from the BIG-IP, and much more. To learn more check out the video below on the latest App enhancements and example use cases:

Download the App to get started: https://dcappcenter.cisco.com/f5-aci-servicecenter.html
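If you prefer the command line over Postman, Step 2 above can also be driven with curl. A minimal sketch, assuming the payload shown above has been saved as ts_declaration.json; the management address and credentials are placeholders for your own:

# POST the Telemetry Streaming declaration to the BIG-IP (address/credentials are placeholders)
curl -sku admin:admin \
    -H "Content-Type: application/json" \
    -X POST https://10.192.73.xx/mgmt/shared/telemetry/declare \
    -d @ts_declaration.json

# A GET against the same endpoint shows the declaration currently in place
curl -sku admin:admin https://10.192.73.xx/mgmt/shared/telemetry/declare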
Unify Visibility with F5 ACI ServiceCenter in Cisco ACI and F5 BIG-IP Deployments

What is F5 ACI ServiceCenter?
F5 ACI ServiceCenter is an application that runs natively on the Cisco Application Policy Infrastructure Controller (APIC), which provides administrators a unified way to manage both L2-L3 and L4-L7 infrastructure in F5 BIG-IP and Cisco ACI deployments. Once day-0 activities are performed and BIG-IP is deployed within the ACI fabric, F5 ACI ServiceCenter can then be used to handle day-1 and day-2 operations. F5 ACI ServiceCenter is well suited for both greenfield and brownfield deployments. F5 ACI ServiceCenter is a successful and popular integration between F5 BIG-IP and Cisco Application Centric Infrastructure (ACI). This integration is loosely coupled and can be installed and uninstalled at any time without any disruption to the APIC and the BIG-IP. F5 ACI ServiceCenter supports a REST API and can be easily integrated into your automation workflow: F5 ACI ServiceCenter Supported REST APIs.

Where can we download F5 ACI ServiceCenter?
F5 ACI ServiceCenter is completely free of charge and is available to download from the Cisco DC App Center. F5 ACI ServiceCenter is fully supported by F5. If you run into any issues and/or would like to see a new feature or an enhancement integrated into future F5 ACI ServiceCenter releases, you can open a support ticket here.

Why should we use F5 ACI ServiceCenter?
F5 ACI ServiceCenter has three main independent use cases and you have the flexibility to use them all, or to pick and choose whichever ones fit your requirements:

Visibility
F5 ACI ServiceCenter provides enhanced visibility into your F5 BIG-IP and Cisco ACI deployment. It has the capability to correlate BIG-IP and APIC information. For example, you can easily find out the correlated APIC Endpoint information for a BIG-IP VIP, and you can also easily determine the APIC Virtual Routing and Forwarding (VRF) to BIG-IP Route Domain (RD) mapping from F5 ACI ServiceCenter. You can efficiently gather the correlated information from both the APIC and the BIG-IP on F5 ACI ServiceCenter without the need to hop between BIG-IP and APIC. You can also gather the health status, logs, statistics, etc. on F5 ACI ServiceCenter as well.

L2-L3 Network Configuration
After BIG-IP is inserted into the ACI fabric using an APIC service graph, F5 ACI ServiceCenter has the capability to extract the APIC service graph VLANs from the APIC and then deploy the VLANs on the BIG-IP. This capability allows you to always have a single source of truth for network configuration between BIG-IP and APIC.

L4-L7 Application Services
F5 ACI ServiceCenter leverages the F5 Automation Toolchain for application services:
Advanced mode, which uses AS3 (Application Services 3 Extension)
Basic mode, which uses FAST (F5 Application Services Templates)
F5 ACI ServiceCenter also has the ability to dynamically add or remove pool members from a pool on the BIG-IP based on the endpoints discovered by the APIC, which helps to reduce configuration overhead.

Other Features
F5 ACI ServiceCenter can manage multiple BIG-IPs - physical as well as virtual BIG-IPs. If Link Layer Discovery Protocol (LLDP) is enabled on the interfaces between Cisco ACI and F5 BIG-IP, F5 ACI ServiceCenter can discover the BIG-IP and add it to the device list. F5 ACI ServiceCenter can also categorize the BIG-IP accordingly, for example, whether it is standalone or in a high availability (HA) cluster. Starting from version 2.11, F5 ACI ServiceCenter supports multi-tenant design too.
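As a small illustration of the REST API support mentioned above, the login flow that the Ansible examples later in this collection use can also be driven with curl. This is a sketch only: the APIC address, BIG-IP address, and credentials are placeholders, and the response parsing assumes the standard APIC aaaLogin JSON structure.

# 1. Log in to the APIC and capture the session token (addresses/credentials are placeholders)
TOKEN=$(curl -sk -X POST https://10.192.73.xx/api/aaaLogin.json \
    -d '{"aaaUser":{"attributes":{"name":"admin","pwd":"password"}}}' \
    | python3 -c 'import sys,json; print(json.load(sys.stdin)["imdata"][0]["aaaLogin"]["attributes"]["token"])')

# 2. Log the ServiceCenter in to a managed BIG-IP, passing the APIC token as the DevCookie header
curl -sk -X POST \
    https://10.192.73.xx/appcenter/F5Networks/F5ACIServiceCenter/loginbigip.json \
    -H "DevCookie: $TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"url": "10.192.73.yy", "user": "admin", "password": "password"}'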
These are just some of the features; to find out more, check out the F5 ACI ServiceCenter User and Deployment Guide.

F5 ACI ServiceCenter Resources
Webinar: Unify Your Deployment for Visibility with Cisco and the F5 ACI ServiceCenter
Learn: F5 DevCentral Youtube Videos: F5 ACI ServiceCenter Playlist
Cisco Learning Video: Configuring F5 BIG-IP from APIC using F5 ACI ServiceCenter
Cisco ACI and F5 BIG-IP Design Guide White Paper
Hands-on: F5 ACI ServiceCenter Interactive Demo
Cisco dCloud Lab - Cisco ACI with F5 ServiceCenter Lab v3
Get Started: Download F5 ACI ServiceCenter
F5 ACI ServiceCenter User and Deployment Guide

Enable Consistent Application Services for Containers with CIS
Kubernetes is all about abstracting away complexity. As Kubernetes continues to evolve, it becomes more intelligent and will become even more powerful when it comes to helping enterprises manage their data center, not just at the cloud.While enterprises have had to deal with the challenges associated with managing different types of modern applications (AI/ML, Big data, and analytics) to process that data, they are faced with the challenge to maintain a top-level network/security policies and gaining better control of the workload, to ensure operational and functional consistency. This is where Cisco ACI and F5 Container Ingress Services come into the picture. F5 CIS and Cisco ACI Cisco ACI offers these customers an integrated network fabric for Kubernetes. Recently,F5 and Cisco joined forces byintegrating F5 Container Ingress Services (or CIS) with Cisco ACI to bring L4-7 services into Kubernetes environment, to further simplify the user experience in deploying, scaling and managing containerized applications. This integration specifically enables: Unified networking: Containers, VMs, and bare-metal Secure multi-tenancy and seamless integration of Kubernetes network policies and ACI policies A single point of automation with enhanced visibility for ACI and BIG-IP. F5 Application Services natively integrated in Container and PaaS Environments One of the key benefits for such implementation is the ACI encapsulation normalization. The ACI fabric, as the normalizer for the encapsulation, allows you to merge different network technologies or encapsulations be it vlan or vxlan into a single policy model. BIG-IP through a simple VLAN connection to ACI, with no need for additional gateway, can communicate with any service anywhere. Solution Deployment To integrate F5 CIS with the Cisco ACI forKubernetes environment, you perform a series of tasks. Some you perform in the network to set up the Cisco Application Policy Infrastructure Controller (APIC); others you perform on the Kubernetes server(s). Rather thangetting down to the nitty-gritty, I willjust highlightthe steps todeploy the joint solution. Pre-requisites The BIG-IP CIS and Cisco ACI joint solution deployment assumes that you have the following in place: A working Cisco ACI installation ACI must be integrated with vCenter with dVS Fabric tenant pre-provisioned with the required VRFs/EPGs/L3OUTs. BIG-IP already running for non-container workload Deploying Kubernetes Clusters to ACI Fabrics The following steps will provide you a complete cluster configuration: Step 1.Run ACI provisioning tool to prepare Cisco ACI to work with Kubernetes Cisco provides an acc_provision tool to provision the fabric for the Kubernetes VMM domain and generate a .yaml file that Kubernetes uses to deploy the required Cisco Application Centric Infrastructure (ACI) container components. You can download the provisioning tool here. Next, you can use this provision tool to generate a sample configuration file that you can edit. $ acc-provision--sample > aci-containers-config.yaml We can now edit the sample configuration file to provide information from your network. With such configuration file, now you can run the following command to provision the CiscoACIfabric: acc-provision -c aci-containers-config.yaml -o aci-containers.yaml -f kubernetes-<version> -a -u [apic username] -p [apic password] Step 2. Prepare the ACI CNI Plugin configuration File The above command also generates the fileaci-containers.yamlthat you use after installing Kubernetes. 
Step 3. Preparing the Kubernetes Nodes - Set up networking for the nodes to support Kubernetes installation. With ACI provisioned, you start to prepare networking for the Kubernetes nodes. This includes steps such as configuring the VM interface toward the ACI fabric, configuring a static route for the multicast subnet, configuring the DHCP client to work with ACI, etc.

Step 4. Installing Kubernetes cluster - After you provision Cisco ACI and prepare the Kubernetes nodes, you can install Kubernetes and the ACI containers. You can use any installation method appropriate to your environment.

Step 5. Deploy Cisco ACI CNI plugin - When the Kubernetes cluster is up and running, you can copy the previously generated CNI configuration to the master node, and install the CNI plug-in using the following command:

kubectl apply -f aci-containers.yaml

The command installs the following (PODs):
ACI Containers Host Agent and OpFlex agent in a DaemonSet called aci-containers-host
Open vSwitch in a DaemonSet called aci-containers-openvswitch
ACI Containers Controller in a deployment called aci-containers-controller
Other required configurations, including service accounts, roles, and security context

For the authoritative word on this specific implementation, you can click here for the latest workflow for integrating Kubernetes into Cisco ACI. After you have performed the previous steps, you can verify the integration in the Cisco APIC GUI. The integration creates a tenant, three EPGs, and a VMM domain. Each tenant will have visibility of all the Kubernetes PODs.

Install the BIG-IP Controller
The F5 BIG-IP Controller (k8s-bigip-ctlr), or Container Ingress Services, if you aren't familiar, is a Kubernetes-native service that provides the glue between container services and BIG-IP. It watches for changes and communicates those to BIG-IP delivered application services. These, in turn, keep up with the changes in container environments and enable enforcement of security policies. Once you have a running Kubernetes cluster deployed to the ACI fabric, you can follow these instructions to install the BIG-IP Controller. Use the kubectl get command to verify that the k8s-bigip-ctlr Pod launched successfully.

BIG-IP as north-south load balancer for External Services
For Kubernetes services that are exposed externally and need to be load balanced, Kubernetes does not handle the provisioning of the load balancing. It is expected that the load balancing network function is implemented separately. For these services, Cisco ACI takes advantage of the symmetric policy-based redirect (PBR) feature available in the Cisco Nexus 9300-EX or FX leaf switches in ACI mode. This is where BIG-IP Container Ingress Services (or CIS) comes into the picture, as the north-south load balancer. On ingress, incoming traffic to an externally exposed service is redirected by PBR to BIG-IP for that particular service. If a Kubernetes cluster contains more than one IP pod for a particular service, BIG-IP will load balance the traffic across all the pods for that service. In addition, each new POD is added to the BIG-IP pool automatically.

Conclusion
F5 CIS and Cisco ACI together offer unified control, visibility, security and application services, for both container and non-container workloads.

Further Resources
F5 Container Ingress Services: Click here
Cisco ACI and Kubernetes Integration: Click here
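A quick way to sanity-check the pieces described above, both the ACI CNI pods and the BIG-IP Controller, is a couple of kubectl queries. This is only a sketch; the pod names come from the components listed earlier, and since namespaces vary by installation method, --all-namespaces is used to search everywhere.

# ACI CNI components (DaemonSets and the controller deployment) should be Running
kubectl get pods --all-namespaces | grep -E 'aci-containers-(host|openvswitch|controller)'

# The F5 BIG-IP Controller (CIS) pod should also be Running
kubectl get pods --all-namespaces | grep k8s-bigip-ctlr

# Tail the controller log to confirm it registered with the BIG-IP
# (substitute the pod name and namespace reported by the previous command)
kubectl logs <k8s-bigip-ctlr-pod> -n <namespace> --tail=20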
Implementing SSL Orchestrator - Explicit Proxy Service Configuration (Cisco WSA)

Introduction
This article is part of a series on implementing BIG-IP SSL Orchestrator. It includes high availability and central management with BIG-IQ. Implementing SSL/TLS decryption is not a trivial task. There are many factors to keep in mind and account for, from the network topology and insertion point, to SSL/TLS keyrings, certificates, ciphersuites and on and on. This article focuses on configuring a 3rd party, Explicit Proxy security device and everything you need to know about it. This article covers the configuration of Cisco Web Security Appliance (WSA) running version 11.8. Please forgive me for using SSL and TLS interchangeably in this article.

A common Cisco WSA deployment mode is as an Explicit Proxy. The WSA proxy is completely transparent to the user, but the BIG-IP will connect to it as an Explicit Proxy. The default settings for Cisco WSA will work with SSL Orchestrator. Keep in mind that:

1) By default WSA accepts connections on ports 80 & 3128. If you changed this you will have to specify the correct port when configuring SSLO.
2) It is assumed you are using WSA security features like URL categorization, Anti-Malware, Reputation filtering, etc.
3) It is recommended to use separate ethernet ports for Management and Data, similar to the image below.

Summary
In this article you learned how to configure a Cisco WSA in Explicit Proxy mode. Configuration of Cisco WSA can be downloaded from here in GitLab.

Next Steps
Click Next to proceed to the next article in the series. Contact Cisco if you need additional assistance with their products.

F5 & Cisco ACI Essentials - Take advantage of Policy Based Redirect
Different applications and environments have unique needs on how traffic is to be handled. Some applications due to the nature of their functionality or maybe due to a business need do require that the application server(s) are able to view the real IP of the client making the request to the application. Now when the request comes to the BIG-IP it has the option to change the real IP of the request or to keep it intact. In order to keep it intact the setting on the F5 BIG-IP ‘Source Address Translation’ is set to ‘None’. Now as simple as it may sound to just toggle a setting on the BIG-IP, a change of this setting causes significant change in traffic flow behavior. Let’s take an example with some actual values. Starting with a simple setup of a standalone BIG-IP with one interface on the BIG-IP for all traffic (one-arm) Client – 10.168.56.30 BIG-IP Virtual IP – 10.168.57.11 BIG-IP Self IP – 10.168.57.10 Server – 192.168.56.30 Scenario 1: With SNAT From Client : Src: 10.168.56.30 Dest: 10.168.57.11 From BIG-IP to Server: Src: 10.168.57.10 (Self-IP) Dest: 192.168.56.30 With this the server will respond back to 10.168.57.10 and BIG-IP will take care of forwarding the traffic back to the client. Here the application server see’s the IP 10.168.57.10 and not the client IP Scenario 2: No SNAT From Client : Src: 10.168.56.30 Dest: 10.168.57.11 From BIG-IP to Server: Src: 10.168.56.30 Dest: 192.168.56.30 With this the server will respond back to 10.168.56.30 and here where comes in the complication, the return traffic needs to go back to the BIG-IP and not the real client. One way to achieve this is to set the default GW of the server to the Self-IP of the BIG-IP and then the server will send the return traffic to the BIG-IP. BUT what if the server default gateway is not to be changed for whatsoever reason. It is at this time Policy based redirect will help. The default gw of the server will point to the ACI fabric, the ACI fabric will be able to intercept the traffic and send it over to the BIG-IP. 
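For reference, the difference between the two scenarios above comes down to a single setting on the BIG-IP virtual server. Below is a minimal tmsh sketch using the addresses from the example; the virtual server and pool names are illustrative, not taken from the original configuration.

# Scenario 1 - SNAT enabled (automap): the server sees the BIG-IP self IP 10.168.57.10
tmsh create ltm pool app_pool members add { 192.168.56.30:80 }
tmsh create ltm virtual app_vs destination 10.168.57.11:80 ip-protocol tcp pool app_pool \
    source-address-translation { type automap }

# Scenario 2 - no SNAT: the server sees the real client IP 10.168.56.30, so the return
# traffic must be steered back to the BIG-IP (server default gateway or ACI PBR)
tmsh modify ltm virtual app_vs source-address-translation { type none }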
With this the advantage of using PBR is two-fold The server(s) default gateway does not need to point to BIG-IP but can point to the ACI fabric The real client IP is preserved for the entire traffic flow Avoid server originated traffic to hit BIG-IP, resulting BIG-IP to configure a forwarding virtual to handle that traffic.If server originated traffic volume is high it could result unnecessary load the BIG-IP Before we get to the deeper into the topic of PRB below are a few links to help you refresh on some of the Cisco ACI and BIG-IP concepts ACI fundamentals: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/aci-fundamentals/b_ACI-Fundamentals.html SNAT and Automap: https://support.f5.com/csp/article/K7820 BIG-IP modes of deployment: https://support.f5.com/csp/article/K96122456#link_02_01 Now let’s look at what it takes to configure PBR using a Standalone BIG-IP Virtual Edition in One-Arm mode Network diagram for reference: To use the PBR feature on APIC - Service graph is a MUST Details on L4-L7 service graph on APIC To get hands on experience on deploying a service graph (without pbr) Configuration on APIC 1) Bridge domain ‘F5-BD’ Under Tenant->Networking->Bridge domains->’F5-BD’->Policy IP Data plane learning - Disabled 2) L4-L7 Policy-Based Redirect Under Tenant->Policies->Protocol->L4-L7 Policy based redirect, create a new one Name: ‘bigip-pbr-policy’ L3 destinations: BIG-IP Self-IP and MAC IP: 10.168.57.10 MAC: Find the MAC of interface the above Self-IP is assigned from logging into the BIG-IP (example: 00:50:56:AC:D2:81) 3) Logical Device Cluster- Under Tenant->Services->L4-L7, create a logical device Managed – unchecked Name: ‘pbr-demo-bigip-ve` Service Type: ADC Device Type: Virtual (in this example) VMM domain (choose the appropriate VMM domain) Devices: Add the BIG-IP VM from the dropdown and assign it an interface Name: ‘1_1’, VNIC: ‘Network Adaptor 2’ Cluster interfaces Name: consumer, Concrete interface Device1/[1_1] Name: provider, Concrete interface: Device1/[1_1] 4) Service graph template Under Tenant->Services->L4-L7->Service graph templates, create a service graph template Give the graph a name:’ pbr-demo-sgt’ and then drag and drop the logical device cluster (pbr-demo-bigip-ve) to create the service graph ADC: one-arm Route redirect: true 5) Click on the service graph created and then go to the Policy tab, make sure the Connections for the connectors C1 and C2 and set as follows: Connector C1 Direct connect – False (Not mandatory to set to 'True' because PBR is not enabled on consumer connector for the consumer to VIP traffic) Adjacency type – L3 Connector C2 Direct connect - True Adjacency type - L3 6) Apply the service graph template Right click on the service graph and apply the service graph Choose the appropriate consumer End point group (‘App’) provider End point group (‘Web’) and provide a name for the new contract For the connector select the following: BD: ‘F5-BD’ L3 destination – checked Redirect policy – ‘bigip-pbr-policy’ Cluster interface – ‘provider’ Once the service graph is deployed, it is in applied state and the network path between the consumer, BIG-IP and provider has been successfully setup on the APIC. 7) Verify the connector configuration for PBR. Go to Device selection policy under Tenant->Services-L4-L7. Expand the menu and click on the device selection policy deployed for your service graph. 
For the consumer connector where PBR is not enabled:
Connector name - Consumer
Cluster interface - 'provider'
BD - 'F5-BD'
L3 destination – checked
Redirect policy – Leave blank (no selection)

For the provider connector where PBR is enabled:
Connector name - Provider
Cluster interface - 'provider'
BD - 'F5-BD'
L3 destination – checked
Redirect policy – 'bigip-pbr-policy'

Configuration on BIG-IP
1) VLAN/Self-IP/Default route
Default route – 10.168.57.1
Self-IP – 10.168.57.10
VLAN – 4094 (untagged) – for a VE the tagging is taken care of by vCenter
2) Nodes/Pool/VIP
VIP – 10.168.57.11
Source address translation on VIP: None
3) iRule (shown at the end of this article) that can be helpful for debugging

A few differences in configuration when the BIG-IP is a Virtual Edition and is set up in a high availability pair:
1) BIG-IP: Set MAC Masquerade (https://support.f5.com/csp/article/K13502)
2) APIC: Logical device cluster - Promiscuous mode – enabled; add both BIG-IP devices as part of the cluster
3) APIC: L4-L7 Policy-Based Redirect - L3 destinations: Enter the Floating BIG-IP Self-IP and MAC masquerade

------------------------------------------------------------------------------------------------------------------------------------------------------------------

Configuration is complete, let's take a look at the traffic flows:
Client -> F5 BIG-IP -> Server
Server -> F5 BIG-IP -> Client
In Step 2, when the traffic is returned from the server, ACI uses the Self-IP and MAC that were defined in the L4-L7 redirect policy to send traffic to the BIG-IP.

iRule to help with debugging on the BIG-IP:

when LB_SELECTED {
    log local0. "=================================================="
    log local0. "Selected server [LB::server]"
    log local0. "=================================================="
}
when HTTP_REQUEST {
    set LogString "[IP::client_addr] -> [IP::local_addr]"
    log local0. "=================================================="
    log local0. "REQUEST -> $LogString"
    log local0. "=================================================="
}
when SERVER_CONNECTED {
    log local0. "Connection from [IP::client_addr] Mapped -> [serverside {IP::local_addr}] \
        -> [IP::server_addr]"
}
when HTTP_RESPONSE {
    set LogString "Server [IP::server_addr] -> [IP::local_addr]"
    log local0. "=================================================="
    log local0. "RESPONSE -> $LogString"
    log local0. "=================================================="
}

Output seen in /var/log/ltm on the BIG-IP — look at the <SERVER_CONNECTED> event.

Scenario 1: No SNAT -> Client IP is preserved
Rule /Common/connections <HTTP_REQUEST>: Src: 10.168.56.30 -> Dest: 10.168.57.11
Rule /Common/connections <SERVER_CONNECTED>: Src: 10.168.56.30 Mapped -> 10.168.56.30 -> Dest: 192.168.56.30
Rule /Common/connections <HTTP_RESPONSE>: Src: 192.168.56.30 -> Dest: 10.168.56.30

If you are curious about the iRule output when SNAT is enabled on the BIG-IP, enable AutoMap on the virtual server on the BIG-IP:

Scenario 2: With SNAT -> Client IP not preserved
Rule /Common/connections <HTTP_REQUEST>: Src: 10.168.56.30 -> Dest: 10.168.57.11
Rule /Common/connections <SERVER_CONNECTED>: Src: 10.168.56.30 Mapped -> 10.168.57.10 -> Dest: 192.168.56.30
Rule /Common/connections <HTTP_RESPONSE>: Src: 192.168.56.30 -> Dest: 10.168.56.30

References:
ACI PBR whitepaper: https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739971.html
Troubleshooting guide: https://www.cisco.com/c/dam/en/us/td/docs/switches/datacenter/aci/apic/sw/4-x/troubleshooting/Cisco_TroubleshootingApplicationCentricInfrastructureSecondEdition.pdf
Layer4-Layer7 services deployment guide: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/4-x/L4-L7-services/Cisco-APIC-Layer-4-to-Layer-7-Services-Deployment-Guide-401/Cisco-APIC-Layer-4-to-Layer-7-Services-Deployment-Guide-401_chapter_011.html
Service graph: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/4-x/L4-L7-services/Cisco-APIC-Layer-4-to-Layer-7-Services-Deployment-Guide-401/Cisco-APIC-Layer-4-to-Layer-7-Services-Deployment-Guide-401_chapter_0111.html
https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/4-x/L4-L7-services/Cisco-APIC-Layer-4-to-Layer-7-Services-Deployment-Guide-401.pdf

F5 & Cisco ACI Essentials - ServiceCenter: A troubleshooting tool for network admins
Introduction The F5 ACI ServiceCenter is an application developed to run only on the Cisco ACI App Center platform. It is an integration point between the F5 BIG-IP and Cisco ACI. The application provides an APIC administrator a unified way to manage both L2-L3 and L4-L7 infrastructure. Once day-0 activities are performed and BIG-IP is deployed within the ACI fabric then the F5 ACI ServiceCenter can be used to handle day-1 and day-2 operations. For more information, check this informative lightboard video Topology Let's take a simple and basic scenario where a pair of BIG-IP's are deployed in an ACI environment and are load balancing a HTTP application. In an ideal world one single administrator would handle all the network related tasks including managing the load balancing capabilities, but in reality that is not the case. In reality a typical network administrator is aware of how the ACI is configured: how many tenants are present on APIC, how many end point groups (EPG) and bridge domains(BD) are deployed, how many contracts are deployed to make sure these end points groups are able to talk to each other etc. On the other hand, a typical BIG-IP administrator is aware of what VIP's are configured on the BIG-IP, what monitors are assigned to the HTTP application, how many pools and pool members exist on the BIG-IP etc. The network administrator has little or no visibility into the BIG-IP configuration and vice-versa and this leads to inconsistency in deployments as well as communication and coordination overhead in making any changes to the network. The F5 ACI ServiceCenter visibility use case aims at bridging this gap. It provides the network administrator some visibility into the BIG-IP configuration which will give a better picture of the entire network to the network administrator. This helps with making troubleshooting easier and also providing a means for making informed decisions. Use Case: Visibility and mapping of APIC and BIG-IP constructs In this article we will dive right into the visibility use case. Click for details on how to get started using the F5 ACI ServiceCenter. Using the visibility tab of the application the network administrator will be able to visually view how application workload on the APIC is tied to the BIG-IP. Workload on APIC is learned by an end point group on APIC. For example, if an application is being served by web servers with IPs 192.168.56.*, then these IP addresses will be present as an end poinst in an end point group (EPG) on the APIC. From the perspective of BIG-IP these web servers are pool members on a particular pool. The F5 ACI ServiceCenter has access to both APIC and BIG-IP and will co-relate this information and provide a mapping. BIG-IP: VIP|Pool|Pool Member <=> APIC: Tenant|Application Profile|End Point group This gives the network administrator a view of how the APIC workload is associated with the BIG-IP and what all applications and virtual IP's are tied to a tenant. Along with the mapping the health statistics from the BIG-IP are collected. The health status is reflected based on the monitor assigned to the VIP, pool and pool members on the BIG-IP. After any change that a network administrator would make on the network, he/she can login to the F5 ACI ServiceCenter and check if the health of the VIP's/pool member's was affected and troubleshoot on the appropriate tenant/app/epg on the APIC. Advantages: Reduce the network administrator's time on waiting on the BIG-IP admin to confirm that the application is or is not healthy. 
Network administrator has to have minimal knowledge of BIG-IP and still be able to make an informed decision. Can use the F5 ACI ServiceCenter to create a snapshot of the network before and after a network change is made. Automation The information can be viewed visually, but for those network administrators who are automating their environment they can also take advantage of the API support provided by the F5 ACI ServiceCenter. Click here for details on API’s supported on the F5 ACI ServiceCenter Let’s take an example of collecting the snapshot of the VIP and Pool member statistics. Ansible is being used in this example but any automation tool can be used to collect and parse the API response. All API calls are made to the APIC controller. Ansible playbook for gathering virtual IP address and the status of the VIP. After parsing the data copying the content to a file. --- - name: Get VIP and status hosts: localhost gather_facts: false connection: local vars: apic_ip: "10.192.73.xx" big_ip: "10.192.73.xx" partition: "Dynamic" tasks: - name: Login to APIC uri: url: https://{{apic_ip}}/api/aaaLogin.json method: POST validate_certs: no body_format: json body: aaaUser: attributes: name: "admin" pwd: "<<apic_password>>" headers: content_type: "application/json" return_content: yes register: cookie - debug: msg="{{cookie['cookies']['APIC-cookie']}}" - set_fact: token: "{{cookie['cookies']['APIC-cookie']}}" - name: Login to BIG-IP uri: url: https://{{apic_ip}}/appcenter/F5Networks/F5ACIServiceCenter/loginbigip.json method: POST validate_certs: no body: url: "{{big_ip}}" user: "admin" password: "<<bigip_password>>" body_format: json headers: DevCookie: "{{token}}" - name: Get complete visibility information uri: url: https://{{apic_ip}}/appcenter/F5Networks/F5ACIServiceCenter/getvipstats.json method: POST validate_certs: no body: url: "{{big_ip}}" partition: "{{partition}}" body_format: json headers: DevCookie: "{{token}}" return_content: yes register: complete_info - name: Save only VIP information into a fact set_fact: vip_info: "{{ complete_info.json.vipStats}}" - name: Display VIP and status information debug: var: vip_info - name: Set fact with key value pairs set_fact: vip_status: "{{ vip_status|default([]) + [ {'vip': item.address.split(':')[0], 'status': item.status } ] }}" loop: "{{vip_info | json_query(query_string) }}" vars: query_string: "[].vip" - name: Display key value pairs debug: msg: "{{item}}" with_items: "{{vip_status}}" - name: Create VIP ip:status file blockinfile: path: ./vip_status create: yes block: | {{item.vip}}: {{item.status}} marker: "# {mark} ANSIBLE MANAGED BLOCK {{ item.vip }}" with_items: "{{vip_status}}" - name: Delete comments from file lineinfile: path: ./vip_status regexp: '^#' state: absent - name: Sort the content for the file for easy comparision shell: sort -k2 vip_status > before_nw_change_vip Output of file 'before_nw_change_vip' 10.168.56.50: available Ansible playbook for gathering node IP address and the status . After parsing the data copying the content to a file. 
--- - name: Get node and status hosts: localhost gather_facts: false connection: local vars: apic_ip: "10.192.73.xx" big_ip: "10.192.73.xx" partition: "Dynamic" tasks: - name: Login to APIC uri: url: https://{{apic_ip}}/api/aaaLogin.json method: POST validate_certs: no body_format: json body: aaaUser: attributes: name: "admin" pwd: "<<apic_password>>" headers: content_type: "application/json" return_content: yes register: cookie - debug: msg="{{cookie['cookies']['APIC-cookie']}}" - set_fact: token: "{{cookie['cookies']['APIC-cookie']}}" - name: Login to BIG-IP uri: url: https://{{apic_ip}}/appcenter/F5Networks/F5ACIServiceCenter/loginbigip.json method: POST validate_certs: no body: url: "{{big_ip}}" user: "admin" password: "<<bigip_password>>" body_format: json headers: DevCookie: "{{token}}" - name: Get complete visibility information uri: url: https://{{apic_ip}}/appcenter/F5Networks/F5ACIServiceCenter/getvipstats.json method: POST validate_certs: no body: url: "{{big_ip}}" partition: "{{partition}}" body_format: json headers: DevCookie: "{{token}}" return_content: yes register: complete_info - name: Save only VIP information into a fact set_fact: vip_info: "{{ complete_info.json.vipStats}}" - debug: var: vip_info - name: Set fact with key value pairs for pool members set_fact: node_status: "{{ node_status|default([]) + [ {'ip': item.address, 'status': item.status, 'tenant': item.epgs[0].tenant.name, 'app': item.epgs[0].app.name, 'epg': item.epgs[0].epg.name} ] }}" loop: "{{vip_info | json_query(query_string) }}" vars: query_string: "[].nodes[]" - name: Display key value pairs debug: msg: "{{item}}" with_items: "{{node_status}}" - name: Create node ip:status file blockinfile: path: ./node_status create: yes block: | {{item.ip}}: {{item.status}}: {{item.tenant}} {{item.app}} {{item.epg}} marker: "# {mark} ANSIBLE MANAGED BLOCK {{ item.ip }}" with_items: "{{node_status}}" - name: Delete comments from file lineinfile: path: ./node_status regexp: '^#' state: absent - name: Sort the content for the file for easy comparision shell: sort -k2 node_status > before_nw_change_node Output of file 'before_nw_change_node' 192.168.56.150: available: uni/tn-AspireDemo/ap-AppProfile 192.168.56.151: available: uni/tn-AspireDemo/ap-AppProfile 192.168.56.152: available: uni/tn-AspireDemo/ap-AppProfile 192.168.56.153: available: uni/tn-AspireDemo/ap-AppProfile 192.168.56.154: available: uni/tn-AspireDemo/ap-AppProfile 192.168.56.155: available: uni/tn-AspireDemo/ap-AppProfile 192.168.56.156: available: uni/tn-AspireDemo/ap-AppProfile 192.168.56.157: available: uni/tn-AspireDemo/ap-AppProfile 192.168.56.158: available: uni/tn-AspireDemo/ap-AppProfile 192.168.56.159: available: uni/tn-AspireDemo/ap-AppProfile 192.168.56.167: available: uni/tn-AspireDemo/ap-AppProfile Once a network change is made, this information can be collected again and a comparison can be made of if any VIP's/Pool members were affected by the network change. Summary Few key highlights of the F5 ACI ServiceCenter: Free of cost, no license needed Installed on the APIC natively, no external software/hardware component Operates in the control plane and does not disrupt traffic flow Visibility use case is ideal for new and existing BIG-IP deployments Using the F5 ACI ServiceCenter application within your ACI environment where BIG-IP's are deployed is a win-win for both network administrators and BIG-IP administrators. 
Network administrators can use it for visibility into the BIG-IP and for making sure the network is set up correctly to serve the application sitting behind the BIG-IP. The BIG-IP administrators can take it for granted that the network is intact and that the network administrators have done their due diligence by using the F5 ACI ServiceCenter. BIG-IP administrators can focus their efforts on application-specific challenges and configurations on the BIG-IP using their day-to-day operational model. If, however, both the BIG-IP and network administrators have access to the APIC controller, the BIG-IP administrator can also take advantage of the L4-L7 use case provided by the F5 ACI ServiceCenter.

For more details visit: https://www.f5.com/cisco

F5 & Cisco ACI Essentials - Dynamic pool sizing using the F5 ACI ServiceCenter
APIC EndPoints and EndPoint Groups When dealing with the Cisco ACI environment you may have wondered about using an Application-Centric Design or a Network-Centric Design. Regardless of the strategy, the ultimate goal is to have an accessible and secure application/workload in the ACI environment. An application is comprised of several servers; each one performing a function for the application (web server, DB server, app server etc.). Each of these servers may be physical or virtual and are treated as endpoints on the ACI fabric. Endpoints are devices connected to the network directly or indirectly. They have an address, attributes and can be physical or virtual. Endpoint examples include servers, virtual machines, network-attached storage, or clients on the Internet. An EPG (EndPoint Group) is an object that contains a collection of endpoints, which can be added to an EPG either dynamically or statically. Take a look at the relationship between different objects on the APIC. Click here for more details. Relationship between Endpoints and Pool members If an application is being served by web servers with IPs having address's in the range 192.168.56.*, for example, then these IP addresses will be presented as an endpoint in an endpoint group (EPG) on the APIC. From the perspective of BIG-IP, these web servers are pool members of a particular pool. The F5 ACI ServiceCenter is an application developed on the Cisco ACI App Center platform designed to run on the APIC controller. It has access to both APIC and BIG-IP and can correlate existing information from both devices to provide a mapping as follows: BIG-IP | APIC ________________________________________________________________________ VIP: Pool: Pool Member(s): Route Domain (RD) |Tenant: Application Profile: End Point group: Virtual Routing and Forwarding (VRF) This gives an administrator a view of how the APIC workload is associated with the BIG-IP and what all applications and virtual IP's are tied to a tenant. Click here for more details on this visibility dashboard and learn more on how and under what situations the dashboard can be helpful. In this article we are going to see how the F5 ACI ServiceCenter can take advantage of the endpoints learned by the ACI fabric to dynamically grow/shrink pool members. Dynamic EndPoint Attach and Detach Let's think back to our application which is say being hosted on 100's of servers, these servers could be added to an APIC EPG statically by a network admin or they could be added dynamically through a vCenter or openstack APIC integration. In either case, these endpoints ALSO need to be added to the BIG-IP where the endpoints can be protected by malicious attacks and/or load-balanced. This can be a very tedious task for a APIC or a BIG-IP administrator. Using the dynamic EndPoint attach and detach feature on the F5 ACI ServiceCenter, this burden can be reduced. The application has the ability to adjust the pool members on the BIG-IP based on the server farm on the APIC. On APIC when an endpoint is attached, it is learned by the fabric and added to a particular tenant, application profile and EPG on the APIC. The F5 ACI ServiceCenter provides the capability to map an EPG on the APIC to a pool on the BIG-IP. The application relies on the attach/detach notifications from the APIC to add/delete the BIG-IP pool-members. 
There are different ways in which the dynamic mapping can be leveraged using the F5 ACI ServiceCenter based on the L4-L7 configuration: Scenario 1: Declare L4-L7 configuration using F5 ACI ServiceCenter Scenario 2: L4-L7 configuration already exists on the BIG-IP Scenario 3: Use dynamic mapping but do not declare the L4-L7 configuration using the F5 ACI ServiceCenter Scenario 4: Use the F5 ACI ServiceCenter API's to define the mapping along with the L4-L7 configuration Let's take a look at each one of them in detail. Scenario 1: Declare L4-L7 configuration using F5 ACI ServiceCenter Let's assume there is no existing configuration on the BIG-IP, a new application needs to be deployed which is front ended by a VIP/Pool/Pool members. The F5 ACI ServiceCenter provides a UI that can be used to deploy the L4-L7 configuration and create a mapping between Pool <-> EPG. There are two options: Basic mode uses FAST andAdvanced mode uses AS3. Basic mode: Leverage dynamic endpoint attach and detach feature by using the pre-built Service-Discovery template Advanced mode: Leverage dynamic endpoint attach and detach feature by using Manage Endpoint Mappings Scenario 2: L4-L7 configuration already exists on the BIG-IP If L4-L7 configuration using AS3 already exists on the BIG-IP, the F5 ACI ServiceCenter will detect all partitions and application that in compatible with AS3. Configuration for a particular partition/application on BIG-IP can then be updated to create a Pool <-> EPG mapping. However, there is one condition that is the pool can either have static or dynamic members. Thus, if the pool already has existing members, those members will have to be deleted before a dynamic mapping can be created. To maintain the dynamic mapping, any future changes to the L4-L7 configuration on the BIG-IP should be done via the F5 ACI ServiceCenter. Scenario 3: Use dynamic mapping but do not declare the L4-L7 configuration using the F5 ACI ServiceCenter The F5 ACI ServiceCenter can be used just for the dynamic mapping and pool sizing and not for defining the L4-L7 configuration. For this method, the entire AS3 declaration along with the mapping will be directly send to the BIG-IP using AS3. Since the declaration is AS3, the F5 ACI ServiceCenter will automatically detect a Pool <-> EPG mapping which can be viewable from the inventory tab. Step 1: AS3 declaration with Pool <-> EPG mapping posted directly to the BIG-IP (see below for a sample declaration) Step 2: Sync Endpoints Step 3: View Endpoints Scenario 4: Use the F5 ACI ServiceCenter API's to define the mapping along with the L4-L7 configuration Finally, if the UI is not appealing and automation all the way is the goal, then the F5 ACI ServiceCenter has an API call where the mapping as well as the L4-L7 configuration (which was done in Scenario 1) can be completely automated: URI: https://<apic_controller_ip>>/appcenter/F5Networks/F5ACIServiceCenter/updateas3data.json In this scenario, the declaration is being passed to the F5 ACI ServiceCenter through the APIC controller and NOT directly to the BIG-IP. A sample API call Summary Having knowledge on how AS3 works is essential since it is a declarative API, and using it incorrectly can result in incorrect configuration. Any method mentioned above would work, and the decision on which method to use is based on the operational model that works the best in your environment. 
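As a rough illustration of the declaration referenced in Scenario 3 above, here is a trimmed-down sketch of the application portion. It mirrors the structure used by the automation article later in this collection; the tenant, application profile, EPG names and addresses are illustrative, and the pool member with addressDiscovery set to event is the piece the F5 ACI ServiceCenter populates from APIC endpoint events.

"DemoApp1": {
    "class": "Application",
    "template": "http",
    "serviceMain": {
        "class": "Service_HTTP",
        "virtualAddresses": ["10.168.56.100"],
        "pool": "web_pool"
    },
    "web_pool": {
        "class": "Pool",
        "monitors": ["http"],
        "members": [
            { "servicePort": 80, "addressDiscovery": "event" }
        ]
    },
    "constants": {
        "class": "Constants",
        "serviceCenterEPG": {
            "web_pool": {
                "tenant": "TenantDemo",
                "application": "AppProfile",
                "epg": "internalEPG"
            }
        }
    }
}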
References
Unify Visibility with F5 ACI ServiceCenter in Cisco ACI and F5 BIG-IP Deployments
Download F5 ACI ServiceCenter
F5 ACI ServiceCenter API documentation
F5 AS3 - What does it means - imperative vs declarative
F5 AS3 Best Practice
Cisco ACI Fabric Endpoint Learning White Paper

F5 and Cisco ACI Essentials: Automate automate automate !!!
This article will focus on automation support by BIG-IP and Cisco ACI and how automation tools specifically Ansible can be used to automate different use cases. Before getting into the weeds let's discuss and understand BIG-IP's and Cisco ACI's automation strategies. BIG-IP automation strategy BIG-IP automation strategy is simple-abstract as much complexity as possible from the user, give an easy button to the user to deploy their BIG-IP configuration. This could honestly mean different methods to different people, some prefer sending a single API call to perform one action( A one-to-one mapping between your API<->Configuration). Others prefer a more declarative approach where one API call performs multiple actions, basically a one-to -many mapping between your API(1)<->Configuration(N). A great link to refresh and learn about the different options: https://www.f5.com/products/automation-and-orchestration Cisco ACI automation strategy Cisco Application Policy Infrastructure Controller (APIC) is the network controller for the ACI fabric. APIC is the unified point of automation and management for the Cisco ACI fabric, policy enforcement, and health monitoring. The Cisco ACI programmability model provides complete programmatic access using APIC. Click here to learn more https://developer.cisco.com/site/aci/ Automation tools There are a lot of automation tools that are being talked for network automation BUT the one that comes up in every customer conversation is Ansible. Its simplicity, maturity and community adoption has made it very popular. In this article we are going to focus on using Ansible to automate a service discovery use case. Use Case: Dynamic EP attach/detach Let’s take an example of a simple http web service being made highly available and secure using the BIG-IP Virtual IP address. This web service has a bunch of backend web servers hosting the application, the IP of this web servers is configured on the BIG-IP as pool members. These same web server IP’s are learned as endpoints in the ACI fabric and are part of an End Point Group (EPG) on the APIC. Hence there is a logical mapping between a EPG on APIC and a pool on the BIG-IP. Now if the application is adding or deleting web servers that is hosting the application maybe to save cost or maybe to deal with increase/decrease of traffic, what happens is that the web server IP will be automatically learned/unlearned on APIC. BUT an admin will still have to add/remove that web server IP from the pool on BIG-IP. This can be a burden on the network admin specially if this happens very often. Here is where automation can help and let’s look at how in the next section More details on the use case can be found at https://devcentral.f5.com/s/articles/F5-Cisco-ACI-Essentials-Dynamic-pool-sizing-using-the-F5-ACI-ServiceCenter Automation of Use Case: Dynamic EP attach/detach Automation can be achieved by using Ansible and Ansible tower where API calls are made directly to the BIG-IP. Another option it to use the F5 ACI ServiceCenter (a native F5 ACI integration) to automate this particular use case. Ansible and Ansible tower To learn more about Ansible and Ansible tower: https://www.ansible.com/products/tower Using this method of automation separate API calls are made directly to the ACI and the BIG-IP. Sample playbook to perform the addition and deletion of pool members to a BIG-IP pool based on members in a particular EPG. The mapping of pool to EPG is provided as input to the playbook. 
Ansible and Ansible Tower

To learn more about Ansible and Ansible Tower: https://www.ansible.com/products/tower

With this method of automation, separate API calls are made directly to ACI and to the BIG-IP. The sample playbook below performs the addition and deletion of pool members in a BIG-IP pool based on the members learned in a particular EPG. The mapping of pool to EPG is provided as input to the playbook, and a sample inventory for running it is sketched after the scheduling note that follows.

- name: Dynamic end point attach/detach
  hosts: aci
  connection: local
  gather_facts: false

  vars:
    epg_members: []
    pool_members: []
    pool_members_ip: []
    bigip_ip: 10.192.73.xx
    bigip_password: admin
    bigip_username: admin
    # Here we are mapping pool 'dynamic_pool' to EPG 'internalEPG' which belongs to APIC tenant 'TenantDemo'
    app_profile_name: AppProfile
    epg_name: internalEPG
    partition: Common
    pool_name: dynamic_pool
    pool_port: 80
    tenant_name: TenantDemo

  tasks:

    - name: Setup provider
      set_fact:
        provider:
          server: "{{bigip_ip}}"
          user: "{{bigip_username}}"
          password: "{{bigip_password}}"
          server_port: "443"
          validate_certs: "false"

    - name: Get end points learned from End Point Group
      aci_rest:
        action: "get"
        uri: "/api/node/mo/uni/tn-{{tenant_name}}/ap-{{app_profile_name}}/epg-{{epg_name}}.json?query-target=subtree&target-subtree-class=fvIp"
        host: "{{inventory_hostname}}"
        username: "{{ lookup('env', 'ANSIBLE_NET_USERNAME') }}"
        password: "{{ lookup('env', 'ANSIBLE_NET_PASSWORD') }}"
        validate_certs: "false"
      register: eps

    - name: Get the IPs of the servers that are part of the EPG
      set_fact:
        epg_members: "{{epg_members + [item]}}"
      loop: "{{eps | json_query(query_string)}}"
      vars:
        query_string: "imdata[*].fvIp.attributes.addr"
      no_log: True

    - name: Keep only the IPv4 addresses
      set_fact:
        epg_members: "{{epg_members | ipv4}}"

    - name: Add pool members to the BIG-IP
      bigip_pool_member:
        provider: "{{provider}}"
        state: "present"
        name: "{{item}}"
        host: "{{item}}"
        port: "{{pool_port}}"
        pool: "{{pool_name}}"
        partition: "{{partition}}"
      loop: "{{epg_members}}"

    - name: Query BIG-IP facts
      bigip_device_facts:
        provider: "{{provider}}"
        gather_subset:
          - ltm-pools
      register: bigip_facts

    - name: "Show members belonging to pool {{pool_name}}"
      set_fact:
        pool_members: "{{pool_members + [item]}}"
      loop: "{{bigip_facts.ltm_pools | json_query(query_string)}}"
      vars:
        query_string: "[?name=='{{pool_name}}'].members[*].name[]"

    - set_fact:
        pool_members_ip: "{{pool_members_ip + [item.split(':')[0]]}}"
      loop: "{{pool_members}}"

    - debug: msg="{{pool_members_ip}}"

    # If there are any members on the BIG-IP that are not present in the EPG, then delete them
    - name: Find the members to be deleted, if any
      set_fact:
        members_to_be_deleted: "{{ pool_members_ip | difference(epg_members) }}"

    - debug: msg="{{members_to_be_deleted}}"

    - name: Delete pool members from the BIG-IP
      bigip_pool_member:
        provider: "{{provider}}"
        state: "absent"
        name: "{{item}}"
        port: "{{pool_port}}"
        pool: "{{pool_name}}"
        preserve_node: yes
        partition: "{{partition}}"
      loop: "{{members_to_be_deleted}}"

Ansible Tower's scheduling feature can be used to schedule this playbook to run every minute, every hour, or once per day, depending on how often the application is expected to change and how important it is for the configuration on Cisco ACI and the BIG-IP to stay in sync.
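Before moving on to the ServiceCenter option, here is the sample inventory mentioned above. It is a minimal sketch with assumed file names; the group name must match the playbook's hosts: aci value, and the APIC credentials are read by the playbook from the ANSIBLE_NET_USERNAME and ANSIBLE_NET_PASSWORD environment variables.

# inventory.yml - hypothetical inventory for the playbook above
all:
  children:
    aci:
      hosts:
        10.192.73.xx:    # APIC address; referenced as {{inventory_hostname}} in the aci_rest task

# Running it from a shell (commands shown as comments):
#   export ANSIBLE_NET_USERNAME=admin
#   export ANSIBLE_NET_PASSWORD='<apic-password>'
#   ansible-playbook -i inventory.yml dynamic_ep_attach_detach.yml

When the playbook is handed to Ansible Tower instead, the same inventory and environment variables can be defined as a Tower inventory and credential, and the playbook is attached to a job template that the scheduler runs at the chosen interval.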
F5 ACI ServiceCenter

To learn more about the integration: https://www.f5.com/cisco

The F5 ACI ServiceCenter is installed on the APIC controller. With this option, automation is used only to create the initial EPG to pool mapping. Once the mapping is created, the F5 ACI ServiceCenter handles the dynamic sizing of pools based on events generated by APIC. Events are generated when a server is learned or unlearned on an EPG; the F5 ACI ServiceCenter listens for these events and accordingly adds or removes pool members on the BIG-IP.

Sample playbook to deploy the mapping configuration on the BIG-IP through the F5 ACI ServiceCenter:

---
- name: Deploy EPG to Pool mapping
  hosts: localhost
  gather_facts: false
  connection: local

  vars:
    apic_ip: "10.192.73.xx"
    big_ip: "10.192.73.xx"
    partition: "Dynamic"

  tasks:

    - name: Login to APIC
      uri:
        url: https://{{apic_ip}}/api/aaaLogin.json
        method: POST
        validate_certs: no
        body_format: json
        body:
          aaaUser:
            attributes:
              name: "admin"
              pwd: "******"
        headers:
          content_type: "application/json"
        return_content: yes
      register: cookie

    - debug: msg="{{cookie['cookies']['APIC-cookie']}}"

    - set_fact:
        token: "{{cookie['cookies']['APIC-cookie']}}"

    - name: Login to BIG-IP
      uri:
        url: https://{{apic_ip}}/appcenter/F5Networks/F5ACIServiceCenter/loginbigip.json
        method: POST
        validate_certs: no
        body:
          url: "{{big_ip}}"
          user: "admin"
          password: "admin"
        body_format: json
        headers:
          DevCookie: "{{token}}"

    # The body of this request defines the mapping of pool to EPG.
    # Here we are mapping pool 'web_pool' to EPG 'internalEPG' which belongs to APIC tenant 'TenantDemo'
    - name: Deploy AS3 dynamic EP mapping
      uri:
        url: https://{{apic_ip}}/appcenter/F5Networks/F5ACIServiceCenter/updateas3data.json
        method: POST
        validate_certs: no
        body:
          url: "{{big_ip}}"
          partition: "{{partition}}"
          application: "DemoApp1"
          json:
            class: Application
            template: http
            serviceMain:
              class: Service_HTTP
              virtualAddresses:
                - 10.168.56.100
              pool: web_pool
            web_pool:
              class: Pool
              monitors:
                - http
              members:
                - servicePort: 80
                  serverAddresses: []
                - addressDiscovery: event
                  servicePort: 80
            constants:
              class: Constants
              serviceCenterEPG:
                web_pool:
                  tenant: TenantDemo
                  application: AppProfile
                  epg: internalEPG
        body_format: json
        status_code:
          - 202
          - 200
        headers:
          DevCookie: "{{token}}"
        return_content: yes
      register: complete_info

    - name: Get task ID of the above request
      set_fact:
        task_id: "{{ complete_info.json.message.taskId}}"
      when: complete_info.json.code == 202

    - name: Get deployment status
      uri:
        url: https://{{apic_ip}}/appcenter/F5Networks/F5ACIServiceCenter/getasynctaskresponse.json
        method: POST
        validate_certs: no
        body:
          taskId: "{{task_id}}"
        body_format: json
        headers:
          DevCookie: "{{token}}"
        return_content: yes
      register: result
      until: result.json.message.message != "in progress"
      retries: 5
      delay: 2
      when: task_id is defined

    - name: Display final result
      debug:
        var: result

After this configuration is deployed, adding and deleting pool members and keeping the configuration in sync is the responsibility of the F5 ACI ServiceCenter. A quick way to verify the result from Ansible is sketched below.
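To confirm that the ServiceCenter is keeping the BIG-IP pool in step with the EPG, the pool can be inspected with a short playbook as well. The sketch below is a hypothetical verification play: it reuses the bigip_device_facts query pattern from the earlier playbook, and assumes the pool deployed above (web_pool in the Dynamic partition under application DemoApp1) and admin credentials on the BIG-IP.

- name: Verify pool members managed by the F5 ACI ServiceCenter
  hosts: localhost
  connection: local
  gather_facts: false

  vars:
    provider:
      server: "10.192.73.xx"      # BIG-IP the mapping was deployed to
      user: "admin"
      password: "admin"
      server_port: "443"
      validate_certs: "false"

  tasks:
    - name: Collect LTM pool information
      bigip_device_facts:
        provider: "{{provider}}"
        gather_subset:
          - ltm-pools
      register: bigip_facts

    # The pool created by AS3 lives under /Dynamic/DemoApp1/, but its short name is still 'web_pool'
    - name: Show the members of web_pool
      debug:
        msg: "{{ bigip_facts.ltm_pools | json_query(query_string) }}"
      vars:
        query_string: "[?name=='web_pool'].members[*].name[]"

If the EPG gains or loses endpoints, re-running this check should show the pool membership changing without any further playbook runs against the BIG-IP.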
Takeaways

Both methods are highly effective and usable. The choice of which one to use comes down to the operational model in your environment. Here are some pros and cons to help make the decision on which platform to use for automation.

Ansible Tower

Pros
- No dependency on any other tools
- Fits better with a broader company automation strategy of using Ansible for all network automation

Cons
- Playbook execution and scheduling have to be managed in Ansible Tower
- If more logic is needed beyond what is described above, additional playbooks have to be written and maintained
- Playbook execution is based on a schedule and is not event driven

F5 ACI ServiceCenter

Pros
- Only the pool to EPG mapping has to be deployed through automation; everything else is handled by the application
- Provides a user interface to view the pool member to EPG mapping once deployed, and to view discrepancies if any
- Limited automation knowledge is needed; the heavy lifting is done by the application
- Dynamically adding/deleting pool members is event driven: an action is taken as soon as members are learned or unlearned by the F5 ACI ServiceCenter

Cons
- Another tool is required
- No customization of the pool to EPG mapping; only one-to-one EPG to pool mapping is available

References

Learn about the F5 ACI ServiceCenter and other Cisco integrations: https://f5.com/cisco
Download the F5 ACI ServiceCenter: https://dcappcenter.cisco.com/f5-aci-servicecenter.html
Lab to execute Ansible playbooks: https://dcloud.cisco.com (Lab name: F5 and Ansible)