F5 & Cisco ACI Essentials - Take advantage of Policy Based Redirect
Different applications and environments have unique requirements for how traffic should be handled. Some applications, because of their functionality or a business need, require that the application servers see the real IP address of the client making the request. When a request arrives at the BIG-IP, the BIG-IP can either translate the client's source IP or leave it intact. To keep it intact, the 'Source Address Translation' setting on the BIG-IP virtual server is set to 'None'. As simple as toggling one setting sounds, the change significantly alters traffic flow behavior. Let's walk through an example with actual values, starting with a standalone BIG-IP that uses a single interface for all traffic (one-arm):

Client – 10.168.56.30
BIG-IP Virtual IP – 10.168.57.11
BIG-IP Self IP – 10.168.57.10
Server – 192.168.56.30

Scenario 1: With SNAT
From client: Src: 10.168.56.30, Dest: 10.168.57.11
From BIG-IP to server: Src: 10.168.57.10 (self IP), Dest: 192.168.56.30
The server responds to 10.168.57.10 and the BIG-IP takes care of forwarding the traffic back to the client. Here the application server sees the IP 10.168.57.10, not the client IP.

Scenario 2: No SNAT
From client: Src: 10.168.56.30, Dest: 10.168.57.11
From BIG-IP to server: Src: 10.168.56.30, Dest: 192.168.56.30
The server responds to 10.168.56.30, and this is where the complication arises: the return traffic needs to go back through the BIG-IP, not directly to the client. One way to achieve this is to set the server's default gateway to the BIG-IP self IP so the server sends return traffic to the BIG-IP. But what if the server's default gateway cannot be changed, for whatever reason? This is where Policy-Based Redirect (PBR) helps. The server's default gateway points to the ACI fabric, and the fabric intercepts the return traffic and sends it over to the BIG-IP.
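Before moving on to PBR itself, here is how the 'Source Address Translation: None' setting from scenario 2 looks if you push the BIG-IP configuration with Ansible. This is a minimal sketch only: the virtual server and pool names (pbr_demo_vs, web_pool) and the provider credentials are placeholders, not values mandated by this article.

- name: Create a virtual server that preserves the client IP (no SNAT)
  bigip_virtual_server:
    name: pbr_demo_vs
    destination: 10.168.57.11
    port: 80
    pool: web_pool
    snat: "None"        # change to "Automap" to source-NAT client traffic to a self IP instead
    profiles:
      - http
    provider:
      server: "{{ bigip_ip }}"
      user: admin
      password: "{{ bigip_password }}"
      validate_certs: no

With snat set to "None", the return-path problem described above still has to be solved, either by pointing the server default gateway at the BIG-IP or by using PBR in the ACI fabric.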
With PBR, the advantages are:
- The server(s) default gateway does not need to point to the BIG-IP; it can point to the ACI fabric.
- The real client IP is preserved for the entire traffic flow.
- Server-originated traffic does not have to pass through the BIG-IP, so there is no need to configure a forwarding virtual server on the BIG-IP to handle it. If the volume of server-originated traffic is high, carrying it on the BIG-IP would add unnecessary load.

Before we get deeper into the topic of PBR, below are a few links to help you refresh some of the Cisco ACI and BIG-IP concepts:
ACI fundamentals: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/aci-fundamentals/b_ACI-Fundamentals.html
SNAT and Automap: https://support.f5.com/csp/article/K7820
BIG-IP modes of deployment: https://support.f5.com/csp/article/K96122456#link_02_01

Now let's look at what it takes to configure PBR using a standalone BIG-IP Virtual Edition in one-arm mode.
Network diagram for reference:
To use the PBR feature on APIC, a service graph is a MUST.
Details on L4-L7 service graph on APIC
To get hands-on experience deploying a service graph (without PBR)

Configuration on APIC

1) Bridge domain 'F5-BD'
Under Tenant->Networking->Bridge domains->'F5-BD'->Policy
IP Data plane learning - Disabled

2) L4-L7 Policy-Based Redirect
Under Tenant->Policies->Protocol->L4-L7 Policy based redirect, create a new one
Name: 'bigip-pbr-policy'
L3 destinations: BIG-IP self IP and MAC
IP: 10.168.57.10
MAC: Find the MAC of the interface the above self IP is assigned to by logging into the BIG-IP (example: 00:50:56:AC:D2:81)

3) Logical device cluster
Under Tenant->Services->L4-L7, create a logical device
Managed – unchecked
Name: 'pbr-demo-bigip-ve'
Service Type: ADC
Device Type: Virtual (in this example)
VMM domain (choose the appropriate VMM domain)
Devices: Add the BIG-IP VM from the dropdown and assign it an interface
Name: '1_1', VNIC: 'Network Adaptor 2'
Cluster interfaces
Name: consumer, Concrete interface: Device1/[1_1]
Name: provider, Concrete interface: Device1/[1_1]

4) Service graph template
Under Tenant->Services->L4-L7->Service graph templates, create a service graph template
Give the graph a name: 'pbr-demo-sgt' and then drag and drop the logical device cluster (pbr-demo-bigip-ve) to create the service graph
ADC: one-arm
Route redirect: true

5) Click on the service graph created and then go to the Policy tab; make sure the Connections for the connectors C1 and C2 are set as follows:
Connector C1
Direct connect – False (not mandatory to set to 'True' because PBR is not enabled on the consumer connector for the consumer-to-VIP traffic)
Adjacency type – L3
Connector C2
Direct connect - True
Adjacency type - L3

6) Apply the service graph template
Right click on the service graph and apply the service graph
Choose the appropriate consumer end point group ('App') and provider end point group ('Web') and provide a name for the new contract
For the connector select the following:
BD: 'F5-BD'
L3 destination – checked
Redirect policy – 'bigip-pbr-policy'
Cluster interface – 'provider'
Once the service graph is deployed, it is in applied state and the network path between the consumer, BIG-IP and provider has been successfully set up on the APIC.

7) Verify the connector configuration for PBR. Go to Device selection policy under Tenant->Services->L4-L7. Expand the menu and click on the device selection policy deployed for your service graph.
For the consumer connector, where PBR is not enabled:
Connector name - Consumer
Cluster interface - 'provider'
BD - 'F5-BD'
L3 destination – checked
Redirect policy – leave blank (no selection)
For the provider connector, where PBR is enabled:
Connector name - Provider
Cluster interface - 'provider'
BD - 'F5-BD'
L3 destination – checked
Redirect policy – 'bigip-pbr-policy'
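If you prefer to drive the APIC side from Ansible rather than the GUI, step 2 above (the L4-L7 policy-based redirect policy) can also be pushed through the APIC REST API with the aci_rest module, using the same self IP and MAC values from this example. This is a hedged sketch only: the object class and DN path (vnsSvcRedirectPol under the tenant's svcCont container) are assumptions drawn from the ACI PBR object model rather than from this article, and the tenant name variable is a placeholder, so confirm both with the APIC API Inspector for your release before relying on it.

- name: Create the L4-L7 policy-based redirect policy pointing at the BIG-IP self IP and MAC
  aci_rest:
    host: "{{ apic_ip }}"
    username: admin
    password: "{{ apic_password }}"
    validate_certs: no
    method: post
    # DN format is an assumption - verify with the API Inspector on your APIC version
    path: /api/node/mo/uni/tn-{{ tenant_name }}/svcCont/svcRedirectPol-bigip-pbr-policy.json
    content:
      vnsSvcRedirectPol:
        attributes:
          name: bigip-pbr-policy
        children:
          - vnsRedirectDest:
              attributes:
                ip: 10.168.57.10
                mac: 00:50:56:AC:D2:81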
Configuration on BIG-IP

1) VLAN/Self IP/Default route
Default route – 10.168.57.1
Self IP – 10.168.57.10
VLAN – 4094 (untagged) – for a VE, tagging is taken care of by vCenter

2) Nodes/Pool/VIP
VIP – 10.168.57.11
Source address translation on VIP: None

3) iRule (at the end of the article) that can be helpful for debugging

A few differences in configuration when the BIG-IP is a Virtual Edition and is set up as a high availability pair:
1) BIG-IP: Set MAC Masquerade (https://support.f5.com/csp/article/K13502)
2) APIC: Logical device cluster
Promiscuous mode – enabled
Add both BIG-IP devices as part of the cluster
3) APIC: L4-L7 Policy-Based Redirect
L3 destinations: Enter the floating BIG-IP self IP and the MAC masquerade address

------------------------------------------------------------------------------------------------------------------------------------------------------------------

Configuration is complete, so let's take a look at the traffic flows:
Step 1: Client -> F5 BIG-IP -> Server
Step 2: Server -> F5 BIG-IP -> Client
In step 2, when the traffic is returned towards the client, ACI uses the self IP and MAC that were defined in the L4-L7 redirect policy to send the traffic to the BIG-IP.
"==================================================" } Output seen in /var/log/ltm on the BIG-IP, look at the event <SERVER_CONNECTED> Scenario 1: No SNAT -> Client IP is preserved Rule /Common/connections <HTTP_REQUEST>: Src: 10.168.56.30 -> Dest: 10.168.57.11 Rule /Common/connections <SERVER_CONNECTED>: Src: 10.168.56.30 Mapped -> 10.168.56.30 -> Dest: 192.168.56.30 Rule /Common/connections <HTTP_RESPONSE>: Src: 192.168.56.30 -> Dest: 10.168.56.30 If you are curious of the iRule output if SNAT is enabled on the BIG-IP - Enable AutoMap on the virtual server on the BIG-IP Scenario 2: With SNAT -> Client IP not preserved Rule /Common/connections <HTTP_REQUEST>: Src: 10.168.56.30 -> Dest: 10.168.57.11 Rule /Common/connections <SERVER_CONNECTED>: Src: 10.168.56.30 Mapped -> 10.168.57.10-> Dest: 192.168.56.30 Rule /Common/connections <HTTP_RESPONSE>: Src: 192.168.56.30 -> Dest: 10.168.56.30 References: ACI PBR whitepaper: https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739971.html Troubleshooting guide: https://www.cisco.com/c/dam/en/us/td/docs/switches/datacenter/aci/apic/sw/4-x/troubleshooting/Cisco_TroubleshootingApplicationCentricInfrastructureSecondEdition.pdf Layer4-Layer7 services deployment guide https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/4-x/L4-L7-services/Cisco-APIC-Layer-4-to-Layer-7-Services-Deployment-Guide-401/Cisco-APIC-Layer-4-to-Layer-7-Services-Deployment-Guide-401_chapter_011.html Service graph: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/4-x/L4-L7-services/Cisco-APIC-Layer-4-to-Layer-7-Services-Deployment-Guide-401/Cisco-APIC-Layer-4-to-Layer-7-Services-Deployment-Guide-401_chapter_0111.html https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/4-x/L4-L7-services/Cisco-APIC-Layer-4-to-Layer-7-Services-Deployment-Guide-401.pdf2.6KViews1like3CommentsF5 and Cisco ACI Essentials - Design guide for a Single Pod APIC cluster
F5 and Cisco ACI Essentials - Design guide for a Single Pod APIC cluster

Deployment considerations
It is usually an easy decision to include BIG-IP in your ACI deployment, since BIG-IP is a mature, feature-rich ADC solution. Where time is spent is in nailing down the design and the deployment options for the BIG-IP in the environment. Below we discuss a few of the most commonly asked questions.

SNAT or no SNAT
There are various options for inserting the BIG-IP into the ACI environment. One is to use the BIG-IP as a gateway for servers or as a routing next hop for routing instances. Another option is to use Source Network Address Translation (SNAT) on the BIG-IP; however, with SNAT enabled, visibility into the real source IP address is lost. If preserving the source IP is a requirement, ACI's Policy-Based Redirect (PBR) can be used to make sure the return traffic goes back through the BIG-IP.

BIG-IP redundancy
F5 BIG-IP can be deployed in different high-availability modes; the two common deployment modes are active-active and active-standby. Various design considerations, such as endpoint movement during failovers, MAC masquerade, source-MAC-based forwarding, Link Layer Discovery Protocol (LLDP), and IP aging should also be taken into account for each of the deployment modes.

Multi-tenancy
Multi-tenancy is supported by both Cisco ACI and F5 BIG-IP, in different ways. There are a few ways that multi-tenancy constructs on ACI can be mapped to multi-tenancy on BIG-IP. The constructs revolve around tenants, virtual routing and forwarding (VRF) instances, route domains, and partitions. Multi-tenancy can also be based on the BIG-IP form factor (appliance, Virtual Edition, and/or virtual clustered multiprocessing (vCMP)).

Tighter integration
Once a design option is selected, what more can be done from an operational or automation perspective now that we have a BIG-IP and ACI deployment? The F5 ACI ServiceCenter, an application developed on the Cisco ACI App Center platform, is built for exactly that purpose. It is an integration point between the F5 BIG-IP and Cisco ACI that gives an APIC administrator a unified way to manage both L2-L3 and L4-L7 infrastructure. Once day-0 activities are performed and BIG-IP is deployed within the ACI fabric using whichever design option fits your environment, the F5 ACI ServiceCenter can be used to handle day-1 and day-2 operations. These day-1 and day-2 operations are well suited for both new/greenfield and existing/brownfield deployments of BIG-IP and ACI. The integration is loosely coupled, which allows the F5 ACI ServiceCenter to be installed or uninstalled with no disruption to traffic flow and no effect on the F5 BIG-IP and Cisco ACI configuration. Check here to find out more.

All of the above topics and more are discussed in detail here in the single pod white paper.
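To make the multi-tenancy mapping discussed above concrete, here is a minimal Ansible sketch that mirrors an ACI tenant as a BIG-IP partition with its own route domain. The tenant/partition name, route domain ID, and the provider dictionary are illustrative assumptions, not values taken from the white paper.

- name: Create a route domain to isolate the tenant's traffic
  bigip_routedomain:
    name: rd_tenant_demo
    id: 10
    provider: "{{ provider }}"

- name: Create a partition mapped to the ACI tenant and make the route domain its default
  bigip_partition:
    name: TenantDemo
    route_domain: 10
    provider: "{{ provider }}"

Each ACI tenant/VRF can then be given its own partition and route domain on the BIG-IP, keeping overlapping address space and configuration objects separated per tenant.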
Unmanaged Mode - what it means for ACI and BIG-IP integration [End of Life]

The F5 and Cisco APIC integration based on the device package and iWorkflow is End Of Life. The latest integration is based on the Cisco AppCenter named 'F5 ACI ServiceCenter'. Visit https://f5.com/cisco for updated information on the integration.

The adoption of Cisco ACI with the APIC controller continues to gain traction in the market. With their latest major APIC release, 1.2.1i, Cisco has streamlined how ADCs are connected to the fabric. There have traditionally been two methods of connecting services:
- Service insertion with a device package and a service graph (Managed)
- Connecting a device as an endpoint device into an EPG (Unmanaged)

What Cisco has done is simplify how a services device can be connected into the fabric without being managed by the APIC. This is known as "Unmanaged Mode", as opposed to the "Managed Mode" where a device package is used. Instead of the usual multi-step manual configuration process for specifying the network configuration in APIC, the attachment has been consolidated into the service graph. Before, it was necessary to manually create static bindings of the VLAN to the EPG (provider and consumer) and assign the physical domain to the EPG. It was also required to bind contracts between multiple provider and consumer EPGs. Now, all you have to do is go into the service graph and specify connectivity just as if you were building a managed service graph. There is now one common location and workflow for configuring services, and the process is simplified.

Advantages
- Service graph representation with Unmanaged and Managed modes mixed (some devices managed by APIC, some devices not managed by APIC)
- Unified view of a service chain

Why it is needed
- Customers have requested an "Unmanaged" mode for their custom devices
- Need to manage their own security policies outside of the networking team
- Need to use their existing orchestration infrastructure for a particular device in the chain
- Need to use advanced vendor-specific functionality

What this means for BIG-IP integration
BIG-IP is attached as an EPG, but can now be represented in this mode within a service graph.

Difference between Managed and Unmanaged mode
- Unmanaged Mode (device managed externally): service chaining (traffic redirection) through ACI
- Managed Mode (device managed by the device package): service chaining (traffic redirection) through ACI, plus device configuration through APIC (policy-based configuration)

Some prerequisites for deploying an Unmanaged logical device cluster (a minimal Ansible sketch of the BIG-IP side of these prerequisites follows the references below):
- BIG-IP: Define VLANs, self IPs and a trunk (if needed)
- APIC: VLAN pool with static allocation, plus fabric access policies to ensure physical connectivity to the BIG-IP

Click here to view a video with more details on how to deploy a BIG-IP in Unmanaged mode on APIC: https://www.youtube.com/watch?v=OJPEYzNGD3A

Once the BIG-IP is deployed as an Unmanaged device cluster with traffic redirection through ACI, configure it with nodes, pools, monitors, virtual servers and all other features required by your application as you always do, using the BIG-IP GUI/CLI etc. (not through Cisco ACI).

References:
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/L4-L7_Services_Deployment/guide/b_L4L7_Deploy_ver121x/b_L4L7_Deploy_ver121x_chapter_010000.pdf
http://blogs.cisco.com/datacenter/new-innovations-for-l4-7-network-services-integration-with-ciscos-aci-approach
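A minimal sketch of the BIG-IP prerequisites mentioned above, using the F5 Ansible modules. The interface number, VLAN tag, object names and addresses are placeholders for your environment, and the provider dictionary is assumed to hold the BIG-IP connection details.

- name: Create the VLAN facing the ACI fabric
  bigip_vlan:
    name: aci_vlan
    tag: 1234
    tagged_interfaces:
      - 1.1
    provider: "{{ provider }}"

- name: Create the self IP used to reach the fabric
  bigip_selfip:
    name: aci_selfip
    address: 192.168.10.10
    netmask: 255.255.255.0
    vlan: aci_vlan
    allow_service: default
    provider: "{{ provider }}"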
F5 and Cisco ACI Essentials: Automate automate automate !!!

This article focuses on the automation support provided by BIG-IP and Cisco ACI and on how automation tools, specifically Ansible, can be used to automate different use cases. Before getting into the weeds, let's discuss and understand BIG-IP's and Cisco ACI's automation strategies.

BIG-IP automation strategy
The BIG-IP automation strategy is simple: abstract as much complexity as possible from the user and give the user an easy button to deploy their BIG-IP configuration. This honestly means different things to different people. Some prefer sending a single API call to perform one action (a one-to-one mapping between API and configuration). Others prefer a more declarative approach where one API call performs multiple actions, basically a one-to-many mapping between API (1) and configuration (N). A great link to refresh and learn about the different options: https://www.f5.com/products/automation-and-orchestration

Cisco ACI automation strategy
The Cisco Application Policy Infrastructure Controller (APIC) is the network controller for the ACI fabric. APIC is the unified point of automation and management for the Cisco ACI fabric, policy enforcement, and health monitoring. The Cisco ACI programmability model provides complete programmatic access through APIC. Click here to learn more: https://developer.cisco.com/site/aci/

Automation tools
There are a lot of automation tools being discussed for network automation, but the one that comes up in every customer conversation is Ansible. Its simplicity, maturity and community adoption have made it very popular. In this article we are going to focus on using Ansible to automate a service discovery use case.

Use Case: Dynamic EP attach/detach
Let's take the example of a simple HTTP web service made highly available and secure using a BIG-IP virtual IP address. This web service has a number of backend web servers hosting the application, and the IPs of these web servers are configured on the BIG-IP as pool members. The same web server IPs are learned as endpoints in the ACI fabric and are part of an End Point Group (EPG) on the APIC. Hence there is a logical mapping between an EPG on APIC and a pool on the BIG-IP. If the application adds or deletes web servers, perhaps to save cost or to deal with an increase or decrease in traffic, the web server IPs are automatically learned or unlearned on APIC, but an admin still has to add or remove those web server IPs from the pool on the BIG-IP. This can be a burden on the network admin, especially if it happens often. Here is where automation can help; let's look at how in the next section. More details on the use case can be found at https://devcentral.f5.com/s/articles/F5-Cisco-ACI-Essentials-Dynamic-pool-sizing-using-the-F5-ACI-ServiceCenter

Automation of Use Case: Dynamic EP attach/detach
Automation can be achieved by using Ansible and Ansible Tower, where API calls are made directly to the BIG-IP. Another option is to use the F5 ACI ServiceCenter (a native F5 ACI integration) to automate this particular use case.

Ansible and Ansible Tower
To learn more about Ansible and Ansible Tower: https://www.ansible.com/products/tower
With this method of automation, separate API calls are made directly to the ACI and the BIG-IP. Below is a sample playbook that adds and deletes pool members on a BIG-IP pool based on the members of a particular EPG. The mapping of pool to EPG is provided as input to the playbook.
- name: Dynamic end point attach/detach
  hosts: aci
  connection: local
  gather_facts: false
  vars:
    epg_members: []
    pool_members: []
    pool_members_ip: []
    bigip_ip: 10.192.73.xx
    bigip_password: admin
    bigip_username: admin
    # Here we are mapping pool 'dynamic_pool' to EPG 'internalEPG' which belongs to APIC tenant 'TenantDemo'
    app_profile_name: AppProfile
    epg_name: internalEPG
    partition: Common
    pool_name: dynamic_pool
    pool_port: 80
    tenant_name: TenantDemo
  tasks:
    - name: Setup provider
      set_fact:
        provider:
          server: "{{ bigip_ip }}"
          user: "{{ bigip_username }}"
          password: "{{ bigip_password }}"
          server_port: "443"
          validate_certs: "false"

    - name: Get end points learned from End Point group
      aci_rest:
        action: "get"
        uri: "/api/node/mo/uni/tn-{{ tenant_name }}/ap-{{ app_profile_name }}/epg-{{ epg_name }}.json?query-target=subtree&target-subtree-class=fvIp"
        host: "{{ inventory_hostname }}"
        username: "{{ lookup('env', 'ANSIBLE_NET_USERNAME') }}"
        password: "{{ lookup('env', 'ANSIBLE_NET_PASSWORD') }}"
        validate_certs: "false"
      register: eps

    - name: Get the IP's of the servers part of the EPG
      set_fact:
        epg_members: "{{ epg_members + [item] }}"
      loop: "{{ eps | json_query(query_string) }}"
      vars:
        query_string: "imdata[*].fvIp.attributes.addr"
      no_log: True

    - name: Get only the IPv4 IP's
      set_fact:
        epg_members: "{{ epg_members | ipv4 }}"

    - name: Adding Pool members to the BIG-IP
      bigip_pool_member:
        provider: "{{ provider }}"
        state: "present"
        name: "{{ item }}"
        host: "{{ item }}"
        port: "{{ pool_port }}"
        pool: "{{ pool_name }}"
        partition: "{{ partition }}"
      loop: "{{ epg_members }}"

    - name: Query BIG-IP facts
      bigip_device_facts:
        provider: "{{ provider }}"
        gather_subset:
          - ltm-pools
      register: bigip_facts

    - name: "Show members belonging to pool {{ pool_name }}"
      set_fact:
        pool_members: "{{ pool_members + [item] }}"
      loop: "{{ bigip_facts.ltm_pools | json_query(query_string) }}"
      vars:
        query_string: "[?name=='{{ pool_name }}'].members[*].name[]"

    - set_fact:
        pool_members_ip: "{{ pool_members_ip + [item.split(':')[0]] }}"
      loop: "{{ pool_members }}"

    - debug: "msg={{ pool_members_ip }}"

    # If there are any members on the BIG-IP that are not present in the EPG, then delete them
    - name: Find the members to be deleted if any
      set_fact:
        members_to_be_deleted: "{{ pool_members_ip | difference(epg_members) }}"

    - debug: "msg={{ members_to_be_deleted }}"

    - name: Delete Pool members from the BIG-IP
      bigip_pool_member:
        provider: "{{ provider }}"
        state: "absent"
        name: "{{ item }}"
        port: "{{ pool_port }}"
        pool: "{{ pool_name }}"
        preserve_node: yes
        partition: "{{ partition }}"
      loop: "{{ members_to_be_deleted }}"

Ansible Tower's scheduling feature can be used to run this playbook every minute, every hour, or once per day, depending on how often the application is expected to change and how important it is for the Cisco ACI and BIG-IP configuration to stay in sync.

F5 ACI ServiceCenter
To learn more about the integration: https://www.f5.com/cisco
The F5 ACI ServiceCenter is installed on the APIC controller. Here automation can be used to create the initial EPG-to-pool mapping. Once the mapping is created, the F5 ACI ServiceCenter handles the dynamic sizing of pools based on events generated by APIC. Events are generated when a server is learned or unlearned on an EPG; the F5 ACI ServiceCenter listens for these events and accordingly adds or removes pool members on the BIG-IP.
Sample playbook to deploy the mapping configuration on the BIG-IP through the F5 ACI ServiceCenter:

---
- name: Deploy EPG to Pool mapping
  hosts: localhost
  gather_facts: false
  connection: local
  vars:
    apic_ip: "10.192.73.xx"
    big_ip: "10.192.73.xx"
    partition: "Dynamic"
  tasks:
    - name: Login to APIC
      uri:
        url: https://{{ apic_ip }}/api/aaaLogin.json
        method: POST
        validate_certs: no
        body_format: json
        body:
          aaaUser:
            attributes:
              name: "admin"
              pwd: "******"
        headers:
          content_type: "application/json"
        return_content: yes
      register: cookie

    - debug: msg="{{ cookie['cookies']['APIC-cookie'] }}"

    - set_fact:
        token: "{{ cookie['cookies']['APIC-cookie'] }}"

    - name: Login to BIG-IP
      uri:
        url: https://{{ apic_ip }}/appcenter/F5Networks/F5ACIServiceCenter/loginbigip.json
        method: POST
        validate_certs: no
        body:
          url: "{{ big_ip }}"
          user: "admin"
          password: "admin"
        body_format: json
        headers:
          DevCookie: "{{ token }}"

    # The body of this request defines the mapping of Pool to EPG
    # Here we are mapping pool 'web_pool' to EPG 'internalEPG' which belongs to APIC tenant 'TenantDemo'
    - name: Deploy AS3 dynamic EP mapping
      uri:
        url: https://{{ apic_ip }}/appcenter/F5Networks/F5ACIServiceCenter/updateas3data.json
        method: POST
        validate_certs: no
        body:
          url: "{{ big_ip }}"
          partition: "{{ partition }}"
          application: "DemoApp1"
          json:
            class: Application
            template: http
            serviceMain:
              class: Service_HTTP
              virtualAddresses:
                - 10.168.56.100
              pool: web_pool
            web_pool:
              class: Pool
              monitors:
                - http
              members:
                - servicePort: 80
                  serverAddresses: []
                - addressDiscovery: event
                  servicePort: 80
            constants:
              class: Constants
              serviceCenterEPG:
                web_pool:
                  tenant: TenantDemo
                  application: AppProfile
                  epg: internalEPG
        body_format: json
        status_code:
          - 202
          - 200
        headers:
          DevCookie: "{{ token }}"
        return_content: yes
      register: complete_info

    - name: Get task ID of above request
      set_fact:
        task_id: "{{ complete_info.json.message.taskId }}"
      when: complete_info.json.code == 202

    - name: Get deployment status
      uri:
        url: https://{{ apic_ip }}/appcenter/F5Networks/F5ACIServiceCenter/getasynctaskresponse.json
        method: POST
        validate_certs: no
        body:
          taskId: "{{ task_id }}"
        body_format: json
        headers:
          DevCookie: "{{ token }}"
        return_content: yes
      register: result
      until: result.json.message.message != "in progress"
      retries: 5
      delay: 2
      when: task_id is defined

    - name: Display final result
      debug:
        var: result

After deploying this configuration, adding/deleting pool members and making sure the configuration is in sync is the responsibility of the F5 ACI ServiceCenter.

Takeaways
Both methods are highly effective and usable. The choice of which one to use comes down to the operational model in your environment. Below are some pros and cons to help make the decision on which platform to use for automation.
Ansible Tower
Pros:
- No dependency on any other tools
- Fits better with a broader company automation strategy of using Ansible for all network automation
Cons:
- Playbook execution and scheduling have to be managed using Ansible Tower
- If more logic is needed beyond what is described above, playbooks will have to be written and maintained
- Playbook execution is based on scheduling and is not event driven

F5 ACI ServiceCenter
Pros:
- Only the pool-to-EPG mapping has to be deployed using automation; the rest is handled by the application
- User interface to view the pool member to EPG mapping once deployed, and to view discrepancies if any
- Limited automation knowledge is needed; the heavy lifting is done by the application
- Dynamically adding/deleting pool members is event driven: as members are learned/unlearned, the F5 ACI ServiceCenter takes action
Cons:
- Another tool is required
- Customization of the pool-to-EPG mapping is not available; only one-to-one EPG-to-pool mapping is supported

References
Learn about the F5 ACI ServiceCenter and other Cisco integrations: https://f5.com/cisco
Download the F5 ACI ServiceCenter: https://dcappcenter.cisco.com/f5-aci-servicecenter.html
Lab to execute Ansible playbooks: https://dcloud.cisco.com (Lab name: F5 and Ansible)
Enable Consistent Application Services for Containers with CIS

Kubernetes is all about abstracting away complexity. As Kubernetes continues to evolve, it becomes more intelligent and will become even more powerful when it comes to helping enterprises manage their data center, not just the cloud. While enterprises deal with the challenges of managing different types of modern applications (AI/ML, big data, and analytics) to process that data, they also face the challenge of maintaining top-level network and security policies and gaining better control of the workload, to ensure operational and functional consistency. This is where Cisco ACI and F5 Container Ingress Services come into the picture.

F5 CIS and Cisco ACI
Cisco ACI offers these customers an integrated network fabric for Kubernetes. Recently, F5 and Cisco joined forces by integrating F5 Container Ingress Services (or CIS) with Cisco ACI to bring L4-7 services into the Kubernetes environment and further simplify the user experience of deploying, scaling and managing containerized applications. This integration specifically enables:
- Unified networking: containers, VMs, and bare metal
- Secure multi-tenancy and seamless integration of Kubernetes network policies and ACI policies
- A single point of automation with enhanced visibility for ACI and BIG-IP
- F5 application services natively integrated in container and PaaS environments
One of the key benefits of such an implementation is ACI encapsulation normalization. The ACI fabric, as the normalizer for the encapsulation, allows you to merge different network technologies or encapsulations, be it VLAN or VXLAN, into a single policy model. The BIG-IP, through a simple VLAN connection to ACI and with no need for an additional gateway, can communicate with any service anywhere.

Solution Deployment
To integrate F5 CIS with Cisco ACI for a Kubernetes environment, you perform a series of tasks. Some you perform in the network to set up the Cisco Application Policy Infrastructure Controller (APIC); others you perform on the Kubernetes server(s). Rather than getting down to the nitty-gritty, I will just highlight the steps to deploy the joint solution.

Pre-requisites
The BIG-IP CIS and Cisco ACI joint solution deployment assumes that you have the following in place:
- A working Cisco ACI installation
- ACI integrated with vCenter with a dVS
- The fabric tenant pre-provisioned with the required VRFs/EPGs/L3Outs
- BIG-IP already running for non-container workloads

Deploying Kubernetes Clusters to ACI Fabrics
The following steps will give you a complete cluster configuration.

Step 1. Run the ACI provisioning tool to prepare Cisco ACI to work with Kubernetes
Cisco provides the acc-provision tool to provision the fabric for the Kubernetes VMM domain and to generate a .yaml file that Kubernetes uses to deploy the required Cisco ACI container components. You can download the provisioning tool here. Next, you can use this provisioning tool to generate a sample configuration file that you can edit:
$ acc-provision --sample > aci-containers-config.yaml
We can now edit the sample configuration file to provide information from your network. With that configuration file in hand, you can run the following command to provision the Cisco ACI fabric:
acc-provision -c aci-containers-config.yaml -o aci-containers.yaml -f kubernetes-<version> -a -u [apic username] -p [apic password]

Step 2. Prepare the ACI CNI plugin configuration file
The above command also generates the file aci-containers.yaml that you use after installing Kubernetes.
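For orientation, below is a trimmed-down illustration of what the edited aci-containers-config.yaml (the input to the commands above) can look like. The key names follow the sample that acc-provision generates, but they vary between releases and every value shown here is a placeholder, so always start from the sample file produced by your own version of the tool.

# Illustrative fragment only - regenerate with 'acc-provision --sample' for your release
aci_config:
  system_id: k8s_pod1              # unique name for this Kubernetes cluster
  apic_hosts:
    - 10.192.73.xx                 # APIC management address(es)
  vrf:
    name: k8s_vrf
    tenant: common
  l3out:
    name: k8s_l3out
    external_networks:
      - k8s_ext_epg
net_config:
  node_subnet: 10.1.0.1/16         # node network
  pod_subnet: 10.2.0.1/16          # pod network
  extern_dynamic: 10.3.0.1/24      # dynamically allocated external service IPs
  extern_static: 10.4.0.1/24       # statically allocated external service IPs
  node_svc_subnet: 10.5.0.1/24     # subnet used for the PBR service graph
  kubeapi_vlan: 4011
  service_vlan: 4012
  infra_vlan: 4093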
Step 3. Prepare the Kubernetes nodes - set up networking on the nodes to support the Kubernetes installation
With ACI provisioned, you start preparing networking for the Kubernetes nodes. This includes steps such as configuring the VM's interface toward the ACI fabric, configuring a static route for the multicast subnet, and configuring the DHCP client to work with ACI.

Step 4. Install the Kubernetes cluster
After you provision Cisco ACI and prepare the Kubernetes nodes, you can install Kubernetes and the ACI containers. You can use any installation method appropriate to your environment.

Step 5. Deploy the Cisco ACI CNI plugin
When the Kubernetes cluster is up and running, you can copy the previously generated CNI configuration to the master node and install the CNI plug-in using the following command:
kubectl apply -f aci-containers.yaml
The command installs the following (Pods):
- ACI Containers Host Agent and OpFlex agent in a DaemonSet called aci-containers-host
- Open vSwitch in a DaemonSet called aci-containers-openvswitch
- ACI Containers Controller in a deployment called aci-containers-controller
- Other required configuration, including service accounts, roles, and security contexts
For the authoritative word on this specific implementation, you can click here for the workflow for integrating Kubernetes into Cisco ACI.
After you have performed the previous steps, you can verify the integration in the Cisco APIC GUI. The integration creates a tenant, three EPGs, and a VMM domain. Each tenant has visibility of all the Kubernetes Pods.

Install the BIG-IP Controller
The F5 BIG-IP Controller (k8s-bigip-ctlr), or Container Ingress Services if you aren't familiar with it, is a Kubernetes-native service that provides the glue between container services and BIG-IP. It watches for changes and communicates them to the application services delivered by BIG-IP. These, in turn, keep up with the changes in container environments and enable enforcement of security policies. Once you have a running Kubernetes cluster deployed to the ACI fabric, you can follow these instructions to install the BIG-IP Controller (a minimal deployment sketch is also included after the resource links at the end of this article). Use the kubectl get command to verify that the k8s-bigip-ctlr Pod launched successfully.

BIG-IP as north-south load balancer for external services
For Kubernetes services that are exposed externally and need to be load balanced, Kubernetes does not handle the provisioning of the load balancing; it is expected that the load balancing network function is implemented separately. For these services, Cisco ACI takes advantage of the symmetric policy-based redirect (PBR) feature available on Cisco Nexus 9300-EX and FX leaf switches in ACI mode. This is where BIG-IP Container Ingress Services (CIS) comes into the picture as the north-south load balancer. On ingress, incoming traffic to an externally exposed service is redirected by PBR to the BIG-IP for that particular service. If a Kubernetes cluster contains more than one pod for a particular service, the BIG-IP will load balance the traffic across all the pods for that service. In addition, each new pod is added to the BIG-IP pool automatically.

Conclusion
F5 CIS and Cisco ACI together offer unified control, visibility, security and application services for both container and non-container workloads.

Further Resources
F5 Container Ingress Services: Click here
Cisco ACI and Kubernetes Integration: Click here
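To make the controller installation step above more concrete, here is a minimal sketch of a k8s-bigip-ctlr Deployment. It assumes a bigip-login secret holding the BIG-IP credentials, a bigip-ctlr service account with the required RBAC, and a dedicated BIG-IP partition named kubernetes; the image tag, addresses, partition name and the choice of nodeport pool members are illustrative assumptions, so treat the official CIS installation instructions as the authoritative source.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-bigip-ctlr
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-bigip-ctlr
  template:
    metadata:
      labels:
        app: k8s-bigip-ctlr
    spec:
      serviceAccountName: bigip-ctlr          # assumed to exist with the required RBAC
      containers:
        - name: k8s-bigip-ctlr
          image: f5networks/k8s-bigip-ctlr    # pin a specific tag in real deployments
          env:
            - name: BIGIP_USERNAME
              valueFrom:
                secretKeyRef:
                  name: bigip-login           # assumed secret with username/password keys
                  key: username
            - name: BIGIP_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: bigip-login
                  key: password
          args:
            - --bigip-username=$(BIGIP_USERNAME)
            - --bigip-password=$(BIGIP_PASSWORD)
            - --bigip-url=10.192.73.xx        # BIG-IP management address (placeholder)
            - --bigip-partition=kubernetes    # partition CIS will manage
            - --pool-member-type=nodeport     # or 'cluster', depending on your network design
            - --insecure=true                 # lab only; use proper certificates in production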
Why is an Active-Active configuration not recommended by F5?

I was considering configuring our F5 LTMs in an Active/Active state within Cisco ACI but I read here that this type of configuration is not recommended without having at least one F5 in standby mode.
"F5 does not recommend deploying BIG-IP systems in an Active-Active configuration without a standby device in the cluster, due to potential loss of high availability."
Why is this? With two F5s in Active/Active mode, they should still fail over to each other if one happens to go down. Would it take longer for one device to fail over to another that is active rather than being truly standby?
F5 & Cisco ACI Essentials - ServiceCenter: A troubleshooting tool for network admins

Introduction
The F5 ACI ServiceCenter is an application developed to run only on the Cisco ACI App Center platform. It is an integration point between the F5 BIG-IP and Cisco ACI. The application gives an APIC administrator a unified way to manage both L2-L3 and L4-L7 infrastructure. Once day-0 activities are performed and BIG-IP is deployed within the ACI fabric, the F5 ACI ServiceCenter can be used to handle day-1 and day-2 operations. For more information, check this informative lightboard video.

Topology
Let's take a simple scenario where a pair of BIG-IPs is deployed in an ACI environment and is load balancing an HTTP application. In an ideal world a single administrator would handle all the network-related tasks, including managing the load balancing capabilities, but in reality that is not the case. A typical network administrator knows how the ACI is configured: how many tenants are present on APIC, how many end point groups (EPGs) and bridge domains (BDs) are deployed, how many contracts are deployed to make sure these end point groups are able to talk to each other, and so on. On the other hand, a typical BIG-IP administrator knows which VIPs are configured on the BIG-IP, which monitors are assigned to the HTTP application, how many pools and pool members exist on the BIG-IP, and so on. The network administrator has little or no visibility into the BIG-IP configuration and vice versa, which leads to inconsistency in deployments as well as communication and coordination overhead when making any changes to the network. The F5 ACI ServiceCenter visibility use case aims at bridging this gap: it gives the network administrator some visibility into the BIG-IP configuration and therefore a better picture of the entire network. This makes troubleshooting easier and also provides a means for making informed decisions.

Use Case: Visibility and mapping of APIC and BIG-IP constructs
In this article we will dive right into the visibility use case. Click for details on how to get started using the F5 ACI ServiceCenter. Using the visibility tab of the application, the network administrator can visually see how application workload on the APIC is tied to the BIG-IP. Workload on APIC is learned as endpoints in an end point group. For example, if an application is being served by web servers with IPs 192.168.56.*, those IP addresses will be present as endpoints in an end point group (EPG) on the APIC. From the perspective of the BIG-IP, these web servers are pool members of a particular pool. The F5 ACI ServiceCenter has access to both APIC and BIG-IP, correlates this information and provides a mapping:
BIG-IP: VIP | Pool | Pool Member <=> APIC: Tenant | Application Profile | End Point Group
This gives the network administrator a view of how the APIC workload is associated with the BIG-IP and which applications and virtual IPs are tied to a tenant. Along with the mapping, health statistics from the BIG-IP are collected. The health status is reflected based on the monitor assigned to the VIP, pool and pool members on the BIG-IP. After any change a network administrator makes on the network, he or she can log into the F5 ACI ServiceCenter, check whether the health of the VIPs/pool members was affected, and troubleshoot on the appropriate tenant/app/EPG on the APIC.

Advantages:
- Reduces the time the network administrator spends waiting on the BIG-IP admin to confirm whether the application is healthy.
- The network administrator needs only minimal BIG-IP knowledge to still be able to make an informed decision.
- The F5 ACI ServiceCenter can be used to create a snapshot of the network before and after a network change is made.

Automation
The information can be viewed visually, but network administrators who are automating their environment can also take advantage of the API support provided by the F5 ACI ServiceCenter. Click here for details on the APIs supported by the F5 ACI ServiceCenter.
Let's take the example of collecting a snapshot of the VIP and pool member statistics. Ansible is used in this example, but any automation tool can be used to collect and parse the API response. All API calls are made to the APIC controller.

Ansible playbook for gathering the virtual IP addresses and their status, parsing the data, and copying the content to a file:

---
- name: Get VIP and status
  hosts: localhost
  gather_facts: false
  connection: local
  vars:
    apic_ip: "10.192.73.xx"
    big_ip: "10.192.73.xx"
    partition: "Dynamic"
  tasks:
    - name: Login to APIC
      uri:
        url: https://{{ apic_ip }}/api/aaaLogin.json
        method: POST
        validate_certs: no
        body_format: json
        body:
          aaaUser:
            attributes:
              name: "admin"
              pwd: "<<apic_password>>"
        headers:
          content_type: "application/json"
        return_content: yes
      register: cookie

    - debug: msg="{{ cookie['cookies']['APIC-cookie'] }}"

    - set_fact:
        token: "{{ cookie['cookies']['APIC-cookie'] }}"

    - name: Login to BIG-IP
      uri:
        url: https://{{ apic_ip }}/appcenter/F5Networks/F5ACIServiceCenter/loginbigip.json
        method: POST
        validate_certs: no
        body:
          url: "{{ big_ip }}"
          user: "admin"
          password: "<<bigip_password>>"
        body_format: json
        headers:
          DevCookie: "{{ token }}"

    - name: Get complete visibility information
      uri:
        url: https://{{ apic_ip }}/appcenter/F5Networks/F5ACIServiceCenter/getvipstats.json
        method: POST
        validate_certs: no
        body:
          url: "{{ big_ip }}"
          partition: "{{ partition }}"
        body_format: json
        headers:
          DevCookie: "{{ token }}"
        return_content: yes
      register: complete_info

    - name: Save only VIP information into a fact
      set_fact:
        vip_info: "{{ complete_info.json.vipStats }}"

    - name: Display VIP and status information
      debug:
        var: vip_info

    - name: Set fact with key value pairs
      set_fact:
        vip_status: "{{ vip_status|default([]) + [ {'vip': item.address.split(':')[0], 'status': item.status } ] }}"
      loop: "{{ vip_info | json_query(query_string) }}"
      vars:
        query_string: "[].vip"

    - name: Display key value pairs
      debug:
        msg: "{{ item }}"
      with_items: "{{ vip_status }}"

    - name: Create VIP ip:status file
      blockinfile:
        path: ./vip_status
        create: yes
        block: |
          {{ item.vip }}: {{ item.status }}
        marker: "# {mark} ANSIBLE MANAGED BLOCK {{ item.vip }}"
      with_items: "{{ vip_status }}"

    - name: Delete comments from file
      lineinfile:
        path: ./vip_status
        regexp: '^#'
        state: absent

    - name: Sort the content of the file for easy comparison
      shell: sort -k2 vip_status > before_nw_change_vip

Output of file 'before_nw_change_vip':
10.168.56.50: available

Ansible playbook for gathering the node IP addresses and their status, parsing the data, and copying the content to a file:
---
- name: Get node and status
  hosts: localhost
  gather_facts: false
  connection: local
  vars:
    apic_ip: "10.192.73.xx"
    big_ip: "10.192.73.xx"
    partition: "Dynamic"
  tasks:
    - name: Login to APIC
      uri:
        url: https://{{ apic_ip }}/api/aaaLogin.json
        method: POST
        validate_certs: no
        body_format: json
        body:
          aaaUser:
            attributes:
              name: "admin"
              pwd: "<<apic_password>>"
        headers:
          content_type: "application/json"
        return_content: yes
      register: cookie

    - debug: msg="{{ cookie['cookies']['APIC-cookie'] }}"

    - set_fact:
        token: "{{ cookie['cookies']['APIC-cookie'] }}"

    - name: Login to BIG-IP
      uri:
        url: https://{{ apic_ip }}/appcenter/F5Networks/F5ACIServiceCenter/loginbigip.json
        method: POST
        validate_certs: no
        body:
          url: "{{ big_ip }}"
          user: "admin"
          password: "<<bigip_password>>"
        body_format: json
        headers:
          DevCookie: "{{ token }}"

    - name: Get complete visibility information
      uri:
        url: https://{{ apic_ip }}/appcenter/F5Networks/F5ACIServiceCenter/getvipstats.json
        method: POST
        validate_certs: no
        body:
          url: "{{ big_ip }}"
          partition: "{{ partition }}"
        body_format: json
        headers:
          DevCookie: "{{ token }}"
        return_content: yes
      register: complete_info

    - name: Save only VIP information into a fact
      set_fact:
        vip_info: "{{ complete_info.json.vipStats }}"

    - debug:
        var: vip_info

    - name: Set fact with key value pairs for pool members
      set_fact:
        node_status: "{{ node_status|default([]) + [ {'ip': item.address, 'status': item.status, 'tenant': item.epgs[0].tenant.name, 'app': item.epgs[0].app.name, 'epg': item.epgs[0].epg.name} ] }}"
      loop: "{{ vip_info | json_query(query_string) }}"
      vars:
        query_string: "[].nodes[]"

    - name: Display key value pairs
      debug:
        msg: "{{ item }}"
      with_items: "{{ node_status }}"

    - name: Create node ip:status file
      blockinfile:
        path: ./node_status
        create: yes
        block: |
          {{ item.ip }}: {{ item.status }}: {{ item.tenant }} {{ item.app }} {{ item.epg }}
        marker: "# {mark} ANSIBLE MANAGED BLOCK {{ item.ip }}"
      with_items: "{{ node_status }}"

    - name: Delete comments from file
      lineinfile:
        path: ./node_status
        regexp: '^#'
        state: absent

    - name: Sort the content of the file for easy comparison
      shell: sort -k2 node_status > before_nw_change_node

Output of file 'before_nw_change_node':
192.168.56.150: available: uni/tn-AspireDemo/ap-AppProfile
192.168.56.151: available: uni/tn-AspireDemo/ap-AppProfile
192.168.56.152: available: uni/tn-AspireDemo/ap-AppProfile
192.168.56.153: available: uni/tn-AspireDemo/ap-AppProfile
192.168.56.154: available: uni/tn-AspireDemo/ap-AppProfile
192.168.56.155: available: uni/tn-AspireDemo/ap-AppProfile
192.168.56.156: available: uni/tn-AspireDemo/ap-AppProfile
192.168.56.157: available: uni/tn-AspireDemo/ap-AppProfile
192.168.56.158: available: uni/tn-AspireDemo/ap-AppProfile
192.168.56.159: available: uni/tn-AspireDemo/ap-AppProfile
192.168.56.167: available: uni/tn-AspireDemo/ap-AppProfile

Once a network change is made, this information can be collected again and compared to determine whether any VIPs or pool members were affected by the change.
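A simple way to perform that comparison, assuming the playbooks are re-run after the change with the sort output redirected to after_nw_change_vip and after_nw_change_node (file names of your choosing), is to diff the before and after snapshots:

- name: Compare the VIP status snapshots taken before and after the network change
  shell: diff before_nw_change_vip after_nw_change_vip
  register: vip_diff
  failed_when: vip_diff.rc > 1   # diff exits 1 when the files differ, >1 on a real error

- name: Show any VIPs whose status changed
  debug:
    var: vip_diff.stdout_lines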
Summary
A few key highlights of the F5 ACI ServiceCenter:
- Free of cost, no license needed
- Installed natively on the APIC, with no external software or hardware component
- Operates in the control plane and does not disrupt traffic flow
- The visibility use case is ideal for both new and existing BIG-IP deployments

Using the F5 ACI ServiceCenter within an ACI environment where BIG-IPs are deployed is a win-win for both network administrators and BIG-IP administrators. Network administrators can use it for visibility into the BIG-IP and to make sure the network is set up correctly to serve the application sitting behind the BIG-IP. BIG-IP administrators can take it for granted that the network is intact and that the network administrators have done their due diligence using the F5 ACI ServiceCenter, and can focus their efforts on application-specific challenges and configuration on the BIG-IP using their day-to-day operational model. And in the ideal world where the BIG-IP and network administrators both have access to the APIC controller, the BIG-IP administrator can also take advantage of the L4-L7 use case provided by the F5 ACI ServiceCenter.
For more details visit: https://www.f5.com/cisco
Operationalizing The Network now available with F5 and Cisco ACI [End of Life]

The F5 and Cisco APIC integration based on the device package and iWorkflow is End Of Life. The latest integration is based on the Cisco AppCenter named 'F5 ACI ServiceCenter'. Visit https://f5.com/cisco for updated information on the integration.

#ACI #SDN With the availability of the F5 device package for Cisco APIC, you can now rapidly provision the (entire) network from top to bottom. A key concern of IT continues to center on the provisioning of network services. Whether eliminating the time-consuming task of manually provisioning network attributes device by device, or trying to eliminate inefficiencies within the broader "network" service provisioning process, the goal is the same: increase the speed and accuracy with which network services are provisioned. SDN and related technologies promise to do just that by operationalizing the network. Last fall Cisco announced its Application Centric Infrastructure (ACI) vision and then focused on the network by bringing its Application Policy Infrastructure Controller™ (APIC) to fruition. Seeing our visions align fully on the need for operationalization of the network with a focus on the application, F5 committed to supporting Cisco APIC by delivering a device package to enable the rapid provisioning of F5's Software Defined Application Services (SDAS). Today we're pleased to announce the availability - at no charge - of that device package. The availability lets customers configure application policies and requirements for F5 services across the L2-7 fabric and subsequently automatically provision the services necessary to ensure the entire stack of network services - from top to bottom, layer 2 through layer 7 - is available when applications need it. You can download the F5 Device Package for Cisco APIC today and learn more about how F5 and Cisco work together:
Cisco Alliance
Cisco Resources on DevCentral
F5 BIG-IP AND CISCO APIC DESIGN GUIDE [End of Life]

The F5 and Cisco APIC integration based on the device package and iWorkflow is End Of Life. The latest integration is based on the Cisco AppCenter named 'F5 ACI ServiceCenter'. Visit https://f5.com/cisco for updated information on the integration.

With customers deploying ACI with service insertion, people are constantly asking both design- and implementation-related questions such as:
- When should I use a one-arm vs a two-arm topology for services?
- Can I use BIG-IP in HA mode?
- Can I use/share my existing BIG-IP with ACI?
- My services and networking teams are separate; how can I maintain that business separation while using ACI?
- How are elastic workloads addressed with BIG-IP and ACI?
We have had discussions with various customers and helped them deploy ACI with F5 service integration based on their network designs and topologies. From these discussions F5 and Cisco have boiled the topics down into a brief design guide.
F5 BIG-IP LTM Service Insertion with Cisco ACI (Design Guide)
Our design guide starts with an introduction to ACI and the benefits of using service insertion. We then go into the F5 device package features and supported functions. With the reader having a baseline understanding of ACI and F5 integration, we discuss the design considerations for service insertion.
To help further your knowledge of F5 BIG-IP and Cisco ACI, the following references are also recommended:
F5 and Cisco Technology Alliance webpage
Cisco Live Presentation – Real world design and deployments with Cisco ACI and F5 LTM service insertion
We are also going to be presenting ACI sessions at F5 Agility in Washington DC (August 4th to August 6th). Please come and join us for the following breakout sessions:
F5 and Cisco ACI integration update - Gain an understanding of the F5 Device Package, and how BIG-IP and BIG-IQ bring L4–7 services into the ACI policy model.
Design/deploy with Cisco ACI using F5 LTM service insertion - Learn from Cisco and F5 experts about the Cisco ACI and BIG-IP LTM architectures, topologies, and use cases.
BIG-IQ Grows UP [End of Life]

The F5 and Cisco APIC integration based on the device package and iWorkflow is End Of Life. The latest integration is based on the Cisco AppCenter named 'F5 ACI ServiceCenter'. Visit https://f5.com/cisco for updated information on the integration.

Today F5 is announcing a new F5® BIG-IQ™ 4.5. This release includes a new BIG-IQ component – BIG-IQ ADC.

Why is 4.5 a big deal?
This release introduces a critical new BIG-IQ component, BIG-IQ ADC. With ADC management, BIG-IQ can finally control basic local traffic management (LTM) policies for all your BIG-IP devices from a single pane of glass. Better still, BIG-IQ's ADC function has been designed with the concept of "roles" deeply ingrained. In practice, this means that BIG-IQ offers application teams a "self-serve portal" through which they can manage load balancing for just the objects they are authorized to access and update. Their changes can be staged so that they don't go live until the network team has approved them. We will post follow-up blogs that dive into the new functions in more detail.
In truth, there are a few caveats around this release. Namely, BIG-IQ requires our customers to be using BIG-IP 11.4.1 or above, and many functions require 11.5 or above. Customers with older TMOS versions still require F5's legacy central management solution, Enterprise Manager. BIG-IQ still can't do some of the things Enterprise Manager provides, such as iHealth integration and advanced analytics, and BIG-IQ can't yet manage some advanced LTM options. Nevertheless, this release will be an essential component of many F5 deployments. And since BIG-IQ is a rapidly growing platform, the feature gaps will be filled before you know it. Better still, we have big plans for adding additional components to the BIG-IQ framework over the coming year. In short, it's time to take a long hard look at BIG-IQ.

What else is new?
There are hundreds of new or modified features in this release. Let me list a few of the highlights by component:

1. BIG-IQ ADC - Role-based central management of ADC functions across the network
· Centralized basic management of LTM configurations
· Monitoring of LTM objects
· High availability and clustering support for BIG-IP devices and application-centric manageability services
· Pool member management (enable/disable)
· Centralized iRules management (though not editing)
· Role-based management
· Staging and manual deployment of changes

2. BIG-IQ Cloud - Enhanced connectivity and partner integration
· Expanded orchestration and management of cloud platforms via 3rd-party developers
· Connector for VMware NSX and (early access) connector for Cisco ACI
· Improved customer experience via workflows and integrations
· Improved tenant isolation on device and deployment

3. BIG-IQ Device - Manage physical and virtual BIG-IP devices from a single pane of glass
· Support for VE volume licensing
· Management of basic device configuration & templates
· UCS backup scheduling
· Enhanced upgrade advisor checks
4. BIG-IQ Security - Centralizes security policy deployment, administration, and management
· Centralized feature support for BIG-IP AFM
· Centralized policy support for BIG-IP ASM
· Consolidated DDoS and logging profiles for AFM/ASM
· Enhanced visibility and notifications
· API documentation for ASM
· UI enhancements for AFM policy management

My next blog will include a video demonstrating the new BIG-IQ ADC component and showing how it enhances collaboration between the networking and application teams with fine-grained RBAC.