cisco
telnet to server from F5
I have an F5 BIG-IP 4200 on code version 11.4 and I cannot seem to telnet to anything on a specific port. I am using route domains, so I was following this article: http://support.f5.com/kb/en-us/solutions/public/10000/400/sol10467.html

This does not appear to be working, though, as I am still not able to telnet to anything that I know should be working. This is the error I get:

    run util telnet 2620:0000:0C10:F501:0000:AD00:172.28.141.168 453
    Trying 2620:0:c10:f501:0:fe4d:ac1c:8da8...
    telnet: connect to address 2620:0:c10:f501:0:fe4d:ac1c:8da8: Network is unreachable

I am able to telnet to 172.28.141.168 on port 453 from my desktop. IPv6 is enabled... am I missing something?
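One thing worth checking on 11.x: several BIG-IP shell utilities are not route-domain aware, so the connection may be attempted from route domain 0 regardless of where the target lives. A hedged sketch of two workarounds from the advanced (bash) shell, assuming the target sits in route domain 1 (the ID is hypothetical, and rdexec availability should be confirmed on your build):

    # run telnet inside route domain 1 instead of the default route domain
    rdexec 1 telnet 172.28.141.168 453

    # some utilities also accept the %ID route-domain suffix directly
    ping -c 2 172.28.141.168%1

The "Network is unreachable" line also shows the attempt going out over IPv6; testing with the plain IPv4 address first helps separate a route-domain problem from a missing IPv6 route.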
BIG-IP LTM sending TCP resets due to SSL handshake timeout?
Hi F5 gurus, we have an HTTPS file transfer running on a daily basis and we are experiencing a big problem here. The client is a Java program and the server is behind the F5. We are offloading SSL on the F5, so we use a client SSL profile with default settings (version 11.2.1 LTM, SSL handshake timeout = 10 sec). Tcpdump shows that the RSTs are generated by the F5. Per F5, allowing handshakes longer than 10 seconds makes the system more vulnerable to DoS attacks. The client also routes through many network devices before hitting the F5 BIG-IP. I have enabled rstcause.log and rstcause.pkt, which gives me the logs below:

    ltm 12-15 17:00:13 err lb1 tmm1[9547]: RST sent from server ip :443 to client ip :14720, [0x147b9e1:962] SSL handshake timeout exceeded
    ltm 12-15 17:00:13 err lb1 tmm1[9547]: RST sent from server ip :443 to client ip :14716, [0x147b9e1:962] SSL handshake timeout exceeded
    ltm 12-15 17:00:13 err lb1 tmm1[9547]: RST sent from server ip :443 to client ip :14718, [0x147b9e1:962] SSL handshake timeout exceeded
    ltm 12-15 17:00:13 err lb1 tmm[9546]: RST sent from server ip :443 to client ip :14719, [0x147b9e1:962] SSL handshake timeout exceeded
    ltm 12-15 17:00:14 err lb1 tmm1[9547]: RST sent from server ip :443 to client ip :14716, [0x147db9a:4315] TCP 3WHS rejected
    ltm 12-15 17:00:14 err lb1 tmm1[9547]: RST sent from server ip :443 to client ip :14718, [0x147db9a:4315] TCP 3WHS rejected
    ltm 12-15 17:00:14 err lb1 tmm[9546]: RST sent from server ip :443 to client ip :14719, [0x147db9a:4315] TCP 3WHS rejected

Please help me if you have seen this problem before. Thanks in advance
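For anyone hitting the same logs: the timer involved is the handshake-timeout setting on the client SSL profile (default 10 seconds). A hedged tmsh sketch of raising it on a custom profile rather than editing the default; the profile and virtual server names are hypothetical, and a longer window does widen the slow-handshake DoS exposure mentioned above, so size it carefully:

    # custom client SSL profile inheriting from clientssl, with a 30s handshake window
    tmsh create ltm profile client-ssl slow_handshake_clientssl defaults-from clientssl handshake-timeout 30

    # attach it to the virtual server in place of the default client SSL profile
    tmsh modify ltm virtual vs_https_transfer profiles replace-all-with { tcp { } slow_handshake_clientssl { context clientside } }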
How to use Ansible with Cisco routers

Quick Intro
For those who don't know, there is an Ansible plugin called network_cli to retrieve network device configuration for backup, inspection and even to execute commands. So, let's assume we have Ansible already installed and 2 routers. I used Debian Linux here and I had to install Python 3. I also installed pip, because I wanted to install a specific version of Ansible (2.5+) that would let me use the network_cli plugin. Note: we can list the available Ansible versions by just typing pip install ansible==. I also created a user named ansible, edited the Linux sudoers file with the visudo command, and gave the ansible user permission to run root commands without being prompted for a password.

Quick Set Up
This is my directory structure, and these are the files I used for this lab test. Note: we can replace cisco1.rodrigo.example with an IP address too. In Ansible, there is a default config file (ansible.cfg) where we store the global config, i.e. how we want Ansible to behave. We also keep the list of our hosts in an inventory file (inventory.yml here). There is a default folder (group_vars) where we can store variables that apply to any router we run Ansible against, and in this case that makes sense as my router credentials are the same. Lastly, retrieve_backup.yml is my actual playbook, i.e. where I tell Ansible what to do. Note: I manually logged in to cisco1.rodrigo.example and cisco2.rodrigo.example to populate the SSH known_hosts files, otherwise Ansible complains these hosts are untrusted.

Populating our Playbook file retrieve_backup.yml
Let's say we just want to retrieve the OSPF configuration from our Cisco routers. We can use ios_command to type in any command to a Cisco router and use register to store the output in a variable. Note: be careful with the indentation; I used 2 spaces here. We can then copy the content of the variable to a file in a given directory. In this case, we copied whatever is in the ospf_output variable to the ospf_config directory. From the playbook file we can work out that variables are referenced between {{ }}, and we might be wondering why we need to append stdout[0] to ospf_output, right? If you know Python, you might be interested in knowing a bit more about what's going on under the hood, so I'll clarify things a bit here. The variable ospf_output is actually a dictionary and stdout is one of its keys; in reality, ospf_output.stdout could also be written as ospf_output['stdout']. We add the [0] because the object retrieved by the stdout key is not a string, it's a list! And [0] just selects the first object in that list.

Executing our Playbook
I'll create the ospf_config directory first, and then we execute our playbook by issuing the ansible-playbook command. Now let's check that our OSPF config was retrieved. We can type in pretty much any IOS command we'd type on a real router, either to configure it or to retrieve its configuration. We could also append the date to the file name, but that's out of the scope of this article. That's it for now.
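The screenshots showing the actual files did not survive the page formatting, so here is a minimal sketch of what retrieve_backup.yml could look like based on the description above. The group name, task names, and the exact show command are assumptions; the variable name, the stdout[0] indexing, and the ospf_config destination come from the text. It assumes group_vars carries the shared credentials plus ansible_network_os: ios.

    ---
    - name: Retrieve OSPF configuration from Cisco routers
      hosts: routers
      gather_facts: no
      connection: network_cli

      tasks:
        - name: Collect the OSPF section of the running config
          ios_command:
            commands:
              - show running-config | section router ospf
          register: ospf_output

        - name: Save the output into the ospf_config directory
          copy:
            content: "{{ ospf_output.stdout[0] }}"
            dest: "ospf_config/{{ inventory_hostname }}_ospf.txt"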
Cisco ISE load-balancing and Change of Authorization (CoA)

First, let me clearly state that I do not have a Cisco background. I have no experience with the RADIUS protocol, and I am not familiar with the details of CoA, so I am not in a position to know whether what I'm being asked to do is appropriate/necessary/makes sense or not.

Our Cisco guys came to me asking for a RADIUS load-balancing VIP, with persistence based on CALLING-STATION-ID. I found https://devcentral.f5.com/questions/load-balance-cisco-ise-servers easily enough, so I created a wildcard UDP VIP with the iRule. But they came back with an additional CoA requirement. They claim that the ISE servers periodically send "a CoA packet" to the clients of the RADIUS VIP. They want the LTM to intercept these packets and SNAT them from the RADIUS VIP address, because they claim the clients of the RADIUS service will only accept CoA packets from the VIP address.

Apart from the link above, the only good resource on the subject I can find is https://supportforums.cisco.com/blog/153056/ise-and-load-balancing. I get somewhat lost in the terminology, but this statement seems important: "Each PSN gets listed individually in the Dynamic-Authorization (CoA). Use the real IP Address of the PSN, not the VIP." In the context of that document, it sounds to me like the "PSN" is also the pool member of the RADIUS VIP, and that we should be adding the IP address of the pool member in some CoA field on the clients of the RADIUS VIP. But again, not being familiar with RADIUS, I'm very uncertain.

Apart from the question of whether or not I can SNAT from a VIP address at all (which I highly doubt), does anyone have some insight into how to account for these RADIUS/CoA packets in a load-balancing context?
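Not an authoritative answer, but one commonly described pattern is a second, forwarding-style virtual server that catches CoA traffic sent by the PSNs (UDP 1700 by default on Cisco gear; 3799 is the RFC 5176 default) and SNATs it from the RADIUS VIP address via a SNAT pool, since a SNAT pool may contain the VIP's IP. A hedged tmsh sketch in which the VIP address, PSN subnet, and VLAN name are all placeholders:

    # SNAT pool containing the RADIUS VIP address (10.10.10.100 is a placeholder)
    tmsh create ltm snatpool radius_vip_snat members add { 10.10.10.100 }

    # wildcard UDP virtual catching CoA packets sourced from the PSN subnet
    tmsh create ltm virtual vs_ise_coa destination 0.0.0.0:1700 mask any ip-protocol udp source 10.20.30.0/24 translate-address disabled translate-port disabled source-address-translation { type snat pool radius_vip_snat } vlans-enabled vlans add { internal }

Whether the NAS devices actually require CoA to arrive from the VIP address is worth confirming against the Cisco guidance quoted above; the alternative it describes is listing each PSN's real IP in the Dynamic-Authorization configuration instead, in which case no CoA handling on the LTM is needed at all.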
F5 & Cisco ACI Essentials - Take advantage of Policy Based Redirect

Different applications and environments have unique needs for how traffic is to be handled. Some applications, due to the nature of their functionality or a business need, require that the application server(s) be able to see the real IP of the client making the request. When the request reaches the BIG-IP, it has the option to change the real IP of the request or to keep it intact. To keep it intact, the 'Source Address Translation' setting on the BIG-IP is set to 'None'. As simple as it may sound to toggle a setting on the BIG-IP, changing it significantly alters traffic flow behavior. Let's take an example with some actual values, starting with a simple setup of a standalone BIG-IP with one interface for all traffic (one-arm):

    Client - 10.168.56.30
    BIG-IP Virtual IP - 10.168.57.11
    BIG-IP Self IP - 10.168.57.10
    Server - 192.168.56.30

Scenario 1: With SNAT

    From client:           Src: 10.168.56.30   Dest: 10.168.57.11
    From BIG-IP to server: Src: 10.168.57.10 (Self IP)   Dest: 192.168.56.30

With this, the server responds back to 10.168.57.10 and the BIG-IP takes care of forwarding the traffic back to the client. Here the application server sees the IP 10.168.57.10, not the client IP.

Scenario 2: No SNAT

    From client:           Src: 10.168.56.30   Dest: 10.168.57.11
    From BIG-IP to server: Src: 10.168.56.30   Dest: 192.168.56.30

With this, the server responds back to 10.168.56.30, and here is where the complication comes in: the return traffic needs to go back to the BIG-IP and not directly to the real client. One way to achieve this is to set the default gateway of the server to the Self IP of the BIG-IP, so the server sends the return traffic to the BIG-IP. But what if the server's default gateway cannot be changed, for whatever reason? This is where Policy-Based Redirect helps: the default gateway of the server points to the ACI fabric, and the ACI fabric intercepts the return traffic and sends it over to the BIG-IP.
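For reference, switching between the two scenarios is a single setting on the virtual server. A hedged tmsh sketch using the addresses above (the virtual server and pool names are hypothetical):

    # scenario 1: SNAT (automap) - the server sees the BIG-IP self IP
    tmsh create ltm virtual vs_app destination 10.168.57.11:80 ip-protocol tcp pool app_pool source-address-translation { type automap }

    # scenario 2: no SNAT - the server sees the real client IP, so return traffic must be steered back to the BIG-IP
    tmsh modify ltm virtual vs_app source-address-translation { type none }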
With PBR in the picture, the advantages are threefold:

- The servers' default gateway does not need to point to the BIG-IP; it can point to the ACI fabric.
- The real client IP is preserved for the entire traffic flow.
- Server-originated traffic does not hit the BIG-IP, so there is no need to configure a forwarding virtual on the BIG-IP to handle it. If the volume of server-originated traffic is high, this avoids placing unnecessary load on the BIG-IP.

Before we get deeper into the topic of PBR, below are a few links to help you refresh on some of the Cisco ACI and BIG-IP concepts:

ACI fundamentals: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/aci-fundamentals/b_ACI-Fundamentals.html
SNAT and Automap: https://support.f5.com/csp/article/K7820
BIG-IP modes of deployment: https://support.f5.com/csp/article/K96122456#link_02_01

Now let's look at what it takes to configure PBR using a standalone BIG-IP Virtual Edition in one-arm mode. (A network diagram accompanies the original article for reference.) To use the PBR feature on APIC, a service graph is a must; see the APIC documentation for details on L4-L7 service graphs and for hands-on experience deploying a service graph (without PBR).

Configuration on APIC

1) Bridge domain 'F5-BD'
   Under Tenant->Networking->Bridge domains->'F5-BD'->Policy:
   - IP data-plane learning: Disabled

2) L4-L7 Policy-Based Redirect
   Under Tenant->Policies->Protocol->L4-L7 Policy Based Redirect, create a new one:
   - Name: 'bigip-pbr-policy'
   - L3 destinations: the BIG-IP Self IP and MAC
     - IP: 10.168.57.10
     - MAC: find the MAC of the interface the above Self IP is assigned to by logging into the BIG-IP (example: 00:50:56:AC:D2:81)

3) Logical device cluster
   Under Tenant->Services->L4-L7, create a logical device:
   - Managed: unchecked
   - Name: 'pbr-demo-bigip-ve'
   - Service type: ADC
   - Device type: Virtual (in this example)
   - VMM domain: choose the appropriate VMM domain
   - Devices: add the BIG-IP VM from the dropdown and assign it an interface
     - Name: '1_1', vNIC: 'Network Adaptor 2'
   - Cluster interfaces:
     - Name: consumer, concrete interface: Device1/[1_1]
     - Name: provider, concrete interface: Device1/[1_1]

4) Service graph template
   Under Tenant->Services->L4-L7->Service graph templates, create a service graph template. Give the graph a name ('pbr-demo-sgt') and then drag and drop the logical device cluster (pbr-demo-bigip-ve) to create the service graph:
   - ADC: one-arm
   - Route redirect: true

5) Click on the service graph created, go to the Policy tab, and make sure the connections for connectors C1 and C2 are set as follows:
   - Connector C1:
     - Direct connect: False (not mandatory to set to 'True', because PBR is not enabled on the consumer connector for the consumer-to-VIP traffic)
     - Adjacency type: L3
   - Connector C2:
     - Direct connect: True
     - Adjacency type: L3

6) Apply the service graph template
   Right-click on the service graph and apply it. Choose the appropriate consumer endpoint group ('App') and provider endpoint group ('Web'), and provide a name for the new contract. For the connector, select the following:
   - BD: 'F5-BD'
   - L3 destination: checked
   - Redirect policy: 'bigip-pbr-policy'
   - Cluster interface: 'provider'
   Once the service graph is deployed, it is in the applied state and the network path between the consumer, the BIG-IP, and the provider has been successfully set up on the APIC.

7) Verify the connector configuration for PBR. Go to Device Selection Policy under Tenant->Services->L4-L7, expand the menu, and click on the device selection policy deployed for your service graph.
For the consumer connector, where PBR is not enabled:
- Connector name: Consumer
- Cluster interface: 'provider'
- BD: 'F5-BD'
- L3 destination: checked
- Redirect policy: leave blank (no selection)

For the provider connector, where PBR is enabled:
- Connector name: Provider
- Cluster interface: 'provider'
- BD: 'F5-BD'
- L3 destination: checked
- Redirect policy: 'bigip-pbr-policy'

Configuration on BIG-IP

1) VLAN/Self IP/default route
   - Default route: 10.168.57.1
   - Self IP: 10.168.57.10
   - VLAN: 4094 (untagged; for a VE, tagging is taken care of by vCenter)

2) Nodes/pool/VIP
   - VIP: 10.168.57.11
   - Source Address Translation on the VIP: None

3) An iRule (at the end of this article) that can be helpful for debugging

A few differences in configuration when the BIG-IP is a Virtual Edition set up as a high-availability pair:

1) BIG-IP: set MAC masquerade (https://support.f5.com/csp/article/K13502)
2) APIC: logical device cluster
   - Promiscuous mode: enabled
   - Add both BIG-IP devices as part of the cluster
3) APIC: L4-L7 Policy-Based Redirect
   - L3 destinations: enter the floating BIG-IP Self IP and the MAC masquerade address

Configuration is complete, so let's take a look at the traffic flows:

1. Client -> F5 BIG-IP -> Server
2. Server -> F5 BIG-IP -> Client

In flow 2, when the traffic is returned from the server, ACI uses the Self IP and MAC that were defined in the L4-L7 redirect policy to send the traffic to the BIG-IP.

iRule to help with debugging on the BIG-IP:

    when LB_SELECTED {
        log local0. "=================================================="
        log local0. "Selected server [LB::server]"
        log local0. "=================================================="
    }
    when HTTP_REQUEST {
        set LogString "[IP::client_addr] -> [IP::local_addr]"
        log local0. "=================================================="
        log local0. "REQUEST -> $LogString"
        log local0. "=================================================="
    }
    when SERVER_CONNECTED {
        log local0. "Connection from [IP::client_addr] Mapped -> [serverside {IP::local_addr}] -> [IP::server_addr]"
    }
    when HTTP_RESPONSE {
        set LogString "Server [IP::server_addr] -> [IP::local_addr]"
        log local0. "=================================================="
        log local0. "RESPONSE -> $LogString"
"==================================================" } Output seen in /var/log/ltm on the BIG-IP, look at the event <SERVER_CONNECTED> Scenario 1: No SNAT -> Client IP is preserved Rule /Common/connections <HTTP_REQUEST>: Src: 10.168.56.30 -> Dest: 10.168.57.11 Rule /Common/connections <SERVER_CONNECTED>: Src: 10.168.56.30 Mapped -> 10.168.56.30 -> Dest: 192.168.56.30 Rule /Common/connections <HTTP_RESPONSE>: Src: 192.168.56.30 -> Dest: 10.168.56.30 If you are curious of the iRule output if SNAT is enabled on the BIG-IP - Enable AutoMap on the virtual server on the BIG-IP Scenario 2: With SNAT -> Client IP not preserved Rule /Common/connections <HTTP_REQUEST>: Src: 10.168.56.30 -> Dest: 10.168.57.11 Rule /Common/connections <SERVER_CONNECTED>: Src: 10.168.56.30 Mapped -> 10.168.57.10-> Dest: 192.168.56.30 Rule /Common/connections <HTTP_RESPONSE>: Src: 192.168.56.30 -> Dest: 10.168.56.30 References: ACI PBR whitepaper: https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739971.html Troubleshooting guide: https://www.cisco.com/c/dam/en/us/td/docs/switches/datacenter/aci/apic/sw/4-x/troubleshooting/Cisco_TroubleshootingApplicationCentricInfrastructureSecondEdition.pdf Layer4-Layer7 services deployment guide https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/4-x/L4-L7-services/Cisco-APIC-Layer-4-to-Layer-7-Services-Deployment-Guide-401/Cisco-APIC-Layer-4-to-Layer-7-Services-Deployment-Guide-401_chapter_011.html Service graph: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/4-x/L4-L7-services/Cisco-APIC-Layer-4-to-Layer-7-Services-Deployment-Guide-401/Cisco-APIC-Layer-4-to-Layer-7-Services-Deployment-Guide-401_chapter_0111.html https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/4-x/L4-L7-services/Cisco-APIC-Layer-4-to-Layer-7-Services-Deployment-Guide-401.pdf2.6KViews1like3CommentsConverting a Cisco ACE configuration file to F5 BIG-IP Format
Converting a Cisco ACE configuration file to F5 BIG-IP Format

In September, Cisco announced that it was ceasing development and pulling back on sales of its Application Control Engine (ACE) load-balancing modules. Customers of Cisco's ACE product line will now have to look for a replacement product to meet their load-balancing and application delivery needs.

One of the first questions that comes up when a customer starts looking into replacement products is upgradability: will the customer be able to import their current configuration into the new technology, or will they have to start from scratch? For smaller businesses, starting over can be a refreshing way to clean up some of the things you've been meaning to fix but weren't able to, for one reason or another. But for a large majority of users, starting over from nothing with a new product is a daunting task. To help those users considering a move to the F5 universe, DevCentral has included several scripts to assist with the configuration migration process. In our Codeshare section we created some scripts useful in converting ACE configurations into their respective F5 counterparts:

https://devcentral.f5.com/s/articles/cisco-ace-to-f5-big-ip
https://devcentral.f5.com/s/articles/Cisco-ACE-to-F5-Conversion-Python-3
https://devcentral.f5.com/s/articles/cisco-ace-to-f5-big-ip-via-tmsh

We also have scripts covering Cisco's CSS (https://devcentral.f5.com/s/articles/cisco-css-to-f5-big-ip) and CSM products (https://devcentral.f5.com/s/articles/cisco-csm-to-f5-big-ip) as well.

In this article, I'm going to focus on the "ace2f5-tmsh" script in the ace2f5.zip script library. The script takes an ACE configuration as input and creates a TMSH script that builds the corresponding F5 BIG-IP objects.

ace2f5-tmsh.pl

    $ perl ace2f5-tmsh.pl ace_config > tmsh_script

We could leave it at that, but I'll use this article to discuss the components of the ACE configuration and how they map to F5 objects.

ip
The ip object in the ACE configuration is defined like this:

    ip route 0.0.0.0 0.0.0.0 10.211.143.1

It equates to a tmsh "net route" command:

    net route 0.0.0.0-0 {
        network 0.0.0.0/0
        gw 10.211.143.1
    }

rserver
An "rserver" is basically a node containing a server address, plus an optional "inservice" attribute indicating whether it's active or not.

ACE configuration:

    rserver host R190-JOEINC0060
      ip address 10.213.240.85
    rserver host R191-JOEINC0061
      ip address 10.213.240.86
      inservice
    rserver host R192-JOEINC0062
      ip address 10.213.240.88
      inservice
    rserver host R193-JOEINC0063
      ip address 10.213.240.89
      inservice

It will be used to find the IP address for a given rserver hostname.

serverfarm
A serverfarm is an LTM pool, except that it doesn't have a port assigned to it yet.

ACE configuration:

    serverfarm host MySite-JoeInc
      predictor hash url
      rserver R190-JOEINC0060
        inservice
      rserver R191-JOEINC0061
        inservice
      rserver R192-JOEINC0062
        inservice
      rserver R193-JOEINC0063
        inservice

F5 configuration:

    ltm pool Insiteqa-JoeInc {
        load-balancing-mode predictive-node
        members {
            10.213.240.86:any { address 10.213.240.86 }
            10.213.240.88:any { address 10.213.240.88 }
            10.213.240.89:any { address 10.213.240.89 }
        }
    }

probe
A "probe" is an LTM monitor, except that it does not have a port.

ACE configuration:

    probe tcp MySite-JoeInc
      interval 5
      faildetect 2
      passdetect interval 10
      passdetect count 2

This will map to the tmsh "ltm monitor" command.
F5 configuration:

    ltm monitor tcp Insiteqa-JoeInc {
        defaults-from tcp
        interval 5
        timeout 10
        retry 2
    }

sticky
The "sticky" object is a way to create a persistence profile. First you tie the serverfarm to the persist profile, then you tie the profile to the virtual server.

ACE configuration:

    sticky ip-netmask 255.255.255.255 address source MySite-JoeInc-sticky
      timeout 60
      replicate sticky
      serverfarm MySite-JoeInc

class-map
A "class-map" assigns a listener, or virtual IP address and port number, which is used for the client side and server side of the connection.

ACE configuration:

    class-map match-any vip-MySite-JoeInc-12345
      2 match virtual-address 10.213.238.140 tcp eq 12345
    class-map match-any vip-MySite-JoeInc-1433
      2 match virtual-address 10.213.238.140 tcp eq 1433
    class-map match-any vip-MySite-JoeInc-31314
      2 match virtual-address 10.213.238.140 tcp eq 31314
    class-map match-any vip-MySite-JoeInc-8080
      2 match virtual-address 10.213.238.140 tcp eq 8080
    class-map match-any vip-MySite-JoeInc-http
      2 match virtual-address 10.213.238.140 tcp eq www
    class-map match-any vip-MySite-JoeInc-https
      2 match virtual-address 10.213.238.140 tcp eq https

policy-map
A policy-map of type loadbalance simply ties the persistence profile to the virtual. The "multi-match" attribute constructs the virtual server by tying a bunch of objects together.

ACE configuration:

    policy-map type loadbalance first-match vip-pol-MySite-JoeInc
      class class-default
        sticky-serverfarm MySite-JoeInc-sticky
    policy-map multi-match lb-MySite-JoeInc
      class vip-MySite-JoeInc-http
        loadbalance vip inservice
        loadbalance policy vip-pol-MySite-JoeInc
        loadbalance vip icmp-reply
      class vip-MySite-JoeInc-https
        loadbalance vip inservice
        loadbalance vip icmp-reply
      class vip-MySite-JoeInc-12345
        loadbalance vip inservice
        loadbalance policy vip-pol-MySite-JoeInc
        loadbalance vip icmp-reply
      class vip-MySite-JoeInc-31314
        loadbalance vip inservice
        loadbalance policy vip-pol-MySite-JoeInc
        loadbalance vip icmp-reply
      class vip-MySite-JoeInc-1433
        loadbalance vip inservice
        loadbalance policy vip-pol-MySite-JoeInc
        loadbalance vip icmp-reply
      class reals
        nat dynamic 1 vlan 240
      class vip-MySite-JoeInc-8080
        loadbalance vip inservice
        loadbalance policy vip-pol-MySite-JoeInc
        loadbalance vip icmp-reply

F5 configuration:

    ltm virtual vip-Insiteqa-JoeInc-12345 {
        destination 10.213.238.140:12345
        pool Insiteqa-JoeInc
        persist my_source_addr
        profiles {
            tcp { }
        }
    }
    ltm virtual vip-Insiteqa-JoeInc-1433 {
        destination 10.213.238.140:1433
        pool Insiteqa-JoeInc
        persist my_source_addr
        profiles {
            tcp { }
        }
    }
    ltm virtual vip-Insiteqa-JoeInc-31314 {
        destination 10.213.238.140:31314
        pool Insiteqa-JoeInc
        persist my_source_addr
        profiles {
            tcp { }
        }
    }
    ltm virtual vip-Insiteqa-JoeInc-8080 {
        destination 10.213.238.140:8080
        pool Insiteqa-JoeInc
        persist my_source_addr
        profiles {
            tcp { }
        }
    }
    ltm virtual vip-Insiteqa-JoeInc-http {
        destination 10.213.238.140:http
        pool Insiteqa-JoeInc
        persist my_source_addr
        profiles {
            tcp { }
            http { }
        }
    }
    ltm virtual vip-Insiteqa-JoeInc-https {
        destination 10.213.238.140:https
        profiles {
            tcp { }
        }
    }

Conclusion
If you are considering migrating from Cisco's ACE to F5, I'd recommend you take a look at the Cisco conversion scripts to assist with the conversion.
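One object the generated virtuals reference but the article never defines is the my_source_addr persistence profile that the sticky section maps to. A hedged sketch of creating it with tmsh; note that ACE sticky timeouts are expressed in minutes, so "timeout 60" should translate to 3600 seconds on the BIG-IP side if my reading of the ACE docs is right (verify against your own configuration):

    # source-address persistence profile matching the ACE "sticky ip-netmask ... address source" object
    tmsh create ltm persistence source-addr my_source_addr defaults-from source_addr timeout 3600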
Trunk / VPC Port-Channel not working properly with Nexus 9K / 2K (FEX): Spanning-tree involved

Hello DevCentral, I'll present to you an odd behavior involving two Nexus 9Ks (9.2.1) with Nexus 2Ks as FEXes, to which two BIG-IP i4600s (12.1.4) are connected.

Our setup:
- The two BIG-IPs are configured in a device group.
- Each BIG-IP is connected to two Nexus 2Ks (FEX) in the same aggregate, using vPC technology on the Nexus.
- The configuration matches this KB: https://support.f5.com/csp/article/K13142
- Spanning tree is disabled on the interfaces and trunk on the BIG-IP.
- Flow control is disabled on the BIG-IP and the Nexus.
- The BIG-IPs are connected to multiple VLANs using the "Tagged Interfaces" option (802.1Q tag on packets).

Observations with this spanning-tree setup on the vPC configured on the Nexus:

    spanning-tree port type edge
    spanning-tree bpduguard enable

Observation 1: when every interface is up, everything works properly.
Observation 2: if I shut one or the other interface of Port-channel1 on the switch, everything is OK. If I shut both interfaces of Port-channel1, the aggregate is seen as "down". If I "no shut" interface 1 of Port-channel1, the aggregate is rebuilt and works after a few seconds.
Observation 3: if I shut one or the other interface of Port-channel1 on the switch, everything is OK. If I shut both interfaces of Port-channel1, the aggregate is seen as "down". If I "no shut" interface 2 of Port-channel1, the aggregate is rebuilt, but packets are not forwarded to/from this interface.

Observations with this spanning-tree setup on the vPC configured on the Nexus (notice the word "trunk" added):

    spanning-tree port type edge trunk
    spanning-tree bpduguard enable

Observation 1: when every interface is up, everything works properly.
Observation 2: if I shut one or the other interface of Port-channel1 on the switch, everything is OK. If I shut both interfaces of Port-channel1, the aggregate is seen as "down". If I "no shut" interface 1 of Port-channel1, the aggregate is rebuilt and works after a few seconds.
Observation 3: if I shut one or the other interface of Port-channel1 on the switch, everything is OK. If I shut both interfaces of Port-channel1, the aggregate is seen as "down". If I "no shut" interface 2 of Port-channel1, the aggregate is rebuilt and works after a few seconds.

General observations:
- There are no errors detected on the interfaces/port-channel on the Nexus.
- There are no errors detected on the interfaces/port-channel on the BIG-IP.

Conclusion:
- "spanning-tree port type edge" is not working for this setup.
- "spanning-tree port type edge trunk" is working for this setup.

Question: can someone explain what's happening here? Regards, my fellow companions.
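A hedged explanation of the difference, for anyone landing here later: on NX-OS, the edge designation (PortFast behavior) is only honored on access ports unless the "trunk" keyword is added. Since these vPC member ports are trunks carrying the tagged BIG-IP VLANs, plain "spanning-tree port type edge" is effectively ignored, and a recovering port walks through the normal STP states after a flap, which would be consistent with the forwarding problem in the first observation set. The pairing that worked, alongside a sketch of the matching BIG-IP trunk per K13142 (interface numbers and names are placeholders, and LACP is assumed):

    ! Nexus side, vPC member port-channel toward the BIG-IP
    interface port-channel1
      switchport mode trunk
      spanning-tree port type edge trunk
      spanning-tree bpduguard enable
      vpc 1

    # BIG-IP side, matching trunk with LACP enabled
    tmsh create net trunk nexus_po1 interfaces add { 1.1 1.2 } lacp enabled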
F5 LTM Transparent mode configuration

Dear all, I am trying to configure a BIG-IP LTM device to work in transparent mode in order to replace a Cisco ACE device. I have already tried several configurations, but the results are not as good as they should be. I used the following guide: http://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/ltm_implementation/sol_vlans.html1062318

As a result, I am not able to ping the external and internal networks; it looks as if the LTM blocks the entire flow. Any help will be appreciated! Thanks in advance!
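For anyone comparing notes: the guide linked above describes bridging two VLANs with a VLAN group. A hedged tmsh sketch of the basic objects involved; the VLAN names, tags, interfaces, and self IP are placeholders, and the transparency attribute may be named slightly differently depending on version:

    # one VLAN per side of the bridge
    tmsh create net vlan external tag 4092 interfaces add { 1.1 { untagged } }
    tmsh create net vlan internal tag 4093 interfaces add { 1.2 { untagged } }

    # bridge them with a VLAN group in transparent mode (L2 headers preserved)
    tmsh create net vlan-group vg_bridge vlans add { external internal } mode transparent

    # a self IP on the VLAN group for health monitors and administrative traffic
    tmsh create net self 10.10.10.5/24 vlan vg_bridge allow-service default

One common gotcha with this layout: routed traffic still needs a listener to pass through the box, so a wildcard forwarding (IP) virtual server is typically required before traffic across the bridge will flow.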
F5 and Cisco ACI Essentials - Design guide for a Single Pod APIC cluster

Deployment considerations
It is usually an easy decision to make BIG-IP part of your ACI deployment, as BIG-IP is a mature, feature-rich ADC solution. Where the time is spent is in nailing down the design and the deployment options for the BIG-IP in the environment. Below we discuss a few of the most commonly asked questions.

SNAT or no SNAT
There are various options you can use to insert the BIG-IP into the ACI environment. One way is to use the BIG-IP as a gateway for servers, or as a routing next hop for routing instances. Another option is to use Source Network Address Translation (SNAT) on the BIG-IP; however, with SNAT enabled, visibility into the real source IP address is lost. If preserving the source IP is a requirement, then ACI's Policy-Based Redirect (PBR) can be used to make sure the return traffic goes back to the BIG-IP.

BIG-IP redundancy
F5 BIG-IP can be deployed in different high-availability modes. The two common BIG-IP deployment modes are active-active and active-standby. Various design considerations, such as endpoint movement during failovers, MAC masquerade, source-MAC-based forwarding, Link Layer Discovery Protocol (LLDP), and IP aging, should also be taken into account for each of the deployment modes.

Multi-tenancy
Multi-tenancy is supported by both Cisco ACI and F5 BIG-IP, in different ways. There are a few ways that multi-tenancy constructs on ACI can be mapped to multi-tenancy on BIG-IP. The constructs revolve around tenants, virtual routing and forwarding (VRF), route domains, and partitions. Multi-tenancy can also be based on the BIG-IP form factor (appliance, Virtual Edition, and/or virtual clustered multiprocessor (vCMP)).

Tighter integration
Once a design option is selected, the questions become operational: what more can be done from an operations or automation perspective now that we have a BIG-IP and ACI deployment? The F5 ACI ServiceCenter is an application developed on the Cisco ACI App Center platform built for exactly that purpose. It is an integration point between the F5 BIG-IP and Cisco ACI that gives an APIC administrator a unified way to manage both L2-L3 and L4-L7 infrastructure. Once day-0 activities are performed and the BIG-IP is deployed within the ACI fabric using whichever design option was selected for your environment, the F5 ACI ServiceCenter can be used to handle day-1 and day-2 operations. The day-1 and day-2 operations provided by the application are well suited for both new/greenfield and existing/brownfield deployments of BIG-IP and ACI. The integration is loosely coupled, which allows the F5 ACI ServiceCenter to be installed or uninstalled with no disruption to traffic flow, as well as no effect on the F5 BIG-IP and Cisco ACI configuration. Check here to find out more.

All of the above topics and more are discussed in detail here in the single pod white paper.
Unmanaged Mode - what it means for ACI and BIG-IP integration [End of Life]

The F5 and Cisco APIC integration based on the device package and iWorkflow is End of Life. The latest integration is based on the Cisco App Center app named 'F5 ACI ServiceCenter'. Visit https://f5.com/cisco for updated information on the integration.

The adoption of Cisco ACI with the APIC controller continues to gain traction in the market. With their latest major APIC release, 1.2.1i, Cisco has streamlined how ADCs are connected to the fabric. There have traditionally been two methods of connecting services:

1. Service insertion with a device package and a service graph (Managed)
2. Connecting a device as an endpoint device into an EPG (Unmanaged)

What Cisco has done is simplify how a services device can be connected into the fabric as a device that is Unmanaged by the APIC. This is known as "Unmanaged Mode", versus the "Managed Mode" where a device package is used. Instead of the usual multi-step manual configuration process for specifying the network configuration in APIC, the attachment has been consolidated into the service graph. Before, it was necessary to manually bind the VLAN statically to the EPG (provider and consumer) and assign the physical domain to the EPG. It was also required to bind contracts between multiple provider and consumer EPGs. Now, all you have to do is go into the service graph and specify connectivity, just as if you were building a managed service graph. By doing this, there is now one common location and workflow for configuring services; the process is simplified.

Advantages:
- Service graph representation with Unmanaged and Managed modes mixed (some devices managed by APIC, some devices NOT managed by APIC)
- Unified view of a service chain

Why it is needed:
- Customers have requested an "Unmanaged" mode for their custom devices
- The need to manage their own security policies outside of the networking team
- The need to use their existing orchestration infrastructure for a particular device in the chain
- The need to use advanced vendor-specific functionality

What this means for BIG-IP integration: the BIG-IP is attached as an EPG, but this mode can now be represented within a service graph.

Difference between Managed and Unmanaged mode:

    Mode                                             | Goal
    -------------------------------------------------|-----------------------------------------------------
    Unmanaged Mode (device managed externally)       | Service chaining (traffic redirection) through ACI
    Managed Mode (device managed by device package)  | Service chaining (traffic redirection) through ACI,
                                                     | plus device configuration through APIC
                                                     | (policy-based configuration)

Some prerequisites for deploying an Unmanaged logical device cluster:
- BIG-IP: define VLANs, Self IPs, and a trunk (if needed)
- APIC: a VLAN pool with static allocation, and fabric access policies to ensure physical connectivity to the BIG-IP

Click here to view a video with more details on how to deploy a BIG-IP in Unmanaged mode on APIC: https://www.youtube.com/watch?v=OJPEYzNGD3A

Once deployed as an Unmanaged device cluster with traffic redirection through ACI, configure your BIG-IP with nodes, pools, monitors, virtual servers, and all other features required by your application, as you always do, using the BIG-IP GUI/CLI etc. (not through Cisco ACI).

References:
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/L4-L7_Services_Deployment/guide/b_L4L7_Deploy_ver121x/b_L4L7_Deploy_ver121x_chapter_010000.pdf
http://blogs.cisco.com/datacenter/new-innovations-for-l4-7-network-services-integration-with-ciscos-aci-approach
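To make the BIG-IP-side prerequisites above concrete, a hedged tmsh sketch; the trunk name, interfaces, VLAN tag, and addressing are placeholders chosen for illustration, with the tag expected to match a VLAN from the statically allocated APIC VLAN pool:

    # trunk toward the ACI leaf pair (if link aggregation is used)
    tmsh create net trunk aci_trunk interfaces add { 1.1 1.2 } lacp enabled

    # VLAN carrying a tag from the static APIC VLAN pool
    tmsh create net vlan aci_vlan_510 tag 510 interfaces add { aci_trunk { tagged } }

    # self IP on that VLAN for fabric-facing connectivity
    tmsh create net self 10.10.10.5/24 vlan aci_vlan_510 allow-service default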