Forum Discussion
vCMP Failover Scenarios
Thanks for the diagram. I believe I am going to end up with one vPC per chassis, multi-homing each blade to both N7Ks. Prior to this design, I had separate physical interfaces (and vPCs) for internal and external traffic. I am now contemplating consolidating these onto the same physical interfaces and vPCs, which will free up the additional 10Gb interfaces on the N7Ks that I need for multihoming each blade.
Each connected interface on the chassis will be a member of the same trunk. Aggregate bandwidth available to the trunk will of course be determined by the number of physical interfaces spanning the blades of the entire chassis (e.g., two blades with two 10Gb interfaces each = 40Gbps).
This will provide switch redundancy even for the LTM instances that span only one blade, since each blade will be connected to both N7Ks.
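For illustration, a minimal tmsh sketch of that consolidated trunk on the vCMP host, spanning interfaces on both blades (the trunk name and the blade/port interface names are assumptions, not from this thread):

    # Assumed names: trunk "vpc-trunk", blade/port interfaces 1/1.1, 1/1.2, 2/1.1, 2/1.2
    # One LACP trunk spanning both blades; pairs with the single vPC on the N7K side
    tmsh create net trunk vpc-trunk interfaces { 1/1.1 1/1.2 2/1.1 2/1.2 } lacp enabled

With four 10Gb members across two blades, that gives the 40Gbps aggregate described above.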
- Steve_M__153836, Jun 18, 2014 (Nimbostratus)
That sounds like a good plan. One nice thing about the vCMP functionality is that you can trunk all the VLANs you want over one vPC and then decide which LTM instance gets which VLAN. This DevCentral thread (https://devcentral.f5.com/s/feed/0D51T00006i7MMdSAM) has a very useful piece of info at the end regarding what you name your vPC and how it is configured on the F5 (the names must match).
- Josh_41258, Jun 18, 2014 (Nimbostratus)
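As a sketch of what Josh describes (tagging all the VLANs onto the single trunk on the vCMP host, then handing specific VLANs to a guest), assuming hypothetical VLAN names, tags, and a guest called ltm-guest1:

    # Assumed VLAN names/tags and guest name; run on the vCMP host
    tmsh create net vlan external tag 100 interfaces { vpc-trunk { tagged } }
    tmsh create net vlan internal tag 200 interfaces { vpc-trunk { tagged } }
    # Expose only the VLANs this particular LTM instance should see
    tmsh modify vcmp guest ltm-guest1 vlans add { external internal }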
I have never had any issues with my names or IDs. I simply configure a trunk on the BIG-IP and add interfaces to it. I'll have to give that thread a read.
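That "add interfaces to it" step, for example when an extra blade is installed, would look something like this (the blade 3 port names are an assumption):

    # Hypothetical example: grow the existing trunk with ports from a newly added blade 3
    tmsh modify net trunk vpc-trunk interfaces add { 3/1.1 3/1.2 }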