VIPRION, vCMP, TMM and DAG - how it works
Hi, I tried to find a definitive answer in the KB as well as in posts on DC, but I am still not sure if I get things right.

Scenario:
- VIPRION with two blades
- Trunk created including one port from each blade - let's say it's used by the ext VLAN
- vGuest spanning two blades, set with 2 vCPU per slot
- vGuest will consist of two VMs (one per blade), each with 2 vCPU - vGuest total vCPU = 4

According to everything I found, that will give 4 TMM processes - 1 per vCPU (treated as a core, I guess). I don't know how that translates to TMM instances - can I assume we will have 4 TMM instances as well?

First question: how does DAG perform distribution in relation to the vGuest setup - 1 vGuest (4 TMM processes) = 2 VMs (2 x 2 TMMs)? Is DAG treating the vGuest as one entity and distributing connections among all 4 TMMs, or just among the TMMs on a given VM?

In other words, let's say a new connection was directed by LACP on the switch to a blade 1 interface. This is a new connection, so I assume DAG needs to assign it to a TMM process/instance. Will it consider only the TMMs running on the VM on blade 1, or all TMMs of the vGuest?

If it considers all of them, and a TMM on the blade 2 VM is selected, then I assume the VM (or DAG and HSB) on blade 1 will use the chassis backplane to pass traffic to the TMM running on the VM on blade 2 - is that right? If so, will returning traffic be passed back via the backplane to the VM on blade 1 and then out the blade 1 interface to the switch, or will it go back directly via the interface on blade 2? If the second option is true, won't that be an issue for LACP? Traffic from the switch to the blade is hashed (assuming a Src/Dst IP:port hash) to the blade 1 interface, while traffic from the VIPRION goes back via the blade 2 interface link - even though the hash is the same.

If DAG is distributing connections between all vGuest TMMs, how does it decide to which VM traffic should be sent - by checking load on the VMs, or round-robin so each new TCP/UDP port hash is directed to a new TMM?

Piotr
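Conceptually, the disaggregation step being asked about can be modeled as a stateless hash over the connection tuple, taken modulo the number of TMMs, so every packet of a flow lands on the same TMM no matter which blade's link LACP delivered it to. The sketch below is a minimal Python illustration of that idea only - it is not F5's actual DAG algorithm, and the hash function, tuple fields, and TMM count are assumptions for the example:

```python
# Illustrative model only - the real DAG hash is internal to F5 and is
# NOT reproduced here. This just shows the principle: a stateless hash
# of the connection 4-tuple selects one TMM out of all TMMs, so a given
# flow is always handled by the same TMM.
import hashlib

NUM_TMMS = 4  # assumed vGuest total: 2 VMs x 2 TMMs each

def dag_pick_tmm(src_ip, src_port, dst_ip, dst_port, num_tmms=NUM_TMMS):
    """Map a connection 4-tuple to a TMM index (0..num_tmms-1)."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_tmms

# The same flow always hashes to the same TMM, regardless of the
# ingress blade...
flow = ("10.0.0.5", 49152, "192.0.2.10", 443)
assert dag_pick_tmm(*flow) == dag_pick_tmm(*flow)

# ...while many distinct flows spread across all TMMs (including TMMs
# on the other blade, which would be reached over the backplane).
counts = [0] * NUM_TMMS
for port in range(49152, 49152 + 1000):
    counts[dag_pick_tmm("10.0.0.5", port, "192.0.2.10", 443)] += 1
print(counts)
```

Under this model the answer to the "load or round-robin" question would be neither: TMM selection is deterministic per flow, and even distribution is a statistical property of the hash, not a scheduling decision.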
Exchange 2010 load balanced via BIG-IP

Hello guys,

Currently we have our Exchange (2x CAS servers) configured via BIG-IP. The Exchange VLAN is stretched up to the F5, so the VS and the 4x CAS servers sit within the same VLAN/subnet, which works fine.

I'm working on a migration to a new/separate BIG-IP appliance and was wondering what the downside/disadvantage is of having the Exchange VS in a different VLAN/subnet. I know there is an option to configure this on the F5 via the iApp ("Same subnet for BIG-IP virtual servers and Client Access Servers" or "Different subnet for BIG-IP virtual servers and Client Access Servers"), but I'm not too sure everything will work as it does right now. The CAS servers' default gateway is pointed to the VLAN interface configured on our main router (NOT a VS sitting on the F5), and I would like to keep it that way.

I'm asking because I have recently been told that the DNS record for autodiscovery.domainname.co.uk has to point to an IP address which lives in the same VLAN/subnet as the IP addresses of the CASes, otherwise the DAG will not work... Has anyone tried that?

I would much prefer to have only 2x VLANs configured on the F5 appliances (Internal VLAN & External VLAN) rather than 3x.
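For the different-subnet scenario above, the usual caveat is return routing: if the CAS servers keep their default gateway on the router, replies would route around the BIG-IP unless the virtual server translates the client source address, which is why a SNAT (for example automap) is generally paired with this design. A hedged tmsh sketch of that shape, where all object names, IPs, and ports are invented for illustration and are not from the original posts:

```
# Hypothetical example only - names, IPs and ports are made up.
# SNAT automap rewrites the client source to a BIG-IP self IP, so CAS
# return traffic comes back to the BIG-IP instead of going straight to
# the client via the CAS default gateway on the router.
create ltm pool cas_pool members add { 10.10.20.11:443 10.10.20.12:443 } monitor https
create ltm virtual exchange_vs destination 10.10.30.100:443 ip-protocol tcp pool cas_pool profiles add { tcp } source-address-translation { type automap }
```

With SNAT in place the virtual server address does not need to live in the CAS subnet, which is what makes the 2-VLAN design plausible; the trade-off is that the CAS servers no longer see the real client IP.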