Forum Discussion

Domai
Jan 28, 2016

External VLAN and Internal VLAN LTM communication

Guys, I just want to understand how External vs. Internal VLAN communication happens in LTM. Let's say I have an LTM with an external VLAN and an internal VLAN. When I log into the management IP, I am able to ping any server on a VIP; I get that. But since this is deny-by-default, why am I able to ping a server on the internal VLAN when I SSH into the F5's self IP, which is on the external VLAN? I was told that the subsystem that hosts the management IP is a separate OS, and that self and floating IPs run on a different OS in the F5 architecture, like AOM or EDU. Is that incorrect?


1 Reply

  • The kernel is Linux, the operating system is CentOS, and the BigIP software runs primarily under a process called tmm (Traffic Management Microkernel). There are also additional supporting processes, such as apache, which runs under the Linux kernel and provides the management GUI, and openssh, which provides access to the command-line interface.


    Linux only has drivers for the management interface (eth0), and tmm only has drivers for the remaining ethernet interfaces (which I'll call "tmm interfaces"). tmm presents stub drivers to Linux to make tmm VLANs appear as ethernet interfaces to the Linux kernel, but Linux is actually just talking to tmm, and tmm sends those packets out the physical interfaces.
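    You can see this split from the BigIP command line. A sketch, assuming a box with VLANs named "external" and "internal" (the names are illustrative):

    ```
    # Linux only owns eth0; each tmm VLAN shows up as a stub interface
    ip addr show             # eth0 plus one interface per tmm VLAN (external, internal, ...)
    tmsh show net interface  # the physical interfaces (1.1, 1.2, ...) that tmm drives
    ```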


    The Linux kernel has an IP routing table that is separate from tmm's routing table, but it is populated with routes from tmm. You can configure routes that only appear in the Linux routing table with the "sys management-route" tmsh command. Because of this, commands run from the Linux host can see both the Linux and tmm routes, and depending on what those routes are, you would typically be able to reach pool members using Linux command-line tools such as ping and curl.
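    For example (the route name, network, and gateway below are placeholders, not a recommended config):

    ```
    # A route visible only to the linux host / management plane
    tmsh create sys management-route mgmt_net network 10.10.0.0/16 gateway 192.168.1.254

    # Compare the two tables:
    ip route show        # linux table: management routes plus routes pushed in from tmm
    tmsh show net route  # tmm's own routing table
    ```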


    A packet arriving on the management interface (eth0) is processed by the Linux kernel and handled by whatever processes are running in Linux userland (for example, apache, sshd, named, etc.).


    When a packet arrives on a tmm interface, it is processed by tmm, and tmm looks for a matching listener or self-ip address. If the packet matches a listener (a virtual address, aka a virtual-server destination), and arrived on a VLAN that the listener is configured to listen on, then the packet is processed as per the configuration of that virtual server.
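    The VLAN binding is part of the virtual server definition. A minimal sketch, assuming hypothetical names (vs_web, web_pool) and an example address:

    ```
    # A listener that only accepts traffic arriving on the 'external' VLAN
    tmsh create ltm virtual vs_web destination 203.0.113.10:80 \
        ip-protocol tcp pool web_pool vlans-enabled vlans add { external }
    ```

    A packet to 203.0.113.10:80 arriving on any other VLAN would not match this listener.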


    If the incoming packet matches a self-ip address on that VLAN, and the port lockdown setting on the self-ip allows it, then the packet is internally delivered to the Linux host, where apache is running, and you get the same management GUI you would see if you accessed the management IP. Typically, public-facing self-ip addresses are configured with a port lockdown of 'allow-service none' to block access to the management UI.
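    Port lockdown is set per self-ip. A sketch, assuming self-ips named external_selfip and internal_selfip (an example policy, not a recommendation):

    ```
    # Public-facing self-ip: answer no management services at all
    tmsh modify net self external_selfip allow-service none

    # Internal self-ip: allow only ssh
    tmsh modify net self internal_selfip allow-service replace-all-with { tcp:22 }
    ```

    This is why the original poster could SSH to the external self-ip at all: its lockdown setting permitted tcp/22.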


    Otherwise the packet is dropped. The "default deny" is not so much a security posture; it's more that if there's no virtual or self-ip to handle the packet, the BigIP has no way to handle it. It will not act like a router unless it has been explicitly configured to do so (with a forwarding virtual, for example).
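    A forwarding virtual looks like this (names are illustrative; in practice you would scope the destination down rather than use a wildcard):

    ```
    # An IP-forwarding virtual that makes the BigIP route traffic
    # arriving on the 'internal' VLAN toward any destination
    tmsh create ltm virtual vs_forward destination 0.0.0.0:any \
        ip-forward profiles add { fastL4 } vlans-enabled vlans add { internal }
    ```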


    Does that help?