Forum Discussion

Franky-frank-reg7
Dec 19, 2022
Solved

Configuring pool members in a separate subnet from the self IP on a virtual server

I have a requirement to configure a virtual server on an on-prem BIG-IP LTM appliance where one of the pool members is in the cloud. Can someone please comment on how to approach this?

Until now, we've only had to configure virtual servers for devices that are in the same subnet as the self IP of the F5. In this case, because one of the pool members is in the cloud, that device will not be on the same subnet as the self IP of the on-prem F5. Most of our virtual servers are configured with SNAT, so I would imagine the device in the cloud needs a route back to the self IP of the appliance, but I wanted to triple-check whether anything else needs to be factored into this design.
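For context, a minimal sketch of how our on-prem virtual servers are typically built (the names and addresses below are placeholders, not our actual config):

    # Hypothetical pool and virtual server using SNAT automap
    tmsh create ltm pool app_pool members add { 10.10.10.11:443 10.10.10.12:443 } monitor https
    tmsh create ltm virtual vs_app destination 192.0.2.10:443 ip-protocol tcp profiles add { tcp } pool app_pool source-address-translation { type automap }

The question is what changes, beyond routing, when one of those pool members lives in a cloud subnet instead.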

 

 


5 Replies

  • Hello Franky-frank-reg7,
    I want to say that F5 gives you the flexibility to implement exactly the kind of scenario you describe between
    the cloud and on-prem.

    > Firstly, yes, you can achieve this.
    You can configure virtual servers and self IPs in different subnets or in the same subnet, and the same goes for pool members and self IPs: they can be in the same subnet or in different subnets, even if your F5 appliance is in one data center and the servers are in another.

    > Secondly, the only thing that controls this is routing. You can achieve your scenario with proper routing between on-prem and the cloud, as long as your F5 appliance has the correct routes configured to reach the servers.

    > Third, I have previously done an implementation where the virtual servers, external self IPs, pool members, and internal self IPs all sat on a single interface (a "one-arm" deployment), and it worked well.
    All it needed was careful VLAN and routing planning to control the direction of traffic in and out of my F5 appliance.
    So go ahead, you can get this working, but make sure every route between the on-prem and cloud sites is in place; a rough sketch follows below.
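    As an illustration only (the subnet and gateway below are hypothetical placeholders, not your addresses), the route on the BIG-IP toward the cloud pool members might look something like this:

        # Hypothetical: let the BIG-IP reach the cloud pool-member subnet
        tmsh create net route to_cloud_members network 10.20.0.0/24 gw 192.168.1.254

    The cloud side then needs the mirror image: a route (or cloud route-table entry) that sends traffic destined for the F5 self IP / SNAT addresses back toward the on-prem site.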

    • Franky-frank-reg7

      Thanks for the prompt reply, Mohammed. Can you be a bit more specific? Are you saying the routing between the self IP and the pool members in the cloud/separate data center needs to be in place?

      Just trying to understand specifically what you're referring to.

       

      • Mohamed_Ahmed_Kansoh

        Hello Franky-frank-reg7,

        Yes, you need to define a return route from the pool member network (cloud or on-prem) back to the F5 internal self IP network (the one you use for SNAT),

        so I agree with mihaic's reply.

        > I was deliberately generic in my first reply to show that you can achieve this kind of approach with correct routing between the F5 and the other networks that communicate with it.

        > In most F5 deployments, we put the F5 in between the external user networks and the internal server networks (the pool members).

        So you need to add routes in the external networks (where the user traffic comes from) pointing to the F5 virtual servers, and return routes on the F5 pointing back to the external networks.

        The same applies between the F5 and the servers, even when they are not in the same data center: you need routes on the F5 that direct traffic to the servers, and return routes from the server network (the pool members) pointing back to the F5 internal self IPs; a minimal sketch follows below.
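        For illustration, assuming the F5 self IP / SNAT subnet is 192.168.1.0/24 and the cloud member sits behind a gateway of 10.20.0.1 (all of these addresses are hypothetical), the return route on a Linux pool member in the cloud would look roughly like this; in a public cloud you would usually express the same thing as a route-table entry instead:

            # Hypothetical return route from the cloud member back to the F5 SNAT/self IP subnet
            ip route add 192.168.1.0/24 via 10.20.0.1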

        If it's not clear, let me know.

         

  • Yes, that is what Mohammed is saying.

    You need the SNAT IP (self IP) to have a route to the pool member subnet, and the pool member needs to have a route back to this SNAT IP.

    The SNAT IP can be a floating IP, a self IP, or a SNAT pool; I don't know which you are using.
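    For example, a minimal sketch of putting the SNAT addresses in a dedicated SNAT pool (the names and addresses are hypothetical placeholders):

        # Hypothetical SNAT pool, so the cloud side only needs a return route for these addresses
        tmsh create ltm snatpool cloud_snat members add { 192.168.1.50 192.168.1.51 }
        tmsh modify ltm virtual vs_app source-address-translation { type snat pool cloud_snat }

    With SNAT automap instead, the BIG-IP sources the traffic from the egress (floating) self IP, so the return route on the cloud side would point at that self IP subnet.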