load balance
Load balancer help setup needed
Hi, I'm very new to LineRate (discovered it this morning) and I am trying to load balance between two Windows 2012 servers with web addresses. Each of them actually runs IBM's Integration Broker, listening on port 7800. I have set up the LineRate device in our VMware environment; the device and the two servers are on the same subnet. When I do an XML POST to each server individually I get a timestamp back, but when I POST to the virtual IP I get XML rubbish back. I have set the real server addresses and ports to 7800, but what I can't figure out is how the virtual IP knows to go to port 7800. PS: it all looks good in the GUI, and everything shows as online. Any help greatly appreciated... Steve.
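For what it's worth, here is a quick way to see the VIP/port relationship, as a sketch with placeholder addresses (10.0.0.11 for a real server and 10.0.0.100 for the virtual IP; neither comes from the post): the proxy accepts the client's connection on whatever port the virtual IP listens on, then opens its own connection to a real server on 7800, so the two ports do not have to match.

    # Placeholder addresses; request.xml is a hypothetical payload file.
    # Direct to one real server on its own port:
    curl -X POST --data @request.xml http://10.0.0.11:7800/
    # Via the virtual IP, on whatever port the VIP itself listens on:
    curl -X POST --data @request.xml http://10.0.0.100:80/

If the direct POST works but the VIP returns garbage, the usual suspects are the VIP's listening port and service type rather than the real-server port mapping.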
iRule for load balancing to different virtual servers depending on the URI path

Hi Guys, I have three virtual servers to configure on our LTMs, which are running version 15.1.7. One virtual server faces the client (let's say VS-A) and fronts two virtual servers (let's say VS-B and VS-C) that should be load balanced. VS-B and VS-C need to be load balanced behind VS-A, with the incoming traffic classified by URI path. The condition is this: if the URI contains /aa, /bb, or /cc, it should be forwarded and load balanced to VS-B and VS-C. I tried to make an iRule like this:

    when HTTP_REQUEST {
        if { [HTTP::uri] contains "/aa,/bb,/cc" } {
            virtual VS_B
        } else {
            virtual VS_C
        }
    }

But the result is that traffic from the client always goes to VS-B, so the load balancing isn't happening. I don't know whether this can be done with iRules or not; since I am not an expert at writing iRules, can anyone suggest an iRule that makes the VIP work as described above? I appreciate any insight. Thanks
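A minimal corrected sketch, reusing the VS_B and VS_C names from the post: the original condition compares the URI against the single literal string "/aa,/bb,/cc", which no real request contains, so each path prefix needs its own test. Also note that the iRule `virtual` command sends a request to exactly one target, so this classifies traffic rather than load balancing it; if matching requests should be spread across both VS-B and VS-C, the matching branch would instead need to select a pool whose members are the two virtual servers' addresses.

    when HTTP_REQUEST {
        # Test each path prefix separately; "contains /aa,/bb,/cc" looks for
        # the literal comma-separated string, which never matches a real URI.
        switch -glob -- [HTTP::path] {
            "/aa*" -
            "/bb*" -
            "/cc*" {
                virtual VS_B
            }
            default {
                virtual VS_C
            }
        }
    }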
Pool round robin not working with standard virtual server

I have a standard HTTPS virtual server configured with two nodes in the pool. No persistence setting is enabled and the load balancing method is round robin. For some reason, after I browse to the site and establish a connection with a backend server in the pool, all my subsequent requests go to the same server, behaving as if persistence were enabled. For example, when I refresh my browser, open the site in a new browser, or open the site in an incognito window, all my requests keep going to the same node. I tried this multiple times, kept getting connected to the same server, and watched the connection count on that server keep increasing. According to my research, because there is no persistence profile, the load balancing method is round robin, and both servers are available and able to accept traffic, every time I refresh or open the site in a new tab or browser I should be assigned to a server for that connection via round robin load balancing. But this is not what I observe. Is there a reason my virtual server shows persistence by default? Any ideas? Here are some images of my config:
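One common explanation, offered as a sketch rather than a definitive diagnosis: BIG-IP round robin selects a pool member per TCP connection, not per HTTP request, and browsers hold keep-alive connections open and reuse them across refreshes, so every request rides the already-balanced connection. Attaching a OneConnect profile (alongside the HTTP profile the virtual server already carries) makes LTM re-run load balancing for each request instead of each connection. The virtual server name below is a placeholder.

    # tmsh sketch; vs_https_example is a placeholder for the real virtual server name
    tmsh modify ltm virtual vs_https_example profiles add { oneconnect }

Even then, with only two members and a browser that opens several parallel connections, a fresh connection can still land on the same member some of the time, so a packet capture or the pool-member statistics are a better check than browser refreshes.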
Difference between BT (upgraded to ADD-ASMAWF) vs BTA device

Hi All, I have a running BT i15800 device on site, upgraded with ADD-ASMAWF, and I want to add another device. I now have an F5-BIG-BTA-i15800 option to add. Is there any technical difference between these two devices? Is there anything I should consider for this, or not? Thanks
Questions about F5 BIG-IP Multi-Datacenter Configuration

We have an infrastructure with two datacenters (DC1 and DC2), each equipped with an F5 BIG-IP using the LTM module for DNS traffic load balancing to resolvers, and the Routing module to inject BGP routes to the Internet Gateways (IGW) for redundancy. Here's our current setup (based on the attached diagram):

- Each DC has a BIG-IP connected to resolvers via virtual interfaces (VPI1 and VPI2).
- Routing tables indicate VPI1 -> DC1 and VPI2 -> DC2.
- Each DC has its own IGW for Internet connectivity.

Question 1: Handling BIG-IP Failures
If the BIG-IP in one datacenter (e.g., DC1) fails, will the DNS traffic destined for its resolvers be automatically redirected to DC2 via BGP? How can BGP be configured to ensure this? Is it feasible and recommended to create an HA Group including the BIG-IPs from both datacenters for automatic failover? What are the limitations or best practices for such a setup across remote sites?

Question 2: IGW Redundancy
Currently, each datacenter has its own IGW. We'd like to implement redundancy between the IGWs of the two DCs. Can a protocol like HSRP or VRRP be used to share a virtual IP address between the IGWs of the two datacenters? If so, how can the geographical distance be managed? If not, what are the alternatives to ensure effective IGW redundancy in a multi-datacenter environment?

Question 3: BGP Optimization and Latency
We use BGP to redirect traffic to the available datacenter in case of resolver failures. How can BGP be configured to minimize latency during this redirection? Are there specific techniques or configurations recommended by F5 to optimize this?

Question 4: Alternatives to the DNS Module for Redundancy
We are considering a solution like the DNS module (GSLB) to intelligently manage DNS traffic redirection between datacenters in case of failures. However, this could increase costs. Are there alternatives to the DNS module that would achieve this goal (intelligent redirection and inter-datacenter redundancy) while leveraging the existing LTM and Routing modules? For example, advanced BGP configurations or other built-in features of these modules?

Thank you in advance for your advice and feedback!
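On Question 1, here is one possible approach sketched in ZebOS (imish) syntax, which is what the BIG-IP Advanced Routing module exposes; every address and AS number below is a placeholder, not taken from the post. Both BIG-IPs advertise the same DNS listener prefix via route health injection, and the DC2 box prepends its AS path so its announcement only carries traffic once DC1 withdraws.

    ! Hypothetical imish sketch on the DC2 BIG-IP. Assumes the DNS virtual
    ! address is 198.51.100.53/32 and the upstream IGW peer is 203.0.113.2
    ! in AS 64512; 65002 is a placeholder local AS.
    router bgp 65002
     ! Kernel routes include virtual addresses that have Route Advertisement
     ! enabled, which is how route health injection reaches BGP
     redistribute kernel
     neighbor 203.0.113.2 remote-as 64512
     ! Prepend so DC1's announcement of the same prefix is preferred while
     ! DC1 is healthy; when DC1 withdraws, traffic fails over here
     neighbor 203.0.113.2 route-map PREFER-DC1 out
    !
    route-map PREFER-DC1 permit 10
     set as-path prepend 65002 65002 65002

With Route Advertisement on the virtual address set to advertise only while its virtual servers are available, the prefix is withdrawn automatically when the resolvers' health monitors fail, which gives the automatic DC1 -> DC2 redirection asked about; this is a sketch of one design, not an official F5 recommendation.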