Forum Discussion
iRule for redirecting requests
We have a VIP on port 80 whose default pool has all members (on different IP addresses) listening on port 80.
The requirement is that when the following HTTP::uri patterns arrive on this VIP, requests should be routed as follows:
/api/channel* - go to the channel pool, all members on port 8088
/api/space* - go to the space pool, all members on port 8089
/api/gateway* - go to the gateway pool, all members on port 8087
All of these subsystems (different ports) run on the same IP addresses:
vip 192.168.159.131 on port 80
default pool members - 192.168.159.133:80 , 192.168.159.134:80
channel pool members - 192.168.159.133:8088 , 192.168.159.134:8088
space pool members - 192.168.159.133:8089 , 192.168.159.134:8089
gateway pool members - 192.168.159.133:8087 , 192.168.159.134:8087
We need to design the iRule so that if 192.168.159.133 is picked from the default pool, all subsequent subsystem requests from that client go to that IP only, and not to 192.168.159.134.
Can anybody suggest a way to design such an iRule?
Thanks.
- Michael_Yates (Nimbostratus): Hi Narendra,
when HTTP_REQUEST {
    switch -glob [string tolower [HTTP::uri]] {
        "/api/channel/*" { pool channelpool }
        "/api/space/*" { pool spacepool }
        "/api/gateway/*" { pool gatewaypool }
        default { pool defaultpool }
    }
}
- Colin_Walker_12 (Historic F5 Account): That's pretty much how I'd do it. You could do things with the node command and IP ranges and whatnot if you don't want to build the actual pools, but realistically you're probably better off just building the pools and using the pool command like Michael showed above. You'd get a lot better reporting and monitoring that way, and it's a heck of a lot easier. ;)
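For illustration, a minimal sketch of the node-command alternative Colin mentions, assuming the ports from the original question (the variable name server_ip is illustrative). Note that this only pins subsystem requests arriving on the same client connection after the default pool has picked a member; across separate connections you would still need persistence:

```tcl
when LB_SELECTED {
    # Remember which default-pool member was picked for this connection.
    set server_ip [LB::server addr]
}
when HTTP_REQUEST {
    if { [info exists server_ip] } {
        # Steer subsystem URIs to the same server IP on the subsystem's port,
        # bypassing pool selection entirely via the node command.
        switch -glob [string tolower [HTTP::uri]] {
            "/api/channel*" { node $server_ip 8088 }
            "/api/space*"   { node $server_ip 8089 }
            "/api/gateway*" { node $server_ip 8087 }
        }
    }
}
```

Because the node command bypasses pools, you lose per-pool monitoring and statistics, which is the trade-off Colin points out.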
- Narendra_26827 (Nimbostratus): I have tried your iRule above with port translation, but requests are load balanced round-robin across the subsystem pools, so whenever a request goes to a different member IP address our application fails to launch.
- Michael_Yates (Nimbostratus): What persistence do you have set on the Virtual Server?
- Narendra_26827 (Nimbostratus): Persistence can be used, but then we will not know when it fails. For example, if we apply IP-based persistence and a different member still gets selected for one of the subsystem requests from that source IP, the application will not launch.
- The_Bhattman (Nimbostratus): Hi Narendra,
- Narendra_26827 (Nimbostratus): Thanks Bhattman, I will try this approach.
- Narendra_26827 (Nimbostratus): I was able to achieve the desired result for the above scenario with IP persistence and Match Across Services. Thanks for the help.
when CLIENT_ACCEPTED {
    set retries 0
}
when HTTP_REQUEST {
    set request [HTTP::request]
    set uri [HTTP::uri]
    switch -glob [string tolower $uri] {
        "/api/channel*" { pool channel-pool }
        "/api/space*" { pool space-pool }
        "/api/gateway*" { pool gateway-pool }
        default { pool default_pool }
    }
}
when LB_SELECTED {
    if { $retries > 0 } {
        persist none
        switch -glob [string tolower $uri] {
            "/api/channel*" { LB::reselect pool channel-pool member 192.168.159.134 8088 }
            "/api/space*" { LB::reselect pool space-pool member 192.168.159.134 8089 }
            "/api/gateway*" { LB::reselect pool gateway-pool member 192.168.159.134 8087 }
            default { LB::reselect pool default_pool member 192.168.159.134 80 }
        }
    }
}
when HTTP_RESPONSE {
    switch [HTTP::status] {
        307 {
            incr retries
            persist none
            HTTP::retry $request
        }
    }
}
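For reference, Match Across Services is configured on the persistence profile rather than in the iRule. A minimal tmsh sketch, assuming the setup described above (the profile and virtual server names here are hypothetical):

```
# Hypothetical object names. With Match Across Services enabled, a source-address
# persistence record created for one service (port 80) also applies to the other
# services on the same virtual address, so a client pinned to .133 for port 80
# is also sent to .133 for the 8087-8089 subsystem pools.
create ltm persistence source-addr subsystem_persist match-across-services enabled
modify ltm virtual api_vip persist replace-all-with { subsystem_persist }
```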
- Narendra_26827 (Nimbostratus): Anybody?
- hooleylist (Cirrostratus): What happens when it "doesn't work"? Can you add logging of the selected and connected pool member? You might also want to log when you're reselecting in LB_SELECTED and retrying in HTTP_RESPONSE.
when LB_SELECTED priority 501 {
    log local0. "[IP::client_addr]:[TCP::client_port]: Selected [LB::server]"
}
when SERVER_CONNECTED {
    log local0. "[IP::client_addr]:[TCP::client_port]: Connected [IP::server_addr]:[TCP::server_port]"
}