Forum Discussion

Eridano_Di_Piet
Aug 12, 2011

Reverting to http1.0 from http1.1

Hi guys,


we have one virtual server (standard, with a TCP profile, really basic config) load balancing HTTP connections to a pool of 4 members in round-robin. Under heavy load, one of the servers (seemingly at random) gets overloaded and crashes; it looks like it receives more HTTP requests than the others. To avoid this we're thinking of using HTTP/1.0 instead of HTTP/1.1, so that each single HTTP request is served by a different node and load balancing is more granular.


I thought of using an iRule like this:





when HTTP_REQUEST {
    if { [HTTP::header is_keepalive] } {
        HTTP::header replace "Connection" "close"
        HTTP::version "1.0"
    }
}





which changes both the version and connection headers.


Since I'm not an http expert I'd really appreciate your opinion.





4 Replies

  • I would suggest using a connection limit at the node level for each server, so that when a server reaches its limit it stops accepting new connections while it serves the existing ones.



    When all servers reach their limit, LTM starts dropping all new connections. To avoid this situation you can use the active_members command in an iRule to detect that condition and redirect all your requests to a virtual waiting room.



    A virtual waiting room is nothing but a few servers serving a static HTML page that redirects the request back to the original servers after x minutes.



    Let me know if this makes sense.
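    A minimal sketch of the active_members check described above (the pool name http_pool and the redirect URL are placeholders, not from the original post):

    when HTTP_REQUEST {
        # If the main pool has no members left accepting connections,
        # send the client to the waiting-room page instead.
        if { [active_members http_pool] < 1 } {
            # Hypothetical waiting-room URL; adjust to your environment.
            HTTP::redirect "http://waitingroom.example.com/busy.html"
        }
    }

    The waiting-room page would then use a meta refresh (or similar) to retry the original URL after a delay.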


  • Hi Erick,

    Sounds like you are not exactly sure that more HTTP requests are what's causing the issue. I would first look at changing the load balancing algorithm and also investigate the OneConnect profile to see if that might help before starting to change HTTP versions.

    If you are going to end up needing to round-robin based on HTTP request, then I suppose an iRule would be:

    when HTTP_REQUEST {
        set poolname "pooltest"
        if { [active_members $poolname] < 1 } {
            # No active pool members; reset client
            reject
            return
        }
        set count [members $poolname]
        set try 0
        while { $try < $count } {
            set plist [lindex [members -list $poolname] [expr {[table incr "round-robin:$poolname"] % $count}]]
            set mip [lindex $plist 0]
            set mport [lindex $plist 1]
            if { [LB::status pool $poolname member $mip $mport] eq "up" } {
                pool $poolname member $mip $mport
                return
            }
            incr try
        }
    }


  • Hi all,


    thanks for your replies. The problem we have is that with HTTP/1.1 a default mechanism in the protocol itself (persistent connections) creates a sort of persistence, so a single TCP connection is reused for several HTTP requests; as far as I have understood, if a page contains several objects they are all downloaded within a single TCP connection. With HTTP/1.0 things should be different, because each object requires its own connection to be downloaded, so it is easier to spread all those connections between the servers. That's why we want to try changing the version back to the old 1.0.
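    For illustration (hypothetical requests, not from the original post), the difference is visible in the request line and the Connection header:

    HTTP/1.1 request (connection kept open by default, reused for later requests):
        GET /index.html HTTP/1.1
        Host: www.example.com

    HTTP/1.0-style request (connection closed after the response):
        GET /index.html HTTP/1.0
        Connection: close

    With the second form, every object on the page arrives over a fresh TCP connection, which the LTM can then balance independently.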


    Thanks again,







  • Hi Erik,



    The servers will potentially get more unnecessary load if you force LTM to open a new connection for every HTTP request. I'd instead reconsider your load balancing algorithm; if you're concerned about uneven distribution, you could try a dynamic algorithm like Fastest or Least Connections.
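    If it helps, switching the algorithm is a one-liner in tmsh (the pool name http_pool is a placeholder for your own pool):

    tmsh modify ltm pool http_pool load-balancing-mode least-connections-member

    Least Connections sends each new connection to the member with the fewest active connections, which tends to even out load when request costs vary between connections.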