
Forum Discussion

luigi_avella_10
Jun 11, 2014

GTM: https monitor

Hi, I've configured an HTTPS monitor on a BIG-IP 1600 GTM pool member for a service configured on a BIG-IP 3600 LTM. It fails with the following error:

Jun 11 14:59:05 gtm2 gtmd[1894]: 011ae0f2:1: Monitor instance yyy.yyy.yyy.102:443 UNKNOWN_MONITOR_STATE --> DOWN from yyy.yyy.yyy.161 (state: protocol mismatch)

If I run "telnet yyy.yyy.yyy.102 443" from the GTM, the connection succeeds.

Should I change the cipher list on the GTM?

Thank you for your attention.
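
Note that telnet only proves that the TCP connection opens; the HTTPS monitor additionally has to complete an SSL handshake and then exchange its send/recv strings. A closer approximation of what the monitor attempts can be run from the GTM shell with openssl s_client (a sketch only: the IP is the redacted one from the log line above, and the cipher string mirrors the monitor's cipherlist):

    # TCP connect plus SSL handshake, which telnet alone cannot test
    openssl s_client -connect yyy.yyy.yyy.102:443 -cipher 'DEFAULT:+SHA:+3DES:+kEDH'
    # once connected, type the monitor's send string by hand:
    GET /

If the handshake fails here as well, the cipher list or SSL version negotiation is the likely culprit; if it succeeds, the problem is more likely in how the GTM applies the monitor.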

 

23 Replies

  • Here is the config:

    (gtm2.domain.external.it yyy.yyy.yyy.161) dumped 06/11/2014 17:52:47 CEST - GTM Version 9.4.8

    globals {
       probe_protocol { icmp }
    }

    // *** Data Center Definitions (2) ***

    datacenter {   // 2 server(s)
       name "NY"
       location "NY"
       server "LTM_NY"
       server "GTM_NY"
    }

    datacenter {   // 2 server(s)
       name "W"
       location "W"
       server "LTM_W"
       server "GTM_W"
       link "firewall_internet"
    }

     

    // *** Monitor Definitions (10) ***

    monitor "testA" {
       defaults from "https"
       interval 30 timeout 120
       probe_interval 0 probe_timeout 5 probe_num_probes 1 probe_num_successes 1
       dest www.www.www.127:443   // https
       send "GET /" recv ""
       compatibility "enabled"
       cipherlist "DEFAULT:+SHA:+3DES:+kEDH"
       cert "" key "" username "" password ""
       partition "Common"
    }

    monitor "bigip_GTM_W" {
       defaults from "bigip"
       interval 30 timeout 90
       probe_interval 0 probe_timeout 1 probe_num_probes 1 probe_num_successes 1
       dest yyy.yyy.yyx.161:4353   // 4353
       partition "Common"
    }

    monitor "https_B" {
       defaults from "https"
       interval 30 timeout 120
       probe_interval 0 probe_timeout 5 probe_num_probes 1 probe_num_successes 1
       dest www.www.www.144:443   // https
       send "GET /" recv ""
       compatibility "enabled"
       cipherlist "DEFAULT:+SHA:+3DES:+kEDH"
       cert "" key "" username "" password ""
       partition "Common"
    }

    monitor "testgtm_NY" {
       defaults from "http"
       interval 30 timeout 120
       probe_interval 0 probe_timeout 5 probe_num_probes 1 probe_num_successes 1
       dest yyy.yyy.yyy.199:80   // http
       send "GET /" recv ""
       username "" password ""
       partition "Common"
    }

    monitor "B" {
       defaults from "https"
       interval 30 timeout 120
       probe_interval 0 probe_timeout 5 probe_num_probes 1 probe_num_successes 1
       dest www.www.www.144:443   // https
       send "GET /" recv ""
       compatibility "enabled"
       cipherlist "DEFAULT:+SHA:+3DES:+kEDH"
       cert "" key "" username "" password ""
       partition "Common"
    }

    monitor "B_NY" {
       defaults from "https"
       interval 30 timeout 120
       probe_interval 0 probe_timeout 5 probe_num_probes 1 probe_num_successes 1
       dest yyy.yyy.yyy.102:443   // https
       send "GET /" recv ""
       compatibility "disabled"
       cipherlist "DEFAULT:+SHA:+3DES:+kEDH"
       cert "" key "" username "" password ""
       partition "Common"
    }

    monitor "testgtm" {
       defaults from "http"
       interval 30 timeout 120
       probe_interval 0 probe_timeout 5 probe_num_probes 1 probe_num_successes 1
       dest yyy.yyy.yyx.128:80   // http
       send "GET /" recv ""
       username "" password ""
       partition "Common"
    }

    monitor "https_testA" {
       defaults from "https"
       interval 30 timeout 120
       probe_interval 0 probe_timeout 5 probe_num_probes 1 probe_num_successes 1
       dest www.www.www.127:443   // https
       send "GET /" recv ""
       compatibility "enabled"
       cipherlist "DEFAULT:+SHA:+3DES:+kEDH"
       cert "" key "" username "" password ""
       partition "Common"
    }

    monitor "B_NY_http" {
       defaults from "http"
       interval 30 timeout 120
       probe_interval 0 probe_timeout 5 probe_num_probes 1 probe_num_successes 1
       dest yyy.yyy.yyy.102:80   // http
       send "GET /" recv ""
       username "" password ""
       partition "Common"
    }

    monitor "C_NY" {
       defaults from "https"
       interval 30 timeout 120
       probe_interval 0 probe_timeout 5 probe_num_probes 1 probe_num_successes 1
       dest yyy.yyy.yyy.212:443   // https
       send "GET /" recv ""
       compatibility "enabled"
       cipherlist "DEFAULT:+SHA:+3DES:+kEDH"
       cert "" key "" username "" password ""
       partition "Common"
    }

     

    // *** Link Definitions (1) ***

    link {   // datacenter=W
       name "firewall_internet"
       address yyy.yyy.yyx.2
       monitor "gateway_icmp"
    }

    // *** Server Definitions (4) ***

    server {   // datacenter=W, VS=3
       name "LTM_W"
       type bigip
       box { address yyy.yyy.yyx.52 unit_id 1 }
       box { address yyy.yyy.yyx.53 unit_id 2 }
       monitor "bigip"
       vs {
          name "B"
          address zzz.zzz.zzz.102:443   // https
          monitor "B"
          // translates to www.www.www.144:443
       }
       vs {
          name "testA"
          address zzz.zzz.zzz.151:443   // https
          monitor "testA"
          // translates to www.www.www.127:443
       }
       vs {
          name "testgtm"
          address zzz.zzz.zzz.199:80   // http
          monitor "testgtm"
          // translates to yyy.yyy.yyx.128:80
       }
    }

    server {   // datacenter=W, VS=0
       name "GTM_W"
       type bigip
       box {   // gtm
          address yyy.yyy.yyx.161
          unit_id 1
       }
       monitor "bigip"
    }

    server {   // datacenter=NY, VS=2
       name "LTM_NY"
       type bigip
       box { address yyy.yyy.yyy.148 unit_id 1 }
       box { address yyy.yyy.yyy.149 unit_id 2 }
       monitor "bigip"
       vs {
          name "B_NY"
          address zzz.zzz.zzx.102:443   // https
          monitor "gateway_icmp"
          // translates to yyy.yyy.yyy.102:443
       }
       vs {
          name "testgtm_NY"
          address zzz.zzz.zzx.199:80   // http
          monitor "testgtm_NY"
          // translates to yyy.yyy.yyy.199:80
       }
    }

    server {   // datacenter=NY, VS=0
       name "GTM_NY"
       type bigip
       box {   // gtm
          address yyy.yyy.yyy.161
          unit_id 1
       }
       monitor "bigip"
    }

     

    // *** Pool Definitions (3) ***

    pool {
       name "B"
       ttl 30
       preferred ga alternate null fallback drop_packet
       partition "Common"
       member { address zzz.zzz.zzz.102:443 monitor "B" partition "Common" }
       member { address zzz.zzz.zzx.102:443 monitor "B_NY" disabled partition "Common" }
    }

    pool {
       name "testgtm"
       ttl 30
       monitor all min 1 of "testgtm_NY" "testgtm"
       preferred ga alternate null fallback null
       partition "Common"
       member { address zzz.zzz.zzz.199:80 monitor "testgtm" partition "Common" }
       member { address zzz.zzz.zzx.199:80 monitor "testgtm_NY" partition "Common" }
    }

    pool {
       name "testA"
       ttl 30
       preferred rr
       partition "Common"
       member zzz.zzz.zzz.151:443
    }

    // *** Wide IP Definitions (3) ***

    wideip {
       name "B.external.it"
       pool_lbmode rr
       partition "Common"
       pool "B"
    }

    wideip {
       name "testgtm.domain.external.it"
       pool_lbmode ga
       partition "Common"
       pool "testgtm"
    }

    wideip {
       name "testA.external.it"
       pool_lbmode rr
       partition "Common"
       pool "testA"
    }

    // *** Application Definitions (1) ***

    application {
       name "testB.external.it"
       partition "Common"
    }

     

  • So the pool member that won't show as up is your B_NY virtual server on your LTM_NY GTM server object?

     

  • Yes. Why does the GTM, in the case of B_NY, check by probing the virtual server on port 443 (as shown in the traffic capture), while for the remote one (B) it doesn't? I noticed this by running a capture on LTM_W (the LTM where B is defined): there was no traffic coming from the GTM on port 443, only traffic for the big3d process. Despite that, the health monitor for B is OK.

     

  • I think that when you configure a health monitor on a GTM pool member that is actually a virtual server on an LTM, the GTM doesn't use it the way one might expect.

     

    It looks like you have your GTM configured correctly with both LTM_NY and LTM_W as servers, and the target LTM virtual servers as your GTM pool members, such that no health monitoring at all should be necessary at the GTM level.

     

    Can you pull all monitoring configured at the GTM level whether it's on the pool or on the member/virtual server?
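
    In the 9.x config syntax of the dump above, that would mean removing the monitor references so that a member's status comes purely from the LTM over iQuery. A sketch using the names from the posted config (a hypothetical edit, not the poster's actual config):

        member { address zzz.zzz.zzx.102:443 partition "Common" }   // no monitor: status is reported by LTM_NY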

     

  • "Can you pull all monitoring configured at the GTM level whether it's on the pool or on the member/virtual server?" I don't understand; could you explain that again? Thank you.

     

  • Hi, I solved it by performing these (admittedly ridiculous) operations:

     

    • delete the VS from the pool

       

    • delete the VS from its server

       

    • recreate everything from scratch, without monitors

       

    What a weird behaviour!

     

  • It works only if I leave all the monitor fields empty and set "inherit from pool" on the VS.
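
    For anyone hitting this on v11, removing the monitors without recreating the objects should be possible from tmsh. The following is a sketch only (object names are taken from this thread, and the exact member-name syntax may vary by version):

        # detach the monitor from the GTM pool and its member
        tmsh modify gtm pool B monitor none
        tmsh modify gtm pool B members modify { LTM_NY:B_NY { monitor none } }
        tmsh save sys config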

     

    Thank you

     

  • I am experiencing a similar issue with GTM monitors. What version are you using? In 10.2.4 HF6 I have a GTM pool which consists of an LTM VIP. In 10.2.4 I have an HTTPS monitor on the GTM pool and the pool is "green". When I upgrade to 11.5.1 (or, for that matter, 11.2.1), the monitor doesn't work, but if I remove the monitor the pool comes up... I can't seem to figure this out. In fact, when I do a tcpdump I am not seeing GTM send a health check at all... strange.

     

    • Cory_50405
      That's because the GTM -> LTM health check is done over iQuery (TCP port 4353). LTM shares its object health to GTM over this link. Assigning a monitor to a GTM pool that contains a virtual server on an LTM will not work. You should let LTM report the health of its virtual servers to GTM through iQuery.
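
      Since LTM reports virtual-server health to GTM over iQuery rather than answering per-member probes, a quick way to verify that channel (instead of adding monitors) is to look at the iQuery connections and their traffic. A sketch for v11; exact commands may differ by version:

          # state of the iQuery connections to each server object
          tmsh show gtm iquery all
          # watch iQuery itself (TCP 4353) instead of expecting probes on port 443
          tcpdump -ni 0.0 port 4353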
  • So is that a new behavior in version 11? It works in 10.2.4, in multiple sync groups...

     

    • Cory_50405
      I've not found any documentation stating this, but it certainly seems to be the case.