v.10 - Working the GTM Command Line Interface

A couple of weeks ago I blogged about the enhancements that v.10 brought to GTM, the most anticipated being that GTM now has a command line for object configuration! The Traffic Management Shell, or tmsh, can be set as the default shell for your users, or if you have administrative access to the console, you can enter the shell with the tmsh command (go figure!). This tech tip will work through the wideIP configuration shown in the image below, built out exclusively in tmsh.

[Image: wideIP configuration diagram]
1) Create the Listener

This is typically done during the initial configuration of the GTM, but I'll include it here for reference:

create gtm listener self_isp1 address 10.10.10.5

2) Create the iRule*

edit gtm rule testwip-rule

    when DNS_REQUEST {
      if { [IP::addr [IP::client_addr]/24 equals "10.10.1.0"] } {
        persist disable
      }
    }


*A couple of notes:

  1. An iRule isn't required here; it's included purely for demonstration purposes.
  2. The edit command will launch (go figure!) an editor. Place your rule within the definition { } section, type :wq! when finished, and select y to submit.
  3. edit is not supported within the transaction process, so any necessary edits need to occur prior to creating the transaction.
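For anyone curious what the iRule's /24 comparison actually does: BIG-IP evaluates [IP::addr [IP::client_addr]/24 equals 10.10.1.0] natively, and the logic can be sketched in Python with the standard library (illustrative only, not how the BIG-IP implements it):

```python
import ipaddress

def client_in_subnet(client_ip, network="10.10.1.0/24"):
    """Mimic the iRule check: is the client's address in the given /24?"""
    return ipaddress.ip_address(client_ip) in ipaddress.ip_network(network)

print(client_in_subnet("10.10.1.42"))   # True: persistence gets disabled
print(client_in_subnet("10.10.2.42"))   # False: persistence stays on
```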

3) Create the GTM objects within a transaction

A transaction isn't strictly necessary, but it's a nice option: if any part of the configuration is wrong, none of it is accepted. I disabled verify-member-availability because ltm1 and ltm2 don't exist. I've purposely made a typo (dc3 instead of dc2 in the datacenter reference for server ltm2) to check out the functionality:

create transaction
create gtm datacenter dc1 dc2
create gtm server ltm1 addresses add { 10.10.100.1 } monitor bigip datacenter dc1 virtual-servers add { 10.10.100.10:80 10.10.100.11:80 10.10.100.12:80 }
create gtm server ltm2 addresses add { 10.10.200.1 } monitor bigip datacenter dc3 virtual-servers add { 10.10.200.10:80 10.10.200.11:80 10.10.200.12:80 }
create gtm pool gpool1 members add { 10.10.100.10:80 {ratio 1} 10.10.100.11:80 {ratio 2} 10.10.100.12:80 {ratio 3} } load-balancing-mode ratio verify-member-availability disabled
create gtm pool gpool2 members add { 10.10.200.10:80 {ratio 1} 10.10.200.11:80 {ratio 2} 10.10.200.12:80 {ratio 3} } load-balancing-mode ratio verify-member-availability disabled
create gtm wideip test.wip.com pool-lb-mode ratio pools add { gpool1 { ratio 1 } gpool2 { ratio 2 } } persistence enabled ttl-persistence 300 rules add { testwip-rule }

Now that the transaction is populated, a listing shows the configuration steps in sequence:

list transaction

1: (tmos)# create gtm datacenter dc1 dc2
2: (tmos)# create gtm server ltm1 addresses add { 10.10.100.1 } monitor bigip datacenter dc1 virtual-servers add { 10.10.100.10:80 10.10.100.11:80 10.10.100.12:80 }
3: (tmos)# create gtm server ltm2 addresses add { 10.10.200.1 } monitor bigip datacenter dc3 virtual-servers add { 10.10.200.10:80 10.10.200.11:80 10.10.200.12:80 }
4: (tmos)# create gtm pool gpool1 members add { 10.10.100.10:80 {ratio 1} 10.10.100.11:80 {ratio 2} 10.10.100.12:80 {ratio 3} } load-balancing-mode ratio verify-member-availability disabled
5: (tmos)# create gtm pool gpool2 members add { 10.10.200.10:80 {ratio 1} 10.10.200.11:80 {ratio 2} 10.10.200.12:80 {ratio 3} } load-balancing-mode ratio verify-member-availability disabled
6: (tmos)# create gtm wideip test.wip.com pool-lb-mode ratio pools add { gpool1 { ratio 1 } gpool2 { ratio 2 } } persistence enabled ttl-persistence 300 rules add { testwip-rule }

I can now submit the transaction, and thanks to my intentionally poor proofreading skills, I get an error:

submit transaction

01070189:3: Server ltm2 refers to a data center (dc3) that does not exist

 

Your options here are to delete the transaction with the delete transaction command, or to modify the line causing the error, which I'll do here before resubmitting the transaction:

modify transaction replace 3 "create gtm server ltm2 addresses add { 10.10.200.1 } monitor bigip datacenter dc2 virtual-servers add { 10.10.200.10:80 10.10.200.11:80 10.10.200.12:80 }"

submit transaction

OK, cool. No errors. To take a look at the wideIP and make sure everything took, you can issue show running-config gtm wideip (this lists all wideIPs, but in this case there's only one, so I didn't specify a name):

show running-config gtm wideip

wideip test.wip.com {
    persistence enabled
    pool-lb-mode ratio
    pools {
        gpool1 { }
        gpool2 {
            order 1
            ratio 2
        }
    }
    rules {
        testwip-rule
    }
    ttl-persistence 300
}

Good, everything's there. Now we save the configuration:

save config

UPDATE -- With the GTM module, the save is unnecessary.  This is still a necessary step for the LTM module, however.

4) Take it for a test run


Sweet! Now let's make sure the configuration works. We have ratios at the pool and wideIP level, so we need to get some stats generated. However, manually generating lookups is long and boring...so PowerShell to the rescue!

PS> for($i=0; $i -lt 40000; $i++)
 { nslookup test.wip.com 10.10.10.5 }
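If PowerShell isn't handy, a rough bash equivalent would look like the following (a sketch, not from the original post; it assumes dig is installed and that 10.10.10.5, the listener from step 1, is reachable):

```shell
# Fire 40,000 A-record queries at the GTM listener from step 1.
# Requires dig (bind-utils/dnsutils); lower the count for a quicker run.
for i in $(seq 1 40000); do
    dig @10.10.10.5 test.wip.com +short > /dev/null
done
```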

5) Digging into the results!

Hmm, not nearly as snappy as a dedicated test platform; time for a break... Twenty minutes later, all 40,000 requests are accounted for. Details of the distribution, courtesy of the show gtm pool detail command:

sho gtm pool detail

Pool: gpool2
--------------------------------------
Status
  Availability : unknown
  State        : enabled
  Reason       : Pool gpool2: Checking

Load Balancing
  Preferred            26.7K
  Alternate                0
  Fallback                 0
  Returned to DNS          0

Miscellaneous
  Connections Dropped      0
  Local DNS Persisted      0

Pool Member: 10.10.200.10:80:gpool2
-------------------------------------
Status
  Availability : unknown
  State        : enabled
  Reason       : Pool gpool2 member :

Load Balancing
  Preferred     4.4K
  Alternate        0
  Fallback         0

Pool Member: 10.10.200.11:80:gpool2
-------------------------------------
Status
  Availability : unknown
  State        : enabled
  Reason       : Pool gpool2 member :

Load Balancing
  Preferred     8.9K
  Alternate        0
  Fallback         0

Pool Member: 10.10.200.12:80:gpool2
-------------------------------------
Status
  Availability : unknown
  State        : enabled
  Reason       : Pool gpool2 member :

Load Balancing
  Preferred     13.3K
  Alternate         0
  Fallback          0

Pool: gpool1
--------------------------------------
Status
  Availability : unknown
  State        : enabled
  Reason       : Pool gpool1: Checking

Load Balancing
  Preferred            13.3K
  Alternate                0
  Fallback                 0
  Returned to DNS          0

Miscellaneous
  Connections Dropped      0
  Local DNS Persisted      0

Pool Member: 10.10.100.10:80:gpool1
-------------------------------------
Status
  Availability : unknown
  State        : enabled
  Reason       : Pool gpool1 member :

Load Balancing
  Preferred     2.2K
  Alternate        0
  Fallback         0

Pool Member: 10.10.100.11:80:gpool1
-------------------------------------
Status
  Availability : unknown
  State        : enabled
  Reason       : Pool gpool1 member :

Load Balancing
  Preferred     4.4K
  Alternate        0
  Fallback         0

Pool Member: 10.10.100.12:80:gpool1
-------------------------------------
Status
  Availability : unknown
  State        : enabled
  Reason       : Pool gpool1 member :

Load Balancing
  Preferred     6.6K
  Alternate        0
  Fallback         0
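The numbers above line up with the two-level ratio math: the wideIP splits queries across pools by pool ratio (gpool1:gpool2 = 1:2), and each pool splits its share across members by member ratio (1:2:3). A quick Python sanity check of that arithmetic (not part of the original post):

```python
# Expected query counts for a two-level ratio setup: the wideIP divides
# the total by pool ratio, then each pool divides its share by member ratio.
def expected_counts(total, pool_ratios, member_ratios):
    pool_sum = sum(pool_ratios.values())
    counts = {}
    for pool, pratio in pool_ratios.items():
        pool_share = total * pratio / pool_sum
        mratios = member_ratios[pool]
        counts[pool] = [pool_share * r / sum(mratios) for r in mratios]
    return counts

counts = expected_counts(
    40000,
    {"gpool1": 1, "gpool2": 2},
    {"gpool1": [1, 2, 3], "gpool2": [1, 2, 3]},
)
for pool, members in counts.items():
    # gpool1 comes out near 2.2K/4.4K/6.7K, gpool2 near 4.4K/8.9K/13.3K,
    # matching the stats above (GTM's display truncates to one decimal)
    print(pool, [round(m) for m in members])
```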

And in visual format, courtesy of Excel:

[Image: query distribution chart]

Published Apr 21, 2009
Version 1.0
Comments

  • virtual-servers add { 10.10.100.1:80 { ltm-name vs1 }}
  • Jason, I don't think that works. What I got to work was the following (on v11.0):

    virtual-servers add { my_vs_name1 { destination 5.5.5.5:8080 } my_vs_name2 { destination 5.5.5.6:8080 }}
  • That may well be the case on v11; I was looking at the v10 syntax.
  • You are right, there is a syntax change in v11. However, I looked in the tmsh reference guide for v10 (both 10.1 and 10.2) and I don't see an option named "ltm-name" in the GTM module components -> server section, or in the whole guide actually. (Of course, this doesn't mean it doesn't work; maybe the docs are wrong. :) )