Configuring A Generic Routing Encapsulation (GRE) Tunnel Using BIG-IP

This article is written by, and published on behalf of, DevCentral MVP Leonardo Souza.

---

Introduction

Recently I helped a customer define and test the configuration of multiple Generic Routing Encapsulation (GRE) tunnels between BIG-IPs and Zscaler.

 

While I have implemented IPsec on BIG-IP before, GRE on BIG-IP was new to me. There is a lot of documentation about configuring IPsec on BIG-IP, but very little about GRE. That is understandable, since GRE is a less common setup on BIG-IP than IPsec.

 

One important difference between GRE and IPsec: GRE does not provide encryption, while IPsec does. In this customer's case, the traffic passing through the GRE tunnel goes to a cloud Internet proxy, so encryption (if necessary) is provided by the higher layers.

 

This means that the GRE tunnel will carry HTTP and HTTPS traffic, so we do not need to worry about encryption at the tunnel layer.

While Zscaler has documentation about setting up GRE to use their services, their examples only cover Cisco and Juniper, so this article fills the gap on how to set up GRE on the BIG-IP side of the tunnel.

The Setup

In this case I am going to document the setup of a GRE tunnel between 2 BIG-IPs. This should give you the general idea of how GRE works in the F5 world, so you can translate it to whatever requirements you have when setting up GRE between a BIG-IP and other devices.

  1. I have a laptop [LABWin] that I will use to simulate a user accessing the webserver (everything in the lab is virtual).
  2. I have the 2 BIG-IPs [LABBIGIP1, LABBIGIP2] connected via the GRE tunnel, and also connected to the same VLAN and network.
  3. Lastly, the webserver [LABServer3] is used to provide the website content.

 

Both LABWin and LABBIGIP1 will be connected to network 172.20.0.0/24 with IPs 172.20.0.10 and 172.20.0.1 respectively.

 

I will explain this in more detail next, but LABBIGIP1 and LABBIGIP2 will also be connected to 2 networks, 172.18.0.0/30 and 198.18.0.0/30, with their respective IPs ending in .1 and .2.

Both LABBIGIP2 and LABServer3 will be connected to network 172.30.0.0/24, with IPs 172.30.0.2 and 172.30.0.200 respectively.
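
To summarize the lab addressing described above:

Device        Network           IP
LABWin        172.20.0.0/24     172.20.0.10
LABBIGIP1     172.20.0.0/24     172.20.0.1
LABBIGIP1     198.18.0.0/30     198.18.0.1   (tunnel outside)
LABBIGIP2     198.18.0.0/30     198.18.0.2   (tunnel outside)
LABBIGIP1     172.18.0.0/30     172.18.0.1   (tunnel inside)
LABBIGIP2     172.18.0.0/30     172.18.0.2   (tunnel inside)
LABBIGIP2     172.30.0.0/24     172.30.0.2
LABServer3    172.30.0.0/24     172.30.0.200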

 

GRE Overview

GRE was developed by Cisco, but is now an IETF standard (RFC 2784).

 

I am sure you have heard about the OSI model, or the TCP/IP model; both are based on the idea of encapsulation. The easiest way to explain the concept is to think of those dolls you open to find another one inside, and you keep opening until you get to the last one.

(I did not know the name, but Google is a friend, so if you want to learn something non-technical today: https://en.wikipedia.org/wiki/Matryoshka_doll )

 

Encapsulation works like this: you keep adding layers until all layers have been added, then send the result to the other device, which removes the layers one by one until it gets to the message. GRE adds one GRE layer plus another IP layer, allowing messages to be sent, for example, via the Internet.

 

A computer sends a packet to a destination via its local router, and the router has a route to the destination via a GRE tunnel that uses the Internet. The router adds a layer with the GRE information, then adds another IP layer with public IPs, allowing the traffic to be routed successfully over the Internet.

 

The destination router receives the packet, analyses the GRE information, and uses it to send the traffic to its internal network; the packet is then able to arrive at its proper destination.

 

Here is a tcpdump example showing multiple encapsulation layers.

 

The VLAN layer says the layer above (or below, depending on how you look at it) is IPv4; the IPv4 layer says the layer above is GRE; the GRE layer says the layer above is IP; and so it continues until the HTTP layer.
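
In other words, the capture shows a stack like this, outermost layer first:

Ethernet -> VLAN -> IPv4 (outer: 198.18.0.1 <-> 198.18.0.2) -> GRE -> IPv4 (inner: 172.20.0.10 <-> 172.30.0.200) -> TCP -> HTTP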

 

So, GRE just uses the encapsulation approach to add extra layers that will help with the traffic flow.

 

Configuration

NOTE: I am using BIG-IP version 15.1.0, but there is nothing here that would not work in other supported versions.

 

First, we need the VLANs set up:

 

LABBIGIP1

net vlan gre_tunnel_outside_vlan {
   interfaces {
       1.2 { }
   }
   tag 4093
}
net vlan laptop_lan_vlan {
   interfaces {
       1.1 { }
   }
   tag 4094
}

 

LABBIGIP2

net vlan gre_tunnel_outside_vlan {
   interfaces {
       1.2 { }
   }
   tag 4093
}
net vlan server_lan_vlan {
   interfaces {
       1.1 { }
   }
   tag 4094
}
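
If you prefer the command line, these VLANs can be created with tmsh; here is a minimal sketch for LABBIGIP1 (LABBIGIP2 is analogous, with server_lan_vlan instead of laptop_lan_vlan). The interfaces are untagged, so the tag is only an internal identifier:

tmsh create net vlan gre_tunnel_outside_vlan interfaces add { 1.2 { untagged } } tag 4093
tmsh create net vlan laptop_lan_vlan interfaces add { 1.1 { untagged } } tag 4094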

 

Next, the most important part, the GRE tunnel itself:

 

LABBIGIP1

net tunnels tunnel gre_tunnel {
   local-address 198.18.0.1
   profile gre
   remote-address 198.18.0.2
   traffic-group traffic-group-local-only
}

 

LABBIGIP2

net tunnels tunnel gre_tunnel {
   local-address 198.18.0.2
   profile gre
   remote-address 198.18.0.1
   traffic-group traffic-group-local-only
}

 

The configuration above only shows the relevant/non-standard settings.

I could use all-properties with the tmsh command, but it is easier to explain the options you will see in the GUI.

 

The important ones (using LABBIGIP1 as an example for the IPs; a tmsh sketch follows this list):

  • Name: no explanation needed.
  • Profile: gre, as that is the type of tunnel we want to set up.
  • Local Address: 198.18.0.1. This is the local IP of the outside layer of the tunnel; the public address, for example, if you are running the tunnel via the Internet.
  • Remote Address: select Specify…, then 198.18.0.2. Same idea as the local address, but in this case it is the remote device's IP.
  • Mode: Bidirectional. This is important: unlike IPsec, this does not indicate who can start the tunnel; it indicates in which direction traffic can flow. If you select Outbound and send a ping from the laptop to the server, the ICMP packet will arrive at the server, but the laptop will never receive the response.
  • Traffic Group: /Common/traffic-group-local-only. In this lab we are using standalone devices; if you have an HA pair, you can choose to have a tunnel per device or a single tunnel on the active device.
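
Putting those options together, a sketch of the equivalent tmsh command for LABBIGIP1 (swap the local and remote addresses for LABBIGIP2); mode bidirectional is the default, which is why it does not show up in the listings above:

tmsh create net tunnels tunnel gre_tunnel profile gre local-address 198.18.0.1 remote-address 198.18.0.2 mode bidirectional traffic-group traffic-group-local-only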

 

Now let’s create the self IPs:

 

LABBIGIP1

net self gre_tunnel_outside_self {
   address 198.18.0.1/30
   traffic-group traffic-group-local-only
   vlan gre_tunnel_outside_vlan
}
net self gre_tunnel_self {
   address 172.18.0.1/30
   traffic-group traffic-group-local-only
   vlan gre_tunnel
}
net self laptop_lan_self {
   address 172.20.0.1/24
   traffic-group traffic-group-local-only
   vlan laptop_lan_vlan
}

 

LABBIGIP2

net self server_lan_self {
   address 172.30.0.2/24
   traffic-group traffic-group-local-only
   vlan server_lan_vlan
}
net self gre_tunnel_self {
   address 172.18.0.2/30
   traffic-group traffic-group-local-only
   vlan gre_tunnel
}
net self gre_tunnel_outside_self {
   address 198.18.0.2/30
   traffic-group traffic-group-local-only
   vlan gre_tunnel_outside_vlan
}

 

As you can see above, gre_tunnel_self is the self IP that is linked to the GRE tunnel (note that its VLAN is the tunnel, gre_tunnel).

 

Basically, it acts as an interface on the tunnel, and it automatically creates a route to the tunnel's locally connected network, so routing can be done.

In relation to port lockdown, you can use any setting, including Allow None, and the GRE tunnel will still work.
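
As a sketch, the tunnel self IP for LABBIGIP1 could be created like this via tmsh (allow-service none shown here to match the port lockdown note above):

tmsh create net self gre_tunnel_self address 172.18.0.1/30 vlan gre_tunnel allow-service none traffic-group traffic-group-local-only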

 

Also, the traffic group here follows the same logic as in the GRE tunnel settings, so make sure both use the same one.

 

Next we need routing:

 

LABBIGIP1

net route gre_route {
   interface /Common/gre_tunnel
   network 172.30.0.0/24
}

 

LABBIGIP2

net route gre_route {
   interface /Common/gre_tunnel
   network 172.20.0.0/24
}

 

LABWin

Network Destination       Netmask          Gateway       Interface     Metric
172.30.0.0                255.255.255.0    172.20.0.1    172.20.0.10   26

 

LABServer3

Destination    Gateway        Genmask          Flags   MSS Window   irtt   Iface
172.20.0.0     172.30.0.2     255.255.255.0    UG      0 0          0      ens192

 

Normal routing rules apply: devices need to know where to send traffic for non-local networks.

On the BIG-IP, you set the route resource to use a VLAN/tunnel, and then select the GRE tunnel.
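
For reference, sketches of the commands that would create the routes above, using the interface names from the outputs shown:

On LABBIGIP1 (use network 172.20.0.0/24 on LABBIGIP2):
tmsh create net route gre_route network 172.30.0.0/24 interface gre_tunnel

On LABWin (Windows, elevated prompt):
route add 172.30.0.0 mask 255.255.255.0 172.20.0.1

On LABServer3 (Linux):
ip route add 172.20.0.0/24 via 172.30.0.2 dev ens192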

 

Virtual Servers:

Lastly, because we are talking about a BIG-IP, you need a listener for the BIG-IP to process the traffic and send it via the tunnel.

 

LABBIGIP1

ltm virtual gre_fw_vs {
   destination 172.30.0.0:any
   ip-forward
   mask 255.255.255.0
   profiles {
       fastL4 { }
   }
   source 0.0.0.0/0
   translate-address disabled
   translate-port disabled
   vlans {
       laptop_lan_vlan
   }
   vlans-enabled
}

 

To provide some security, and to avoid the tunnel having to handle unnecessary traffic, the virtual server is only listening on VLAN laptop_lan_vlan. Two points to note (a tmsh sketch follows this list):

  • Make sure the destination matches the server LAN network (here 172.30.0.0/24).
  • Make sure you select All Protocols for the Protocol option when creating the virtual server.
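
As a sketch, the same forwarding virtual server could be created in one tmsh line, with the attribute values from the listing above:

tmsh create ltm virtual gre_fw_vs destination 172.30.0.0:any mask 255.255.255.0 ip-forward profiles add { fastL4 } translate-address disabled translate-port disabled vlans-enabled vlans add { laptop_lan_vlan }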

 

LABBIGIP2

ltm virtual gre_fw_vs {
   destination 172.30.0.0:any
   ip-forward
   mask 255.255.255.0
   profiles {
       fastL4 { }
   }
   source 0.0.0.0/0
   translate-address disabled
   translate-port disabled
   vlans {
       gre_tunnel
   }
   vlans-enabled
}

 

Similar to LABBIGIP1, but the virtual server is listening on the tunnel gre_tunnel, as that is where the traffic arrives.

 

Testing

First, let me show that it works, by displaying the content when accessing the URL http://172.30.0.200/.

 

The IP 172.30.0.200 is LABServer3, a Linux server running Apache.
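
If you prefer the command line to a browser, a quick check from LABWin could be (curl ships with recent Windows versions):

curl -v http://172.30.0.200/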

 

The image comes from the post below, but I am just using the final HTML, not the iRules: http://irulesmagic.blogspot.com/2011/06/can-it-make-cup-of-coffee.html

When someone tells you that you can't make coffee with iRules, that is false, as my Aussie friend Kevin (https://devcentral.f5.com/s/profile/0051T000008u9sQQAQ ) proves.

 

Now let me show you the traffic from the laptop to the server and back.

I started with tcpdump using 0.0, which means all VLANs on the BIG-IP, but I could not see the GET from LABBIGIP1 to LABBIGIP2 via the tunnel.

 

I also tried tcpdump using the tunnel as the interface, but that only shows the inside of the tunnel, that is, the packet before it gets wrapped in the tunnel information. So, I ended up running tcpdump on the physical interfaces, to make sure I could show you all the information.

 

LABWin

Using Wireshark to capture the traffic.

There is no /favicon on the server, so the 404 for that request is normal.

 

LABBIGIP1

tcpdump commands:

tcpdump -nnvi 1.1:nnn -s0 \(host 172.20.0.10 and 172.30.0.200\) or \(host 198.18.0.1 and 198.18.0.2\) or \(host 172.18.0.1 and 172.18.0.2\) -w /shared/tmp/gre_bigip1_1.1.cap
tcpdump -nnvi 1.2:nnn -s0 \(host 172.20.0.10 and 172.30.0.200\) or \(host 198.18.0.1 and 198.18.0.2\) or \(host 172.18.0.1 and 172.18.0.2\) -w /shared/tmp/gre_bigip1_1.2.cap

 

I will show only the relevant parts, to avoid a lot of images.

Packet received from LABWin on LABBIGIP1 on VLAN laptop_lan_vlan (interface 1.1).

As you can see, there is no GRE layer.

 

Packet sent from LABBIGIP1 to LABBIGIP2 via the GRE tunnel (interface 1.2); as you can see, there are 2 IPv4 layers and a GRE layer.

The first IPv4 layer is the outside of the tunnel (the one selected), and the second is the inside of the tunnel.

 

Also, you can see the stats for the tunnel via this tmsh command:

tmsh show net tunnels tunnel gre_tunnel

 

To reset the stats use:

tmsh reset-stats net tunnels tunnel gre_tunnel

 

Before and after the traffic:

root@(LABBIGIP1)(cfg-sync Standalone)(Active)(/Common)(tmos)# show net tunnels tunnel gre_tunnel

---------------------------------
Net::Tunnel: gre_tunnel
---------------------------------
Incoming Discard Packets       0
Incoming Error Packets         0
Incoming Unknown Proto Packets 0
Outgoing Discard Packets       0
Outgoing Error Packets         0
HC Incoming Octets             0
HC Incoming Unicast Packets    0
HC Incoming Multicast Packets  0
HC Incoming Broadcast Packets  0
HC Outgoing Octets             0
HC Outgoing Unicast Packets    0
HC Outgoing Multicast Packets  0
HC Outgoing Broadcast Packets  0

root@(LABBIGIP1)(cfg-sync Standalone)(Active)(/Common)(tmos)# show net tunnels tunnel gre_tunnel

------------------------------------
Net::Tunnel: gre_tunnel
------------------------------------
Incoming Discard Packets          0
Incoming Error Packets            0
Incoming Unknown Proto Packets    0
Outgoing Discard Packets          0
Outgoing Error Packets            0
HC Incoming Octets             1.3K
HC Incoming Unicast Packets       6
HC Incoming Multicast Packets     0
HC Incoming Broadcast Packets     0
HC Outgoing Octets             1.1K
HC Outgoing Unicast Packets       8
HC Outgoing Multicast Packets     0
HC Outgoing Broadcast Packets     0

root@(LABBIGIP1)(cfg-sync Standalone)(Active)(/Common)(tmos)#

LABBIGIP2

Similar to LABBIGIP1; only the relevant parts are shown.

 

tcpdump command:

tcpdump -nnvi 1.1:nnn -s0 \(host 172.20.0.10 and 172.30.0.200\) or \(host 198.18.0.1 and 198.18.0.2\) or \(host 172.18.0.1 and 172.18.0.2\) -w /shared/tmp/gre_bigip2_1.1.cap
tcpdump -nnvi 1.2:nnn -s0 \(host 172.20.0.10 and 172.30.0.200\) or \(host 198.18.0.1 and 198.18.0.2\) or \(host 172.18.0.1 and 172.18.0.2\) -w /shared/tmp/gre_bigip2_1.2.cap

 

Packet received from LABBIGIP1 on LABBIGIP2 via the GRE tunnel (interface 1.2).


Packet sent from LABBIGIP2 to LABServer3 via VLAN server_lan_vlan (interface 1.1).


Before and after the traffic:

root@(LABBIGIP2)(cfg-sync Standalone)(Active)(/Common)(tmos)# show net tunnels tunnel gre_tunnel

---------------------------------
Net::Tunnel: gre_tunnel
---------------------------------
Incoming Discard Packets       0
Incoming Error Packets         0
Incoming Unknown Proto Packets 0
Outgoing Discard Packets       0
Outgoing Error Packets         0
HC Incoming Octets             0
HC Incoming Unicast Packets    0
HC Incoming Multicast Packets  0
HC Incoming Broadcast Packets  0
HC Outgoing Octets             0
HC Outgoing Unicast Packets    0
HC Outgoing Multicast Packets  0
HC Outgoing Broadcast Packets  0

root@(LABBIGIP2)(cfg-sync Standalone)(Active)(/Common)(tmos)# show net tunnels tunnel gre_tunnel

------------------------------------
Net::Tunnel: gre_tunnel
------------------------------------
Incoming Discard Packets          0
Incoming Error Packets            0
Incoming Unknown Proto Packets    0
Outgoing Discard Packets          0
Outgoing Error Packets            0
HC Incoming Octets             1.1K
HC Incoming Unicast Packets       8
HC Incoming Multicast Packets     0
HC Incoming Broadcast Packets     0
HC Outgoing Octets             1.3K
HC Outgoing Unicast Packets       6
HC Outgoing Multicast Packets     0
HC Outgoing Broadcast Packets     0

root@(LABBIGIP2)(cfg-sync Standalone)(Active)(/Common)(tmos)#

 

LABServer3

tcpdump command:

tcpdump -i ens192 -w gre_labserver3.cap

 

Normal traffic flow, matching what is seen on LABWin.

Load Balancing

If you want to have multiple tunnels, like a primary and a standby, or even an active/active scenario, you need LTM.

You will need to create a virtual server to do load balancing; then you can use all the magic from LTM, like priority groups.

I will document below just the changes necessary to go from the routing-only example above to a load balancing example.

 

LABBIGIP1

ltm virtual gre_vs {
   destination 172.20.0.100:http
   ip-protocol tcp
   mask 255.255.255.255
   pool gre_pool
   profiles {
       http { }
       tcp { }
   }
   source 0.0.0.0/0
   translate-address enabled
   translate-port enabled
}
ltm pool gre_pool {
   members {
       172.30.0.200:http {
           address 172.30.0.200
           session monitor-enabled
           state up
       }
   }
   monitor http
}
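
A sketch of the tmsh equivalents, pool first and then the virtual server, with the names and IPs from the listing above:

tmsh create ltm pool gre_pool members add { 172.30.0.200:http } monitor http
tmsh create ltm virtual gre_vs destination 172.20.0.100:http ip-protocol tcp mask 255.255.255.255 pool gre_pool profiles add { tcp http }

You can then check whether the monitor marks LABServer3 up with: tmsh show ltm pool gre_pool members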

 

Just a standard virtual server and a pool.

The pool member is LABServer3.

The initial route set up on LABWin for 172.30.0.0/24 is not necessary anymore.

The laptop now connects directly to the virtual server and has no visibility of the back-end server.

LABBIGIP1 does not need the forwarding virtual server anymore, as the virtual server above handles the traffic.

Nothing changes for LABBIGIP2.

LABServer3 either has a default gateway pointing to 172.30.0.2 (LABBIGIP2), which is the case in this lab setup, or needs a route for 172.18.0.0/30.

This is because LABBIGIP1 will monitor LABServer3 using the self IP linked to the tunnel, that is, IP 172.18.0.1.

 

Conclusion

GRE configuration on BIG-IP is not that complicated, but it does have some tricks. You need to have a self IP linked to the tunnel, and also a route that points to the tunnel.

 

Another trick that may catch people accustomed to setting up GRE on a router: even with the tunnel set up, you still need a virtual server so the BIG-IP accepts traffic destined for the tunnel, and another virtual server so traffic received from the tunnel is sent to the LAN network.

Published Apr 16, 2020
Version 1.0


5 Comments

  • Hello Leonardo_Souza! We are looking to implement this in an HA active/standby scenario, and are having some issues with using floating self IPs. Ideally, we would want to have a tunnel per device, and be able to perform failover.

    Is there any further documentation as it relates to HA, and some of the caveats of setting it up in HA?

     

    Thank you,

    Chad

  • Hi Lief,

    Why do we need 2 GRE tunnels here? (172.18.* & 198.18.*)

    Is it a must in every GRE tunnel setup on F5? Currently I'm doing a GRE tunnel between a Juniper and an F5, with one-way traffic from Juniper to F5 through GRE. From the F5 to its pool members it uses the default gateway. The F5 is used as a scrubber (AFM). We were only given a /24 IP segment here for all network element addressing.

    Appreciate your help. Thank you. Nice article, btw.

  • Hi Leonardo,

    Sorry for the typo, I should have addressed you instead of Lief in my previous comment.

    Regards.

  • The 198.18.0.0 network is just to simulate a network that could be the Internet or an internal network.

    The 172.18.0.0 network is the one used inside the tunnel.

    However, it is just one tunnel between those 2 F5s.