Setting Up A Basic Customer Edge To Run vk8s in F5 Distributed Cloud App Stack

This article walks through my experience standing up a Customer Edge for F5 Distributed Cloud and running a basic workload with vk8s on Distributed Cloud App Stack. I used a very basic scenario, so this should be helpful to anyone looking to get some basic testing done. I also ran through part of this on my live stream show, At The Edge.

Some requirements for this build:

If you are performing this install as you read this article, I would suggest kicking off a download of the Distributed Cloud image and then continuing to read.

Here is some terminology that we need to define before we start. I will define more as we progress.

  • F5 Distributed Cloud Platform - this is the platform as a whole and encompasses all the services that run on it
  • F5 Distributed Cloud Console (or shortened to XC Console) - this is the User Interface for the platform
  • F5 Distributed Cloud App Stack (or shortened to XC App Stack) - this is the service that allows you to run containers through a feature called vk8s
  • Regional Edge - this is a hosted environment where you can run various Distributed Cloud services
  • Customer Edge - this is a self-hosted environment where you can run various Distributed Cloud services

The first thing you will need to do is generate a "site token" from the Distributed Cloud Console. A site token authorizes an Edge node to join the Distributed Cloud Platform.

The most up to date steps can be followed here, under How To -> Site Management -> Create a VMware Site -> Deploy Site -> Create a Site Token.

Next, with everything downloaded, you will want to kick off the installation of the Distributed Cloud VM.

This continues to be documented in the above linked document under the Install the Node on VMware ESXi Hypervisor section.

A couple things to note:

  • Make sure you have entered your Site Token under Token when you go through the VMware node build
  • I connected my VM to my regular network which has a DHCP server
  • Make sure your Latitude and Longitude are entered. This will automatically home your node to the closest Regional Edges

Next, continue onto the Register the VMware Site section of the documentation. You should see a Pending Registration from your Distributed Cloud node which you can accept into your tenant.

From there, you should see your node.

You'll need to add a label to your node. This allows it to be identified and pulled into a Virtual Site later. Adding a label may not be entirely clear at first: under the list of labels, you'll see a grey drop-down box where you can just start typing. The label key to use is ves.io/siteName; I gave mine a value of blam-site.


From here, I created a Container Registry. I am just using Docker Hub, and so as not to repeat what's already documented, here is a link to the latest steps for this.
  • FQDN: docker.io
Following that, we want to create a Virtual Site. Virtual Sites allow you to group multiple sites; although I am only using one site in my case, I want to create this so I have the ability to scale. Here is the documentation with the steps.
 
I will note that this is where we will use the ves.io/siteName label that was configured previously, to bring in my NUC site, "blam-site".
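For context, the Virtual Site's site selector is what consumes that label. A hedged sketch of roughly what the selector configuration looks like (field names are approximated from the console, not an exact API object):

```yaml
# Approximate shape of a Virtual Site definition; field names are assumptions
site_type: CUSTOMER_EDGE
site_selector:
  expressions:
    - "ves.io/siteName in (blam-site)"   # matches the label added to the node earlier
```

Any Customer Edge site carrying a matching ves.io/siteName label gets pulled into this Virtual Site, which is what makes scaling to additional sites a label change rather than a reconfiguration.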
 

Next, I'm creating a vk8s site. This is where I'll add the Virtual Site that was just created. The documentation for this is found here. Note that within a couple minutes, a cluster is up and running on my NUC!

With the vk8s instance up and running, we can now deploy a workload to it. We're going to deploy an NGINX container. Here is the documentation for this step; however, I'm going to note some specifics of what I built.

We'll be configuring the Workload Type as just Service.

When you get to Container Configuration, use the Private Registry we configured before and specify nginx as the image name.

You'll also be asked which Customer Virtual Site to deploy to. The list should include the one that we've already created.

There is also an option to specify Port to Advertise. This would be 80/TCP.

And in this scenario, we're going to select Advertise in Cluster for where to advertise.
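For readers more familiar with plain Kubernetes, the workload settings above roughly correspond to a standard Deployment plus Service. This is a sketch of the equivalent manifests under that assumption, not the exact objects vk8s creates; the names come from the service name used later in this article:

```yaml
# Approximate Kubernetes equivalent of the vk8s Workload configured above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blam-dc-vk8s-nginx
  namespace: b-lam-devcentral
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx          # pulled via the Docker Hub registry configured earlier
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: blam-dc-vk8s-nginx     # referenced later by the Origin Pool as <service>.<namespace>
  namespace: b-lam-devcentral
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      protocol: TCP            # matches the 80/TCP "Port to Advertise"
```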
 
Upon finishing that, you'll see the workload listed and soon after, you should see an indication that there is a pod running.

Now with a Workload deployed, I want to get it out onto the internet. The way to do that is through the creation of an HTTP Load Balancer object. I'm biased, but these are pretty cool. Maybe from working with BIG-IP for so long, I can appreciate a simplistic approach to things. You can do things like rotating SSL certs from Let's Encrypt with a simple drop-down option, as well as HTTP-to-HTTPS redirect with a checkbox. Here's the documentation for it.

I will just note that I also went ahead and configured myself a subdomain from one of the domain names that I own. It's pretty straightforward to configure, at least on the Distributed Cloud side of it. The configuration steps with your Domain Registrar may vary.

With that configured, I can specify a hostname as per below and the HTTP Load Balancer will take care of setting up its own SSL/TLS certificate for it!

You'll be creating an Origin Pool as well. It's documented here, but I'll also note that in order to call upon the Workload we created, I configured the Origin Pool as follows:

  • Type of Origin Server: K8s Service name of Origin Server on given Sites
  • Service Name: blam-dc-vk8s-nginx.b-lam-devcentral (this one is easy to miss: you need to append the service name with the namespace that it resides in; in this case, my namespace is b-lam-devcentral)
  • Port: 80
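Putting those fields together, a hedged sketch of the Origin Pool configuration might look like the following; the field names are approximated from the console and the Virtual Site name is my own, so treat this as illustrative rather than an exact API object:

```yaml
# Approximate shape of the Origin Pool; field names are assumptions
origin_servers:
  - k8s_service:
      service_name: blam-dc-vk8s-nginx.b-lam-devcentral  # <service>.<namespace>
      site_locator:
        virtual_site: blam-virtual-site   # assumed name for the Virtual Site created earlier
port: 80                                  # the port the NGINX Service listens on
```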

Optionally, you can add a health check. I created a basic one that simply performs a GET /
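A basic HTTP health check like that one might be sketched as follows; again, the field names and interval values here are assumptions for illustration, not the exact schema:

```yaml
# Approximate shape of a basic HTTP health check; field names and values are assumptions
http_health_check:
  path: /                 # the probe performs a GET / against each origin
timeout: 3                # seconds before a probe is considered failed
interval: 15              # seconds between probes
unhealthy_threshold: 1    # consecutive failures before marking the origin down
healthy_threshold: 3      # consecutive successes before marking it up again
```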


And after that you'll want to configure the VIP for Advertise On Internet.
Once that is done, you'll see the objects which you can click to see statistics.
 

Also note that if you chose not to configure delegated DNS, you'll need to go under Manage -> HTTP Load Balancers. You will see the new objects, and they will list the CNAMEs for these objects. With that, you can go to your DNS provider and create a record that points to the CNAME you retrieved.
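In zone-file notation, that DNS record is a plain CNAME. The hostname below is a placeholder for the subdomain you chose, and the target is whatever value the console shows for your HTTP Load Balancer:

```
; Point your chosen hostname at the CNAME shown in the XC Console
app.example.com.   300   IN   CNAME   <cname-from-console>
```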

 
After all that, you should be able to type in the URL you configured and get the NGINX default page!

 

Published Mar 02, 2022
Version 1.0


5 Comments

  • I just noticed that I listed installing kubectl as a requirement. It's not, but I listed it because I was going to go into how to work with vk8s using kubectl. I'll cover that in another write-up!

  • Fantastic article and video. This is super helpful for anyone getting started with running k8s on CE.

  • Very Useful video, thank you Buulam for posting these steps and video.

    Is there any other article that states how to deploy a cluster on a CE? As far as I know, there are two modes: active/standby (VRRP) and all nodes active (ECMP BGP), but I couldn't find any articles going deep into the technical points.

     

  • This is great!  I was able to run this on KVM.