How to Split DNS with Managed Namespace on F5 Distributed Cloud (XC) Part 1 – DNS over HTTPS

Introduction

DNS, everyone’s least favorite infrastructure service. So simple, yet so hard. Simple because it’s really just some text files served up; so hard because if you get it wrong, everything breaks. And it really doesn’t require a ton of resources, so why use a lot? Containers, rulers of our age, everything must be a container!

Not really, but we are in a major shift from waterfall to modern architecture, and it’s handy to have something small that can be spun up in a lot of locations for redundancy and automated for our needs.

But also, if I don’t want to spin up servers or hardware, I probably don’t want to spin up a container infrastructure either. So, use F5 XC Managed K8s solutions…

F5 XC is a platform that not only provides native security solutions in any cloud or datacenter, but is also a compute platform like any other cloud service provider. Bring us all your containers to host and secure.

For this use-case we are going to use our Managed Namespace solution. It’s very similar to our Managed K8s solution, but more of a sandbox with hardened security policies.

Part 1 will focus on DNS over HTTPS and Part 2 will cover TCP/UDP; however, the initial deployment will set up all the ports and services needed for Part 2 now.

Managed Namespace - Sandbox Policies

Architecture

For this solution I went with Bind 9.19-dev, which seemed to have some issues with gRPC and HTTP/2 conversion. I was able to resolve that by putting NGINX in front of the DNS over HTTPS listener to proxy gRPC to HTTP/2; all other TCP/UDP traffic goes directly to Bind. Hopefully this is patched in future Bind releases. Otherwise, it’s just a standard TCP/UDP DNS deployment.

An important note about the architecture: the workloads can be deployed on Customer Edge Nodes or on F5-owned Regions. So if there is no desire to manage a node on-premises, or to manage / host k8s either on-prem or in a 3rd-party Cloud Service Provider, running on our Regions works perfectly fine and gives a tremendous amount of redundancy.

Managed Namespace Deployment

From within the XC Console, we need to ensure that there is a Managed Namespace deployed, so click on the Distributed Apps Tile.

Under Applications, select Virtual K8s.

From here, if there is not already a vk8s deployed in the namespace, deploy one now. We won’t be covering deploying virtual k8s in detail here, but it’s not too complex: click Add New, give it a name, select some virtual sites, leave service isolation disabled, and choose a default workload flavor.

Once the Managed Namespace (virtual k8s) is online, you can download the kubeconfig by clicking the ellipsis menu on the far right and selecting Kubeconfig.

For a more detailed walkthrough of creating a Managed Namespace, you can go to the F5 Tech Docs located here: https://docs.cloud.f5.com/docs/how-to/app-management/create-vk8s-obj
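
Once the kubeconfig is downloaded, point kubectl at it. A quick sketch (the file name below is just an example of wherever you saved it, and the namespace is the one used later in this article):

export KUBECONFIG=~/Downloads/vk8s-kubeconfig.yaml
kubectl get pods -n m-coleman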

Click-Ops Deployment

Since it would take up a ton of space, I will not cover Click-Ops deployment of workloads here. It may show up in a future article, but a detailed walkthrough can be found here today: https://docs.cloud.f5.com/docs/how-to/app-management/vk8s-workload

Kubectl Deployment

We WILL be covering deployment via kubectl with a manifest in this guide, so now we can actually start getting into it.

As detailed in the architecture, we are going to proxy requests to Bind via NGINX, so the first step is getting NGINX configured as that proxy. The full YAML was a bit long to post inline, so all sources are posted on GitHub, each snippet links to its specific section, and a full manifest is linked near the end of the article.

NGINX Config-Map

There are a couple of key differences to pay attention to when deploying to Managed Namespaces versus another k8s provider. The main one we care about now is annotations. In the context we will be using them, they determine where the configurations and workloads are deployed; in other scenarios they also cover things like workload flavors and other internal details.

  • ves.io/sites: determines the sites we want the objects deployed to. This can be a Customer Site, a Virtual Site, or all F5 XC Owned Regions.

In our nginx.conf, the configuration is standard as well, with a location added for a health check and some self-signed certs to force a secure channel.

If you need a quick command to generate a cert & key without searching:

openssl req -x509 -nodes -subj '/CN=bind9.local' -newkey rsa:4096 -keyout /etc/ssl/private/dns.key -out /etc/ssl/certs/dns.pem -sha256 -days 3650

Server Block & Upstream

# Upstream for the local DNS over HTTPS listener
upstream http2-doh {
    server 127.0.0.1:80;
}
server {
    # Plain HTTP listener on 8080 and TLS + HTTP/2 listener on 4443
    listen  8080      default_server;
    listen  4443      ssl http2;

    server_name  _;

    # TLS certificate chain and corresponding private key
    ssl_certificate     /etc/ssl/certs/dns.pem;
    ssl_certificate_key /etc/ssl/private/dns.key;

    # Proxy DoH requests to the upstream over HTTP/2
    location / {
        grpc_pass grpc://http2-doh;
    }

    # Simple health check endpoint
    location /health-check {
        add_header Content-Type text/plain;
        return 200 'what is up buttercup?!';
    }
}

Source: config-map.yml
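
For context, here is a rough sketch of how that nginx.conf gets wrapped into a ConfigMap carrying the ves.io/sites annotation; the object name is illustrative, and the authoritative version is the linked config-map.yml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
  annotations:
    ves.io/sites: system/coleman-azure,system/coleman-cluster-100,system/colemantest
data:
  nginx.conf: |
    # full nginx.conf from the server block above goes here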

Deployment

The deployment models in XC are pretty great: deploy to a cloud site, deploy to on-prem datacenters, deploy to our compute, or any combination. The services can then be published to the internet, to a cloud site, or to an on-prem site, with the same security model in every facility.

The deployment is pretty standard as well; the important pieces are:

  • ves.io/sites: important for the same reasons mentioned previously; it determines where the workloads will reside, with the same options as before.
  • Environment Variables are where we need to tweak the settings a bit. A full listing of the values can be seen here: https://github.com/Mikej81/docker-bind
  • Some of the values should be self-explanatory, but an important setting for a zone / A-record mapping is DNS_A.
  • If there is a desire to bring in full zone files, it is possible to do that via a FILE value or a ConfigMap for Bind, storing the zone file in the properly named path and mapping the volume. (Not covered here.)
  • We will also be mapping some example self-signed certificates, which are only required if encryption is desired all the way to the container / pod (see the sketch after the deployment example below).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bind-doh-dep
  labels:
    app: bind
  annotations:
    ves.io/sites: system/coleman-azure,system/coleman-cluster-100,system/colemantest
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bind
  template:
    metadata:
      labels:
        app: bind
    spec:
      containers:
      - name: bind
        image: mcoleman81/bind-doh
        env:
        - name: DOCKER_LOGS
          value: "1"
        - name: ALLOW_QUERY
          value: "any"
        - name: ALLOW_RECURSION
          value: "any"
        - name: DNS_FORWARDER
          value: "8.8.8.8, 8.8.4.4"
        - name: DNS_A
          value: domain1.com=68.183.126.197,domain2.com=68.183.126.197

Source: deployment.yml
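
As mentioned in the bullets above, one way to get the self-signed cert and key from the earlier openssl command into the pod is a TLS secret plus volume mounts. This is just a sketch; the secret name and file paths below are assumptions chosen to line up with the nginx.conf shown earlier, so refer to the full bind-manifest.yml for what is actually used.

# Create a TLS secret from the self-signed cert & key generated earlier
kubectl create secret tls dns-certs \
  --cert=/etc/ssl/certs/dns.pem \
  --key=/etc/ssl/private/dns.key \
  -n m-coleman

Then, in the deployment, the container would mount the secret at the paths NGINX expects:

        volumeMounts:
        - name: dns-certs
          mountPath: /etc/ssl/certs/dns.pem
          subPath: tls.crt
        - name: dns-certs
          mountPath: /etc/ssl/private/dns.key
          subPath: tls.key
      volumes:
      - name: dns-certs
        secret:
          secretName: dns-certs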

Services

We are almost done building out the manifest: we created the ConfigMap, we created the deployment, and now we just need to expose some services. The targetPorts need to be above 1024 in the Managed Namespace, so if you change them, keep that guideline in mind.

apiVersion: v1
kind: Service
metadata:
  name: bind-services
  annotations:
    ves.io/sites: system/coleman-azure
spec:
  type: ClusterIP
  selector:
    app: bind
  ports:
    - name: dns-udp
      port: 53
      targetPort: 5553
      protocol: UDP
    - name: dns-tcp
      port: 53
      targetPort: 5353
      protocol: TCP
    - name: dns-http
      port: 80
      targetPort: 8888
      protocol: TCP
    - name: nginx-http-listener
      port: 8080
      targetPort: 8080
      protocol: TCP
    - name: nginx-https-listener
      port: 4443
      targetPort: 4443
      protocol: TCP

Source: service.yml

Based on everything we have done, we know that the service name will be [servicename].[namespace created previously]; in my case that is bind-services.m-coleman. We will need that value in a few steps when creating our Origin Pool.

bind-manifest.yml

Putting it all together!

Full manifest can be found here: bind-manifest.yml

Apply!

kubectl apply -f bind-manifest.yml
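
Before moving on, it doesn’t hurt to confirm the workload actually rolled out (the namespace is the one from my tenant, so use your own):

kubectl get pods -n m-coleman
kubectl rollout status deployment/bind-doh-dep -n m-coleman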

Application Delivery & Load Balancers

HTTPS Origin

Now we can create an origin pool. Over on the left menu, under Manage, Load Balancers, click Origin Pools.

Let’s give our origin pool a name, and add some Origin Servers, so under Origin Servers, click Add Item.

In the Origin Server settings, we want to select K8s Service Name of Origin Server on given Sites as our type, and enter our service name in "servicename.namespace" form, which is the value we noted earlier (bind-services.m-coleman in my case).

For the Site, we select one of the sites we deployed the workload to, and under Select Network on the Site, we want to select vK8s Networks on the Site, then click Apply. 

Do this for each site we deployed to so we have several servers in our Origin Pool.

We also need to tweak the TLS settings, since traffic to the origin will be encrypted over 4443, but we don’t want to validate the certificates because, in my case, they are self-signed with low security settings; update this as needed.

Once everything is set right, click Save and Exit.

HTTPS Load Balancer

Now we need a load balancer. On the left menu bar, under Manage, select Load Balancers and click HTTP Load Balancers.

Click Add HTTP Load Balancer and let’s assign a name. This is another place where configurations will diverge; for ease of deployment I am going to use an HTTPS Load Balancer with Auto Generated Certificates, but you can use HTTPS with Custom Certificates as well.

Note: For HTTPS with Auto-Certificate, advertisement is Internet-only; Custom Certificate allows both Internet and internal advertising.

HSTS is optional, as are most of the options shown below aside from Load Balancer Type and HTTPS Port.

Under Origin, we Add Item and add the origin pool we created previously.

Under Other Settings we can configure how and where the service is advertised. If we are going to advertise this service to an internal network only, we would select Custom here, then click Configure.

An example of what that would look like: click Add Item under the Custom Advertise VIP Configuration menu, then select the type of site to advertise to, the type of interface to advertise on, and the specific site location.

Click Apply as needed, then Save and Exit.

Moment of Truth

There are several ways to test that everything is up and running; first, let’s make sure our services are up.

kubectl get services -n m-coleman

NAME            TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)                                  AGE
bind-services   ClusterIP   192.168.175.193   <none>        53/UDP,53/TCP,80/TCP,8080/TCP,4443/TCP   3h57m

Curl has built-in DNS over HTTPS support, so we can test via curl to see if our sites are resolving; first I’ll test one of our custom zones / A records.
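
A minimal example of what that curl test can look like. The load balancer hostname below is a placeholder for whatever your HTTPS Load Balancer advertises, and /dns-query is Bind’s default DoH endpoint; the verbose output shows the DoH lookup even if the HTTP request afterward goes nowhere interesting:

curl -v --doh-url https://doh.example.com/dns-query https://domain1.com/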

We are in business! We can also test with Firefox, Chrome, or any number of other tools (a couple of command-line examples follow the list below).

  • Dig
  • Dog
  • Kdig
  • Etc
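
For example, with a recent dig (BIND 9.18+) or kdig, something along these lines should work; again, the hostname is a placeholder for your load balancer:

dig +https @doh.example.com domain1.com A
kdig +https @doh.example.com domain1.com A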

In Part 2 we will cover publishing TCP and UDP.

