NGINX Unit running in Distributed Cloud vK8s

F5 Distributed Cloud provides a mechanism to easily deploy applications using virtual Kubernetes (vK8s) across a global network. This results in applications running closer to the end user.

This article will demonstrate creating an NGINX Unit container to run a simple server-side WebAssembly (Wasm) application and deliver it via Distributed Cloud Regional Edge (RE) sites. Distributed Cloud provides a vK8s infrastructure for deploying modern apps as well as load balancing and security services to securely and reliably deliver applications at the edge.


NGINX Unit Pod

A Kubernetes pod is the smallest execution unit that can be deployed in Kubernetes. A pod is a single instance of an application and may contain one or more containers. For this article, a single NGINX Unit container will make up the application pod, and replicas of the pod will be deployed to each of the configured Distributed Cloud Points of Presence (PoPs).

Building the NGINX Unit Container

The first step is to build a server-side WebAssembly (Wasm) application that Unit can execute. Instructions on creating a simple Hello World Wasm application are available here. After the Hello World Wasm component is built, an NGINX Unit container needs to be created to execute the application.
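For reference, the build typically boils down to the sketch below. Treat it as a sketch only: the repository URL, the cargo-component toolchain, and the output path are assumptions based on the Wasmtime hello-wasi-http example, so follow the linked instructions for the authoritative steps.

# Sketch only -- repo URL, toolchain, and output path are assumptions.
cargo install cargo-component
git clone https://github.com/sunfishcode/hello-wasi-http.git
cd hello-wasi-http
cargo component build --release
# Output path may differ by toolchain version; copy the component
# next to the Dockerfile used below.
cp target/wasm32-wasi/release/hello_wasi_http.wasm ../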

To build the NGINX Unit container, download the Unit Wasm Dockerfile that is available in the NGINX Unit GitHub repo. The Dockerfile needs to be modified to allow the container to run as a non-root user in the Distributed Cloud virtual Kubernetes (vK8s) environment. Below is the Dockerfile that I used for this demonstration.

FROM debian:bullseye-slim 

LABEL org.opencontainers.image.title="Unit (wasm)" 
LABEL org.opencontainers.image.description="Official build of Unit for Docker." 
LABEL org.opencontainers.image.url="https://unit.nginx.org" 
LABEL org.opencontainers.image.source="https://github.com/nginx/unit" 
LABEL org.opencontainers.image.documentation="https://unit.nginx.org/installation/#docker-images" 
LABEL org.opencontainers.image.vendor="NGINX Docker Maintainers <docker-maint@nginx.com>" 
LABEL org.opencontainers.image.version="1.32.1" 

RUN set -ex \ 
    && savedAptMark="$(apt-mark showmanual)" \ 
    && apt-get update \ 
    && apt-get install --no-install-recommends --no-install-suggests -y ca-certificates git build-essential libssl-dev libpcre2-dev curl pkg-config \ 
    && mkdir -p /usr/lib/unit/modules /usr/lib/unit/debug-modules \ 
    && mkdir -p /usr/src/unit \ 
    && mkdir -p /unit/var/lib/unit \ 
    && mkdir -p /unit/var/run \ 
    && mkdir -p /unit/var/log \ 
    && mkdir -p /unit/var/tmp \ 
    && mkdir /app \ 
    && chmod -R 777 /unit/var/lib/unit \ 
    && chmod -R 777 /unit/var/run \ 
    && chmod -R 777 /unit/var/log \ 
    && chmod -R 777 /unit/var/tmp \ 
    && chmod 777 /app \ 
    && cd /usr/src/unit \ 
    && git clone --depth 1 -b 1.32.1-1 https://github.com/nginx/unit \ 
    && cd unit \ 
    && NCPU="$(getconf _NPROCESSORS_ONLN)" \ 
    && DEB_HOST_MULTIARCH="$(dpkg-architecture -q DEB_HOST_MULTIARCH)" \ 
    && CC_OPT="$(DEB_BUILD_MAINT_OPTIONS="hardening=+all,-pie" DEB_CFLAGS_MAINT_APPEND="-Wp,-D_FORTIFY_SOURCE=2 -fPIC" dpkg-buildflags --get CFLAGS)" \ 
    && LD_OPT="$(DEB_BUILD_MAINT_OPTIONS="hardening=+all,-pie" DEB_LDFLAGS_MAINT_APPEND="-Wl,--as-needed -pie" dpkg-buildflags --get LDFLAGS)" \ 
    && CONFIGURE_ARGS_MODULES="--prefix=/usr \ 
        --statedir=/unit/var/lib/unit \ 
        --debug \ 
        --control=unix:/unit/var/run/control.unit.sock \ 
        --runstatedir=/unit/var/run \ 
        --pid=/unit/var/run/unit.pid \ 
        --logdir=/unit/var/log \ 
        --log=/unit/var/log/unit.log \ 
        --tmpdir=/unit/var/tmp \ 
        --openssl \ 
        --libdir=/usr/lib/$DEB_HOST_MULTIARCH" \ 
    && CONFIGURE_ARGS="$CONFIGURE_ARGS_MODULES \ 
         --njs" \ 
    && make -j $NCPU -C pkg/contrib .njs \ 
    && export PKG_CONFIG_PATH=$(pwd)/pkg/contrib/njs/build \ 
    && ./configure $CONFIGURE_ARGS --cc-opt="$CC_OPT" --ld-opt="$LD_OPT" --modulesdir=/usr/lib/unit/debug-modules --debug \ 
    && make -j $NCPU unitd \ 
    && install -pm755 build/sbin/unitd /usr/sbin/unitd-debug \ 
    && make clean \ 
    && ./configure $CONFIGURE_ARGS --cc-opt="$CC_OPT" --ld-opt="$LD_OPT" --modulesdir=/usr/lib/unit/modules \ 
    && make -j $NCPU unitd \ 
    && install -pm755 build/sbin/unitd /usr/sbin/unitd \ 
    && make clean \ 
    && apt-get install --no-install-recommends --no-install-suggests -y libclang-dev \ 
    && export RUST_VERSION=1.76.0 \ 
    && export RUSTUP_HOME=/usr/src/unit/rustup \ 
    && export CARGO_HOME=/usr/src/unit/cargo \ 
    && export PATH=/usr/src/unit/cargo/bin:$PATH \ 
    && dpkgArch="$(dpkg --print-architecture)" \ 
    && case "${dpkgArch##*-}" in \ 
        amd64) rustArch="x86_64-unknown-linux-gnu"; rustupSha256="0b2f6c8f85a3d02fde2efc0ced4657869d73fccfce59defb4e8d29233116e6db" ;; \ 
        arm64) rustArch="aarch64-unknown-linux-gnu"; rustupSha256="673e336c81c65e6b16dcdede33f4cc9ed0f08bde1dbe7a935f113605292dc800" ;; \ 
        *) echo >&2 "unsupported architecture: ${dpkgArch}"; exit 1 ;; \ 
        esac \ 
    && url="https://static.rust-lang.org/rustup/archive/1.26.0/${rustArch}/rustup-init" \ 
    && curl -L -O "$url" \ 
    && echo "${rustupSha256} *rustup-init" | sha256sum -c - \ 
    && chmod +x rustup-init \ 
    && ./rustup-init -y --no-modify-path --profile minimal --default-toolchain $RUST_VERSION --default-host ${rustArch} \ 
    && rm rustup-init \ 
    && rustup --version \ 
    && cargo --version \ 
    && rustc --version \ 
    && make -C pkg/contrib .wasmtime \ 
    && install -pm 755 pkg/contrib/wasmtime/target/release/libwasmtime.so /usr/lib/$(dpkg-architecture -q DEB_HOST_MULTIARCH)/ \ 
    && ./configure $CONFIGURE_ARGS_MODULES --cc-opt="$CC_OPT" --modulesdir=/usr/lib/unit/debug-modules --debug \ 
    && ./configure wasm --include-path=`pwd`/pkg/contrib/wasmtime/crates/c-api/include --lib-path=/usr/lib/$(dpkg-architecture -q DEB_HOST_MULTIARCH)/ && ./configure wasm-wasi-component \ 
    && make -j $NCPU wasm-install wasm-wasi-component-install \ 
    && make clean \ 
    && ./configure $CONFIGURE_ARGS_MODULES --cc-opt="$CC_OPT" --modulesdir=/usr/lib/unit/modules \ 
    && ./configure wasm --include-path=`pwd`/pkg/contrib/wasmtime/crates/c-api/include --lib-path=/usr/lib/$(dpkg-architecture -q DEB_HOST_MULTIARCH)/ && ./configure wasm-wasi-component \ 
    && make -j $NCPU wasm-install wasm-wasi-component-install \ 
    && cd \ 
    && rm -rf /usr/src/unit \ 
    && for f in /usr/sbin/unitd /usr/lib/unit/modules/*.unit.so; do \ 
        ldd $f | awk '/=>/{print $(NF-1)}' | while read n; do dpkg-query -S $n; done | sed 's/^\([^:]\+\):.*$/\1/' | sort | uniq >> /requirements.apt; \ 
        done \ 
    && apt-mark showmanual | xargs apt-mark auto > /dev/null \ 
    && { [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; } \ 
    && /bin/true \ 
    && mkdir -p /var/lib/unit/ \ 
    && mkdir -p /docker-entrypoint.d/ \ 
    && apt-get update \ 
    && apt-get --no-install-recommends --no-install-suggests -y install curl $(cat /requirements.apt) \ 
    && apt-get purge -y --auto-remove build-essential \ 
    && rm -rf /var/lib/apt/lists/* \ 
    && rm -f /requirements.apt \ 
    && ln -sf /dev/stderr /var/log/unit.log 

WORKDIR /app 

COPY --chmod=755 hello_wasi_http.wasm /app 
COPY ./*.json /docker-entrypoint.d/ 
COPY --chmod=755 docker-entrypoint.sh /usr/local/bin/ 
COPY welcome.* /usr/share/unit/welcome/ 

STOPSIGNAL SIGTERM 

ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"] 
EXPOSE 8000 
CMD ["unitd", "--no-daemon", "--control", "unix:/unit/var/run/control.unit.sock"]

The changes to the Dockerfile are needed because the non-root user cannot write to the /var directory. Rather than modify the permissions on /var and its subdirectories, this Dockerfile creates a /unit directory that is used for storing process information and logs.
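Once the image is built (the docker build step appears later in this article), a quick sanity check I suggest is to run the container with an arbitrary non-root UID, similar to what vK8s assigns at runtime, and confirm the /unit directories are writable. Because the entrypoint simply exec's any non-unitd command, ls runs directly:

# Run as an arbitrary non-root UID and inspect the /unit directories
docker run --rm --user 12345:0 hello-wasi-xc:1.0 ls -ld /unit/var/run /unit/var/log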

The docker-entrypoint.sh script also needed to be modified to use the /unit directory structure.  The updated docker-entrypoint.sh script is shown below.

#!/bin/sh 

set -e 

WAITLOOPS=5 
SLEEPSEC=1 

curl_put() 
{ 
    RET=$(/usr/bin/curl -s -w '%{http_code}' -X PUT --data-binary @$1 --unix-socket /unit/var/run/control.unit.sock http://localhost/$2) 
    RET_BODY=$(echo $RET | /bin/sed '$ s/...$//') 
    RET_STATUS=$(echo $RET | /usr/bin/tail -c 4) 
    if [ "$RET_STATUS" -ne "200" ]; then 
        echo "$0: Error: HTTP response status code is '$RET_STATUS'" 
        echo "$RET_BODY" 
        return 1 
    else 
        echo "$0: OK: HTTP response status code is '$RET_STATUS'" 
        echo "$RET_BODY" 
    fi 
    return 0 
} 

if [ "$1" = "unitd" ] || [ "$1" = "unitd-debug" ]; then 
    if /usr/bin/find "/unit/var/lib/unit/" -mindepth 1 -print -quit 2>/dev/null | /bin/grep -q .; then 
        echo "$0: /unit/var/lib/unit/ is not empty, skipping initial configuration..." 
    else 
        echo "$0: Launching Unit daemon to perform initial configuration..." 
        /usr/sbin/$1 --control unix:/unit/var/run/control.unit.sock 

        for i in $(/usr/bin/seq $WAITLOOPS); do 
            if [ ! -S /unit/var/run/control.unit.sock ]; then 
                echo "$0: Waiting for control socket to be created..." 
                /bin/sleep $SLEEPSEC 
            else 
                break 
            fi 
        done 
        # even when the control socket exists, it does not mean unit has finished initialisation 
        # this curl call will get a reply once unit is fully launched 
        /usr/bin/curl -s -X GET --unix-socket /unit/var/run/control.unit.sock http://localhost/
 
        if /usr/bin/find "/docker-entrypoint.d/" -mindepth 1 -print -quit 2>/dev/null | /bin/grep -q .; then 
            echo "$0: /docker-entrypoint.d/ is not empty, applying initial configuration..." 

            echo "$0: Looking for certificate bundles in /docker-entrypoint.d/..." 
            for f in $(/usr/bin/find /docker-entrypoint.d/ -type f -name "*.pem"); do 
                echo "$0: Uploading certificates bundle: $f" 
                curl_put $f "certificates/$(basename $f .pem)" 
            done
 
            echo "$0: Looking for JavaScript modules in /docker-entrypoint.d/..." 
            for f in $(/usr/bin/find /docker-entrypoint.d/ -type f -name "*.js"); do 
                echo "$0: Uploading JavaScript module: $f" 
                curl_put $f "js_modules/$(basename $f .js)" 
            done
 
            echo "$0: Looking for configuration snippets in /docker-entrypoint.d/..." 
            for f in $(/usr/bin/find /docker-entrypoint.d/ -type f -name "*.json"); do 
                echo "$0: Applying configuration $f"; 
                curl_put $f "config" 
            done
 
            echo "$0: Looking for shell scripts in /docker-entrypoint.d/..." 
            for f in $(/usr/bin/find /docker-entrypoint.d/ -type f -name "*.sh"); do 
                echo "$0: Launching $f"; 
                "$f" 
            done
 
            # warn on filetypes we don't know what to do with 
            for f in $(/usr/bin/find /docker-entrypoint.d/ -type f -not -name "*.sh" -not -name "*.json" -not -name "*.pem" -not -name "*.js"); do 
                echo "$0: Ignoring $f"; 
            done 
        else 
            echo "$0: /docker-entrypoint.d/ is empty, creating 'welcome' configuration..." 
            curl_put /usr/share/unit/welcome/welcome.json "config" 
        fi
 
        echo "$0: Stopping Unit daemon after initial configuration..." 
        kill -TERM $(/bin/cat /unit/var/run/unit.pid) 

        for i in $(/usr/bin/seq $WAITLOOPS); do 
            if [ -S /unit/var/run/control.unit.sock ]; then 
                echo "$0: Waiting for control socket to be removed..." 
                /bin/sleep $SLEEPSEC 
            else 
                break 
            fi 
        done 
        if [ -S /unit/var/run/control.unit.sock ]; then 
            kill -KILL $(/bin/cat /unit/var/run/unit.pid) 
            rm -f /unit/var/run/control.unit.sock 
        fi
 
        echo 
        echo "$0: Unit initial configuration complete; ready for start up..." 
        echo 
    fi 
fi 

exec "$@"

The Dockerfile copies all files ending in .json to a directory named /docker-entrypoint.d.  The docker-entrypoint.sh script checks the /docker-entrypoint.d directory for configuration files that are used to configure NGINX Unit.  The following config.json file was created in the same directory as the Dockerfile and docker-entrypoint.sh script.

{
    "listeners": {
        "*:8000": {
            "pass": "applications/wasm"
        }
    },
    "applications": {
        "wasm": { 
            "type": "wasm-wasi-component",
            "component": "/app/hello_wasi_http.wasm"
        }
    }
}

This config file instructs NGINX Unit to listen on port 8000 and pass any client request to the Wasm application component located at /app/hello_wasi_http.wasm.
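Because the Dockerfile places Unit's control socket at /unit/var/run/control.unit.sock, the active configuration can later be read back (or modified) from inside a running container:

# Read back the configuration Unit is actually running with
curl --unix-socket /unit/var/run/control.unit.sock http://localhost/config/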

The last step before building the container is to copy the Wasm application component into the same directory as the Dockerfile, docker-entrypoint.sh, and config.json files. With these four files in the same directory, the docker build command is used to create the container image:

docker build --no-cache --platform linux/amd64 -t hello-wasi-xc:1.0 .

Here I use the --platform option to specify that the container will run on a linux/amd64 platform.  I also use -t to give the image a name. The portion of the name after the ":" can be used for versioning.  For example, if I added a new feature to my application, I could create a new image with -t hello-wasi-xc:1.1 to represent version 1.1 of the hello-wasi-xc application. 
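Before pushing the image to a registry, it is worth a quick local smoke test to confirm Unit serves the Wasm app; port 8000 matches the EXPOSE directive and the listener in config.json:

docker run --rm -p 8000:8000 hello-wasi-xc:1.0

# in a second terminal:
curl http://localhost:8000/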

Once the image is built, it can be uploaded to the container registry of your choice. I have access to an Azure Container Registry, so that is what I chose to use.  Distributed Cloud supports pulling images from both public and private container registries.  To push the image to my registry, I had to first tag the image with the registry location ({{registry_name}} should be replaced with the name of your registry):

docker tag hello-wasi-xc:1.0 {{registry_name}}/examples/hello-wasi-xc:1.0

Next, I logged into my Azure registry:

az login
az acr login --name {{registry_name}}

I then pushed the image to the registry:

docker push {{registry_name}}/examples/hello-wasi-xc:1.0
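Optionally, the push can be verified by listing the repository's tags; this az CLI step is a convenience I added, not a required part of the workflow:

az acr repository show-tags --name {{registry_name}} --repository examples/hello-wasi-xc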

F5 Distributed Cloud Virtual Kubernetes

F5 Distributed Cloud Services support a Kubernetes-compatible API for centralized orchestration of applications across a fleet of sites (customer sites or F5 Distributed Cloud Regional Edges). Distributed Cloud utilizes a distributed control plane to manage scheduling and scaling of applications across multiple sites.
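Because the API is Kubernetes-compatible, standard tooling works against it. After downloading a kubeconfig for the vK8s cluster from the Distributed Cloud console, ordinary kubectl commands run as they would against any cluster (the kubeconfig filename below is a placeholder):

kubectl --kubeconfig ./ves-vk8s-kubeconfig.yaml get pods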

Kubernetes Objects

Virtual Kubernetes supports the following Kubernetes objects: Deployment, StatefulSet, Job, CronJob, DaemonSet, Service, ConfigMap, Secret, PersistentVolumeClaim, ServiceAccount, Role, and RoleBinding. For this article, I will focus on the Deployment object.

A Deployment is commonly used for stateless applications such as web servers and front-end web UIs. A Deployment works well for this use case because the NGINX Unit container serves a stateless web application.
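To make the Deployment object concrete, the manifest below is roughly the kind of Deployment that the Workload configuration described later produces under the covers. It is a hand-written sketch with placeholder names and kubeconfig path, not something exported from Distributed Cloud:

kubectl --kubeconfig ./ves-vk8s-kubeconfig.yaml apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-wasi-xc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-wasi-xc
  template:
    metadata:
      labels:
        app: hello-wasi-xc
    spec:
      containers:
      - name: hello-wasi-xc
        image: {{registry_name}}/examples/hello-wasi-xc:1.0
        ports:
        - containerPort: 8000
EOF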

Workload Object

A workload within Distributed Cloud is used to configure and deploy the components of an application in Virtual Kubernetes. A workload encapsulates the configuration of the Kubernetes workload, storage, and network objects it uses (Deployments, StatefulSets, Jobs, PersistentVolumeClaims, ConfigMaps, Secrets, and Services), along with where the workload is deployed and how it is advertised using L7 or L4 load balancers. Within Distributed Cloud there are four types of workloads: Simple Service, Service, Stateful Service, and Job.

Namespaces

In Distributed Cloud, tenant configuration objects are grouped under namespaces.  Namespaces can be thought of as administrative domains.  Within each Namespace, a user can create an object called vK8s and use that for application management.  Each Namespace can have a maximum of one vK8s object.

Distributed Cloud vK8s Configuration

vK8s is configured under the Distributed Apps tile within the Distributed Cloud console.

After clicking on that tile, I created a new Virtual K8s by clicking Add Virtual K8s from the Virtual K8s menu under Applications.

This brings up the Virtual K8s configuration form. I provided a name for my vK8s site and clicked Save and Exit.

This initiates the vK8s build. 

Deploy the Application

Once the vK8s cluster is ready, I click on the name of my vK8s to configure the deployment of my application.

For this demo, I am going to deploy my application via Workloads. I click on Workloads and then Add vK8s Workload.

This brings up the Workload configuration form. I give my Workload a name, select Service as the Workload type, and click Configure. Using Service allows me more granular control over how my application is advertised.

Next, I click Add Item under Containers to tell Distributed Cloud where it can pull my container image from.

In the resulting form, I supply a name for my container, supply the public registry FQDN along with the container name and version, and click Apply.

Next, I configure where to advertise the application. I want to manually configure my HTTP LB, so I choose Advertise In Cluster. Manually configuring the LB allows me to configure additional options such as a Web Application Firewall (WAF).

Within the Advertise in Cluster form, I specify port 8000, because that is the port my container is listening on.

I click Apply and then Save and Exit.
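With the Workload saved, the same vK8s kubeconfig from earlier can confirm that a replica of the pod is running on each RE in the virtual site:

kubectl --kubeconfig ./ves-vk8s-kubeconfig.yaml get pods -o wide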

Create an HTTP Load Balancer to Advertise the Application

The next step is to create an HTTP Load Balancer to expose the application to the Internet.  This can be done by clicking Manage, selecting Load Balancers, and clicking on HTTP Load Balancers.

Next, click on Add HTTP Load Balancer to create a new load balancer.

I gave the LB a Name, Domain, and Load Balancer Type, and checked the box to Automatically Manage DNS Records.

Then I configured the Origin Pool by clicking Add Item in the Origin Pool section and then clicking Add Item under the Origin Pool.

On the resulting form, I created a name for my Origin Pool and then clicked Add Item under Origin Servers to add an Origin Server.

In the Origin Server form, I selected K8s Service Name of Origin Server on given Sites as the server type, and for the Service Name I provided my vK8s workload name along with my namespace.

I also selected Virtual Site and created a Virtual Site for the REs I wanted to use to access my application and clicked Continue. 

Back on the Origin Server form, I selected vK8s Networks on Site from the Select Network on the site drop-down and then clicked Apply.

On the Origin Pool form, I specified port 8000 for my origin server Port and then clicked Continue.

I then clicked Apply to return to the LB configuration form.

I added a WAF policy to protect my application and then clicked Save and Exit.

At this point my application is now accessible on the domain name I specified in the HTTP Load Balancer config (http://aconley-wasm.amer-ent.f5demos.com/).
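A quick request against that domain confirms the Wasm application is being served through the load balancer:

curl http://aconley-wasm.amer-ent.f5demos.com/
# expect the Hello World response produced by hello_wasi_http.wasm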

Conclusion

This is a basic demo of using NGINX Unit and Distributed Cloud Virtual Kubernetes to deploy a server-side Wasm application. The principles in this demo could be expanded to build more complex applications.

Published Oct 30, 2024