How I did it - “Delivering Kasm Workspaces three ways”
Securing modern, containerized platforms like Kasm Workspaces requires a robust and multi-faceted approach to ensure performance, reliability, and data protection. In this edition of "How I did it," we'll see how F5 technologies can enhance the security and scalability of Kasm Workspaces deployments. We'll start by detailing how F5 BIG-IP TMOS can fortify network and application traffic; then move to securing Kubernetes-based Kasm Workspaces with the NGINX Plus Ingress Controller. Finally, we'll demonstrate how F5 Distributed Cloud Services can deliver a comprehensive solution for secure and efficient application delivery.
Kasm Workspaces
Kasm Workspaces is a containerized streaming platform designed for secure, web-based access to desktops, applications, and web browsing. It leverages container technology to deliver virtualized environments directly to users' browsers, enhancing security, scalability, and performance. Commonly used for remote work, cybersecurity, and DevOps workflows, Kasm Workspaces provides a flexible and customizable solution for organizations needing secure and efficient access to virtual resources.
Let me count the ways....
As noted in the Kasm documentation, the Kasm Workspaces Web App Role servers should not be exposed directly to the public. To address this requirement, F5 provides a variety of solutions to secure and deliver Kasm web servers. For the remainder of this article, we'll take a look at three of these solutions.
Kasm Workspaces with F5 BIG-IP and FAST Templates
For the following walkthrough, I utilized the F5 BIG-IP (ver. 17.1.1.4, Build 0.0.9) to deliver and secure my on-premises Kasm Workspaces installation. While I could configure the various BIG-IP resources (virtual server, pool members, profiles, etc.) manually, I elected to utilize the BIG-IP Automation Toolchain to greatly simplify my deployment.
The F5 BIG-IP Automation Toolchain is a suite of tools designed to automate the deployment, configuration, and management of F5 BIG-IP devices. It enables efficient and consistent management through the use of declarative APIs, templates, and integrations with popular automation frameworks. Specifically, I used AS3 and F5 Application Services Templates (FAST). FAST templates are predefined configurations that streamline the deployment and management of applications by providing consistent and repeatable setups.
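To give a feel for what the FAST template produces under the covers, here is a minimal AS3 declaration for a Kasm-style HTTPS service. This is a sketch only; the tenant name, addresses, and certificate references are placeholders, and the template-generated declaration is considerably more complete.

```json
{
  "class": "ADC",
  "schemaVersion": "3.50.0",
  "Kasm_Tenant": {
    "class": "Tenant",
    "kasm_app": {
      "class": "Application",
      "template": "https",
      "serviceMain": {
        "class": "Service_HTTPS",
        "virtualAddresses": ["10.1.10.100"],
        "virtualPort": 443,
        "pool": "kasm_pool",
        "serverTLS": "webtls"
      },
      "kasm_pool": {
        "class": "Pool",
        "members": [{
          "servicePort": 443,
          "serverAddresses": ["10.1.20.50"]
        }]
      },
      "webtls": {
        "class": "TLS_Server",
        "certificates": [{ "certificate": "webcert" }]
      },
      "webcert": {
        "class": "Certificate",
        "certificate": { "bigip": "/Common/default.crt" },
        "privateKey": { "bigip": "/Common/default.key" }
      }
    }
  }
}
```

Posting a declaration like this to the AS3 endpoint creates the tenant (partition), virtual server, pool, and TLS profile in one idempotent operation; the FAST template simply renders a declaration of this shape from the form fields described below.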
As a prerequisite, I downloaded and installed the above packages (delivered as RPMs) onto the BIG-IP (see below).
With the RPMs installed, I was ready to deploy my application. From the side menu bar of the BIG-IP UI I navigated to 'iApps' --> 'Application Services' --> 'Applications LX' and selected 'F5 Application Services Templates'.
From the provided templates, I selected the 'HTTP Application Template', (see below). The rest was a simple matter of completing and deploying the template.
I needed to provide a tenant (partition) name, application name, VIP address, and listening port.
I selected a previously installed certificate/key combination and enabled client-side TLS. Since I'm using SAML federation between my Kasm Workspaces deployment and Microsoft Entra ID, I elected to enable TLS for server-side connections as well. Additionally, I provided the backend Kasm server address and port (see below).
I created a custom HTTPS health monitor and associated it with the Kasm backend pool. The send string makes a GET request to the Kasm healthcheck API (see below).
- Send String = GET /api/__healthcheck\r\n
- Receive String = OK
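For reference, the equivalent monitor can also be created outside the template from the BIG-IP command line. A tmsh sketch (the monitor name is arbitrary):

```shell
tmsh create ltm monitor https kasm_https_monitor send "GET /api/__healthcheck\r\n" recv "OK"
```

The monitor marks a pool member up only when the Kasm healthcheck API answers with "OK", so a failed Web App Role server is removed from rotation automatically.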
I enabled and associated a template-generated WAF policy, enabled BOT defense and configured logging options.
With the highlighted fields completed, I navigated back to the top of the template and selected 'Deploy'.
With the template successfully deployed, my application was ready to test.
Video Walkthrough
Want to get a feel for it before trying yourself? The video below provides a step-by-step walkthrough of the above deployment.
Kasm Workspaces running on K8s with NGINX Plus Ingress Controller
Both Kasm and F5 provide Helm charts for simplified deployments of Kasm Workspaces and NGINX Ingress Controller on Kubernetes.
Create Kubernetes Secrets
From the MyF5 Portal, I navigated to my subscription details and downloaded the relevant .JWT file. With the JWT token in hand, I created the Docker registry secret. This will be used to pull the NGINX Plus Ingress Controller image from the private registry.
kubectl create secret docker-registry regcred --docker-server=private-registry.nginx.com --docker-username=<JWT token> --docker-password=none
I needed to create a Kubernetes TLS secret to use with my NGINX-hosted endpoint (kasmn.f5demo.net). I used the command below to create the secret.
kubectl create secret tls tls-secret --cert=/users/coward/certificates/combined-cert.pem --key=/users/coward/certificates/combined-key.pem
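If a CA-issued certificate isn't on hand for lab testing, a self-signed pair can be generated first. This is a sketch; the hostname and file names are placeholders, and a self-signed certificate should only be used for testing.

```shell
# Generate a throwaway self-signed certificate/key pair (lab use only).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=kasmn.f5demo.net" \
  -keyout combined-key.pem -out combined-cert.pem

# The resulting files can then be supplied to 'kubectl create secret tls'
# exactly as shown above.
```
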
Once created, I used the command 'kubectl get secret' to verify the secrets were successfully created.
Update K8s Custom Resource Definitions
I ran the following command to update the CRDs required to deploy and operate the NGINX Ingress Controller.
kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/v3.7.2/deploy/crds.yaml
Deploy NGINX Plus Ingress Controller
Prior to deploying the NGINX Helm chart, I needed to navigate to the repo folder and modify a few settings in the 'values.yaml' file. To deploy NGINX Plus Ingress Controller with NGINX App Protect (NAP), I specified the Plus repo image, set the 'nginxplus' flag to true, and enabled NGINX App Protect.
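The relevant 'values.yaml' settings looked roughly like this. A sketch only; the image repository path and tag vary by subscription and release, and the remaining chart defaults are left in place.

```yaml
controller:
  nginxplus: true
  appprotect:
    enable: true
  image:
    repository: private-registry.nginx.com/nginx-ic-nap/nginx-plus-ingress
    tag: 3.7.2
  serviceAccount:
    imagePullSecretName: regcred
```

The 'imagePullSecretName' ties the deployment back to the 'regcred' Docker registry secret created earlier, allowing the Plus image to be pulled from the private registry.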
I used the below command to deploy the Helm chart.
helm install nginx .
Once deployed, I used the command 'kubectl get svc' to view the NGINX service and capture the assigned external IP (see below). For this example, the DNS entry 'kasmn.f5demo.net' was updated to reflect the assigned IP address.
Deploy Kasm Workspaces
For this example, I have deployed an Azure AKS cluster to host my Kasm Workspaces application. Prior to deploying the Kasm Helm chart, I created a file in the 'templates' directory defining a VirtualServer resource. The VirtualServer resource defines load balancing configuration for a domain name, such as f5demo.net and maps the backend K8s service with the NGINX Ingress Controller.
I navigated to the 'templates' directory and created the file (virtualserver.yaml) using the contents below.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: {{ .Values.kasmApp.name | default "kasm" }}-virtualserver
  namespace: {{ .Values.global.namespace | default .Release.Namespace }}
  labels:
    app.kubernetes.io/name: {{ .Values.kasmApp.name }}-virtualserver
{{- include "kasm.defaultLabels" . | indent 4 }}
spec:
  host: {{ .Values.global.hostname | quote }}
  tls:
    secret: tls-secret
  gunzip: on
  upstreams:
  - name: kasm-proxy
    service: kasm-proxy
    port: 8080
  routes:
  - path: /
    action:
      pass: kasm-proxy
With the file created and saved, I was ready to deploy the Kasm Helm chart.
I navigated to the parent repo folder and used the following command to deploy the chart.
helm install kasm kasm-single-zone
With the deployment complete, I was ready to test access to the Kasm Workspaces UI.
Video Walkthrough
The video below provides a step-by-step walkthrough of the above deployment.
Bringing it all together with F5 Distributed Cloud Services
F5 Distributed Cloud Services (F5 XC) facilitates the deployment of applications across on-premises, cloud, and colocation facilities by offering a unified platform that integrates networking, security, and application delivery services. This approach ensures consistent performance, security, and management regardless of the deployment environment, enabling seamless hybrid and multi-cloud operations.
For this demonstration, I used F5 XC to secure and globally publish both my on-premises and Azure AKS-hosted Kasm workloads behind a single HTTP load balancer.
Customer Edge (CE) Sites
To publish my Kasm Workspaces application with F5 XC, I first needed to establish secure connectivity between my on-premises and Azure AKS environments and the F5 Distributed Cloud. To accomplish this, I deployed CE devices and created a Customer Edge site in each location.
Origin Pool
I created two origin pools, each with a custom health check, to associate with the load balancer. The first origin pool references the Kasm Workspaces Web App server located on-premises. As shown below, the server sits behind, and is reached via, the Customer Edge site connection. Additionally, as in the BIG-IP deployment scenario, I configured the connection to the backend server to use TLS.
I created a second origin pool referencing the AKS-hosted workload (see below). I used K8s Service Discovery to expose the kasm-proxy K8s service.
Custom Health Check
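As in the BIG-IP deployment, the custom health check targets the Kasm healthcheck API. Expressed as an F5 XC healthcheck object, the relevant fields look roughly like the following. This is a sketch; the object name and threshold/timing values are placeholders chosen for illustration.

```yaml
metadata:
  name: kasm-healthcheck
spec:
  http_health_check:
    path: /api/__healthcheck
    use_origin_server_name: {}
  healthy_threshold: 3
  unhealthy_threshold: 1
  interval: 15
  timeout: 3
```

Attaching this health check to both origin pools ensures that an unhealthy Kasm Web App server, whether on-premises or in AKS, is taken out of rotation before client traffic reaches it.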
HTTP Load Balancer
With my origin pools created, I configured the load balancer. Similar to a BIG-IP virtual server, I used the HTTP load balancer to create a public-facing endpoint, associate backend origin pool(s) and apply various security policies and profiles.
As shown below, I provided the load balancer's name and domain name. Additionally, I enabled automatic certificate generation for the specified domain, enabled HTTP-to-HTTPS redirection, and specified the port of the backend server(s).
I associated both origin pools. Incoming connections will be directed to both my on-premises and AKS-hosted Kasm Workspaces servers based on the selected load balancing algorithm. Optionally, I could modify the weights and priorities of my backend pools.
For additional security, I enabled WAF and specified a policy. Additionally, I enabled API discovery and DDoS protection.
With respect to VIP advertisement, I elected to publish my Kasm Workspaces deployment publicly.
Once configured and saved (and after allowing a few minutes to deploy), the application was ready for testing.
Video Walkthrough
The video below provides a step-by-step walkthrough of the above deployment.