How To Run Ollama On F5 AppStack With An NVIDIA GPU In AWS
I finally got a chance last week to sit down with MichaelatF5 for a show-and-tell: Ollama running on F5 Distributed Cloud (XC) AppStack with an NVIDIA Tesla T4 GPU instance in AWS. I'd already seen Preston_Ashworth running Ollama on a Customer Edge (CE) with no GPU, but seeing it with full driver support (coming in the next XC release as of this writing) was another notch up that our partners and customers want us to...display (#punintended).
This video walks you through, soup to nuts, how to configure and install Ollama on an F5 Distributed Cloud CE running AppStack. Note that it assumes you've already configured your AppStack environment to the point where it will accept a kubectl apply. The video is chaptered, so here's a peek before the link:
- 00:00 Introduction
- 04:23 Introducing Ollama
- 05:37 Selecting a model
- 08:27 Customizing and Installing Ollama
- 11:59 MultiCloud walkthrough
- 12:43 Adding WAAP
- 14:27 Using Ollama with Mistral and Gemma
- 21:15 WAAP alerts on a security threat in conversation
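For a sense of what the "accept a kubectl apply" prerequisite leads to, here is a minimal sketch of a Deployment and Service for Ollama on a GPU node. To be clear, this is an illustration under assumptions, not the manifest used in the video: the names, namespace, image tag, and GPU count are all hypothetical, and requesting `nvidia.com/gpu` presupposes the NVIDIA device plugin is available on the CE node (the driver support discussed above).

```yaml
# Hypothetical sketch — resource names and image tag are assumptions,
# not the exact manifest shown in the video.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ollama
  template:
    metadata:
      labels:
        app: ollama
    spec:
      containers:
      - name: ollama
        image: ollama/ollama:latest
        ports:
        - containerPort: 11434   # Ollama's default API port
        resources:
          limits:
            nvidia.com/gpu: 1    # lands the pod on the Tesla T4 node
---
apiVersion: v1
kind: Service
metadata:
  name: ollama
spec:
  selector:
    app: ollama
  ports:
  - port: 11434
    targetPort: 11434
```

Applied with something like `kubectl apply -f ollama.yaml`, this gives the cluster-internal endpoint you'd then front with the XC load balancer and WAAP policy covered in the chapters above.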