iRule Developer Tools
Hi All, I've made a set of developer tools for Tcl, including iRules: https://github.com/bitwisecook/tcl-lsp

This includes:

- LSP server
- Editor integrations for VSCode, Sublime Text, Zed, JetBrains, Helix, Neovim, Emacs and more (though I've only really hammered on VSCode there)
- MCP server
- Claude skills
- CLI tool
- Semantic token highlighting
- Hover docs
- Format string interpreters
- AI tools for creating, explaining, validating, documenting, and diagramming iRules and Tcl
- Full optimising compiler chain with 26 optimiser passes
- 27 iRule-specific diagnostics and optimisations
- Security warnings through taint tracking (use of user input tracked through the code)
- Shimmer detection with inline type hints (know when a variable type is being reinterpreted)
- Code formatting
- Code minification
- Compiler explorer to look at how your code is interpreted
- A full iRule testing framework

and more. This is only based on publicly available information and my memory, though I have deployed enough iRules. This is the tool I always wanted. I could do with help expanding and improving the profile -> event / command maps, and the iRule event graph, and with generally finding bugs, so please, open issues. I will be away on holiday for a couple of weeks, so please bear in mind I may take a little time to get back to you.

cheers, Jim 🇬🇧🇦🇺

Level up your F5 Distributed Cloud WAAP Ops
Learn how to stream F5 Distributed Cloud WAAP logs to Splunk and unlock insights beyond the built-in F5 Distributed Cloud Console - from tenant-wide attack visibility to traffic source analysis and long-term threat pattern detection. Get started with ready-to-use Splunk queries that help you build dashboards tailored to your organization's security and operational needs.

BIG IP LTM BEST PRACTICES
I want to do an F5 deployment to balance traffic to multiple web servers for an application that will be accessed by 500k users, and I have several questions.

As an architecture, I have a VXLAN fabric (one site) where the F5 (HA active-passive) and the firewall (HA active-passive) are attached to the border/service leafs (eBGP peering for firewall-border leaf, static routing for F5-border). The interface to the ISP is connected to the firewall (I think it would have been recommended to attach it to the border leafs), where the first VIP is configured, translating the public IP to an IP in the first ARM VLAN (client-side transit to border), specifically where I created the VIP on the F5.

1) I want to know if the design up to this point is correct. I would also like to know whether the subnet where the VIPs reside on the F5 can be different from the subnet used for client-side transit, and whether it is recommended for it to be different.

2) I also want to know if it is recommended for the second ARM VLAN (server side) to be the same as the web server VLAN, or if it is better for the web server subnet (another VLAN) to be different, with routing between the two networks.

3) I would also like to know whether it is recommended for the source NAT pool to be the same as the second ARM VLAN (server side) or if it should be different. In any of the approaches, I would still need to perform source NAT. I also need to implement SSL offloading and WAF (Web Application Firewall).

I am very familiar with the routing aspects for any deployment model. What I would like to know is what the best architectural approach would be, or how you would design such a deployment. Thank you very much - any advice would be greatly appreciated.

Deploy Bot Defense on any Edge with F5 Distributed Cloud (SaaS Console, Automation)
XC Bot Defense Connector Strategy

F5 Distributed Cloud Bot Defense meets you where you're at when it comes to deployment flexibility. We make it ridiculously easy for you to deploy XC Bot Defense either in the cloud, on-prem, or as a hybrid configuration, with pre-built connectors in leading application platforms and CDNs to make deployment easy and fast.

Choose Your Path

Within each deployment scenario, you can choose your path with the following options to deploy the specified Bot Defense environment, using either the console deployment link or automation with Terraform.

- Module 1: Deploy Bot Defense on Regional Edges with F5 Distributed Cloud
- Module 2: Deploy F5 XC Bot Defense for AWS Cloudfront with F5 Distributed Cloud
- Module 3: Deploy Bot Defense in Azure with BIG-IP Connector for F5 Distributed Cloud
- Module 4: Deploy Bot Defense in GCP Using BIG-IP Connector for F5 Distributed Cloud

XC Bot Defense Scenarios

The modules below lay out a framework for connecting and managing distributed app services for this scenario, with a focus on the three core use cases.

MODULE 1: Deploy Bot Defense on Regional Edges with F5 Distributed Cloud

In this scenario, we will be deploying our fictitious airline application into a Regional Edge location of our choosing via the vK8s service in XC. We'll walk through all of the required steps, provide the vK8s manifest file, and front-end this application with an XC HTTP Load Balancer. In addition, the HTTP Load Balancer will be used to front-end our application and enable our XC Bot Defense service.

Choose your path:
- Console Steps for XC Bot Defense on Regional Edges
- Automated Deployment of XC Bot Defense on Regional Edge via Terraform

MODULE 2: Deploy F5 XC Bot Defense for AWS Cloudfront with F5 Distributed Cloud

In this scenario, we will be deploying our fictitious application in AWS with the XC Bot Defense Connector for AWS Cloudfront Distributions.
Choose your path:
- Console Steps to Deploy F5 XC Bot Defense for AWS Cloudfront
- Coming Soon*** Automated Deployment of XC Bot Defense for AWS Cloudfront

MODULE 3: Deploy Bot Defense in Azure with BIG-IP Connector for F5 Distributed Cloud

In this scenario, we will be deploying our fictitious application into Azure with the XC Bot Defense Connector for BIG-IP.

Choose your path:
- Console Steps to Deploy F5 XC Bot Defense in Azure with BIG-IP Connector
- Automated Deployment of XC Bot Defense in Azure with BIG-IP Connector

MODULE 4: Deploy Bot Defense in GCP Using BIG-IP Connector for F5 Distributed Cloud

In this scenario, we will be deploying our fictitious application into GCP with the XC Bot Defense Connector for BIG-IP.

Choose your path:
- Console Steps to Deploy F5 XC Bot Defense in GCP Using BIG-IP Connector
- Automated Deployment of XC Bot Defense in GCP with BIG-IP Connector

For additional information, refer to these resources:
- Deploy Bot Defense on any Edge with F5 Distributed Cloud (SaaS Console, Automation)
- GitHub repository with the walk-through of the deployment steps & demo
- YouTube video series discussing the different aspects of this configuration
- DevCentral Learning Series: Edge Compute
- Get Started with F5 Distributed Cloud Services

F5 partners with Chainguard to offer NGINX Plus in security-hardened containers
Cloud-native applications demand container images that are both efficient and secure. To help enterprises meet these expectations, F5 NGINX is partnering with Chainguard to deliver NGINX within their Commercial Builds ecosystem.

F5 has long been synonymous with scalable, reliable application delivery and security solutions. Partnering with Chainguard allows us to extend this trust into the world of secure container images. F5 NGINX Plus is available in Chainguard-built containers, enabling organizations to simplify security and compliance while focusing on what matters most: running their applications with confidence.

Delivering software in containers requires consistency across security, compliance, and operational reliability - areas where traditional methods, like distributing binaries, fall short by creating inefficiencies and manual maintenance burdens. Chainguard takes the complexity out of container management with secure, hardened images that minimize vulnerabilities and accelerate compliance processes. This collaboration empowers F5 NGINX Plus users to deploy production-ready images effortlessly, providing peace of mind and improved operational efficiency.

Why Chainguard Commercial Builds?

Chainguard Commercial Builds introduces a modern model for packaging commercial software. We work directly with Chainguard, who packages and maintains our commercial software in the Chainguard Factory - a secure, SLSA Level 3-compliant system designed to deliver minimal attack surface, zero CVEs, full provenance, SBOMs, and predictable vulnerability response. This partnership means we can deliver the security, compliance, and ease of use our customers demand while letting Chainguard handle the burden of securely building and maintaining container images with the latest dependencies - so you can have wall-to-wall coverage across your stack.

Why NGINX Plus?
NGINX Plus powers scalable application delivery through advanced proxying, load balancing, API gateway, and caching features. It offers dynamic configuration updates, robust observability, and integrated security tools, making it ideal for modern architectures. Now delivered with Chainguard images, NGINX Plus combines its core capabilities with enterprise-grade security and compliance features.

F5 NGINX in F5's Application Delivery & Security Platform

NGINX One is part of F5's Application Delivery & Security Platform. It helps organizations deliver, improve, and secure new applications and APIs. This platform is a unified solution designed to ensure reliable performance, robust security, and seamless scalability for applications deployed across cloud, hybrid, and edge architectures. NGINX Plus, a key component of NGINX One, adds features to open-source NGINX that are designed for enterprise-grade performance, scalability, and security.

Better Deployment, Reduced Overhead

NGINX Plus packaged in Chainguard images provides:
- Minimal attack surfaces
- Zero CVEs and complete provenance
- Built-in SBOMs for compliance
- FIPS readiness and fast vulnerability remediation

This partnership simplifies deployments, reduces operational work, and helps teams unlock NGINX Plus's full performance.

Get Started

NGINX Plus with Chainguard images is available now. Learn more here. NGINX Plus documentation can be found here.

CIS F5 Benchmark Reporter
Code is community submitted, community supported, and recognized as "Use At Your Own Risk".

The CIS_F5_Benchmark_Reporter.py is a Python script that can be run on an F5 BIG-IP. This script will check if the configuration of the F5 BIG-IP is compliant with the CIS Benchmark for F5. The script will generate a report that can be saved to a file, sent by e-mail, or sent to the screen. Just use the appropriate arguments when running the script.

```
[root@bigipa:Active:Standalone] # ./CIS_F5_Benchmark_Reporter.py
Usage: CIS_F5_Benchmark_Reporter.py [OPTION]...

Mandatory arguments to long options are mandatory for short options too.
  -f, --file=FILE    output report to file.
  -m, --mail         output report to mail.
  -s, --screen       output report to screen.

Report bugs to nvansluis@gmail.com
[root@bigipa:Active:Standalone] #
```

To receive a daily or weekly report from your F5 BIG-IP, you can create a cron job. Below is a screenshot that shows what the report will look like.

Settings

In the script, there is a section named 'User Options'. These options should be modified to reflect your setup.

```python
#-----------------------------------------------------------------------
# User Options - Configure as desired
#-----------------------------------------------------------------------
```

E-mail settings

Here the e-mail settings can be configured, so the script will be able to send a report by e-mail.

```python
# e-mail settings
port = 587
smtp_server = "smtp.example.com"
sender_email = "johndoe@example.com"
receiver_email = "johndoe@example.com"
login = "johndoe"
password = "mySecret"
```

SNMP settings

Here you can add additional SNMP clients. These are necessary to be compliant with control 6.1.

```python
# list containing trusted IP addresses and networks that have access to SNMP (control 6.1)
snmp_client_allow_list = [
    "127.0.0.0/8",
]
```

Exceptions

Sometimes there are valid circumstances why a specific requirement of a security control can't be met. In this case you can add an exception. See the example below.
```python
# set exceptions (add your own exceptions)
exceptions = {
    '2.1' : "Exception in place, because TACACS is used instead of RADIUS.",
    '2.2' : "Exception in place, because TACACS is used and there are two TACACS-servers present."
}
```

Recommendations

Store the script somewhere in the /shared partition. The data stored on this partition will still be available after an upgrade.

Feedback

This script has been tested on F5 BIG-IP version 17.x. If you have any questions, remarks or feedback, just let me know.

Download

The script can be downloaded from github.com: https://github.com/nvansluis/CIS_F5_Benchmark_Reporter

Deploying the F5 AI Security Certified OpenShift Operator: A Validated Playbook
Introduction

As enterprises race to deploy Large Language Models (LLMs) in production, securing AI workloads has become as critical as securing traditional applications. The F5 AI Security Operator installs two products on your cluster - F5 AI Guardrails and F5 AI Red Team - both powered by CalypsoAI. Together they provide inline prompt/response scanning, policy enforcement, and adversarial red-team testing, all running natively on your own OpenShift cluster.

This article is a validated deployment runbook for F5 AI Security on OpenShift (version 4.20.14) with NVIDIA GPU nodes. It is based on the official Red Hat Operator installation baseline, in a real lab deployment on a 3×A40 GPU cluster. If you follow these steps in order, you will end up with a fully functional AI Security stack, avoiding the most common pitfalls along the way.

What Gets Deployed

F5 AI Security consists of four main components, each running in its own OpenShift namespace:

| Component | Namespace | Role |
| --- | --- | --- |
| Moderator + PostgreSQL | cai-moderator | Web UI, API gateway, policy management, and backing database |
| Prefect Server + Worker | prefect | Workflow orchestration for scans and red-team runs |
| AI Guardrails Scanner | cai-scanner | Inline scanning against your OpenAI-compatible LLM endpoint |
| AI Red Team Worker | cai-redteam | GPU-backed adversarial testing; reports results to Moderator via Prefect |

The Moderator is CPU-only. The Scanner and Red Team Worker can leverage GPUs depending on the policies and models you configure.
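If you want a quick scripted spot-check of those four components later on, the mapping above is enough to generate the per-namespace pod checks used throughout this runbook. A purely illustrative Python sketch - the helper name and the idea of generating `oc` commands are mine, not part of the operator:

```python
# Component-to-namespace mapping, taken from the table above.
COMPONENTS = {
    "Moderator + PostgreSQL": "cai-moderator",
    "Prefect Server + Worker": "prefect",
    "AI Guardrails Scanner": "cai-scanner",
    "AI Red Team Worker": "cai-redteam",
}


def verification_commands(components):
    """Emit one 'oc get pods -n <namespace>' check per component namespace."""
    return ["oc get pods -n " + ns for ns in components.values()]


if __name__ == "__main__":
    # Print the checks; pipe them into a shell to run them against the cluster.
    print("\n".join(verification_commands(COMPONENTS)))
```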
Infrastructure Requirements

Before you begin, verify your cluster meets these minimums:

- CPU / Control Node: 16 vCPUs, 32 GiB RAM, x86_64, 100 GiB persistent storage
- Worker Nodes (per GPU-enabled component): 4 vCPUs, 16 GiB RAM (32 GiB recommended for Red Team), 100 GiB storage
- GPU Nodes:
  - AI Guardrails: CUDA-compatible GPU, minimum 24 GB VRAM, 100 GiB storage
  - AI Red Team: CUDA-compatible GPU, minimum 48 GB VRAM, 200 GiB storage
  - GPU must NOT be shared with other workloads

Verify your cluster:

```shell
# Check nodes
oc get nodes -o wide

# Check GPU allocatable resources
oc get node -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable.nvidia\.com/gpu}{"\n"}{end}'

# Check available storage classes
oc get storageclass
NAME                 PROVISIONER   RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
lvms-vg1 (default)   topolvm.io    Delete          WaitForFirstConsumer   true                   15d
```

Step 1 - Install Prerequisites

1.1 Node Feature Discovery (NFD) Operator

NFD labels your nodes with hardware capabilities, which the NVIDIA GPU Operator relies on to target the right nodes.

OpenShift Console → Ecosystem → Software Catalog → Search "Node Feature Discovery Operator" → Install

After installation: Installed Operators → Node Feature Discovery → Create NodeFeatureDiscovery → Accept defaults

Verify:

```shell
oc get pods -n openshift-nfd
oc get node --show-labels | grep feature.node.kubernetes.io || true
```

1.2 NVIDIA GPU Operator

OpenShift Console → Ecosystem → Software Catalog → Search "GPU Operator" → Install

After installation: Installed Operators → NVIDIA GPU Operator → Create ClusterPolicy → Accept defaults

Verify:

```shell
oc get pods -n nvidia-gpu-operator
oc describe node <gpu-node> | grep -i nvidia
nvidia-smi
```

Step 2 - Install F5 AI Security Operator

Prerequisites: You will need registry credentials and a valid license from the F5 AI Security team before proceeding.
Contact F5 Sales: https://www.f5.com/products/get-f5

2.1 Create the Namespace and Pull Secret

```shell
export DOCKER_USERNAME='<registry-username>'
export DOCKER_PASSWORD='<registry-password>'
export DOCKER_EMAIL='<your-email>'

oc new-project f5-ai-sec

oc create secret docker-registry regcred \
  -n f5-ai-sec \
  --docker-username=$DOCKER_USERNAME \
  --docker-password=$DOCKER_PASSWORD \
  --docker-email=$DOCKER_EMAIL
```

2.2 Install from OperatorHub

OpenShift Console → Ecosystem → Software Catalog → Search "F5 AI Security Operator" → Install into namespace f5-ai-sec

Verify your F5 AI Security Operator:

```shell
# Verify the controller-manager pod is Running
oc -n f5-ai-sec get pods
# NAME                                  READY   STATUS    RESTARTS   AGE
# controller-manager-6f784bd96d-z6sbh   1/1     Running   1          43s

# Verify the CSV reached Succeeded phase
oc -n f5-ai-sec get csv
# NAME                             DISPLAY                   VERSION   PHASE
# f5-ai-security-operator.v0.4.3   F5 Ai Security Operator   0.4.3     Succeeded

# Verify the CRD is registered
oc -n f5-ai-sec get crd | grep ai.security.f5.com
# securityoperators.ai.security.f5.com
```

2.3 Deploy the SecurityOperator Custom Resource

After installation: Installed Operators → F5 AI Security Operator → Create SecurityOperator

Choose YAML and copy the Custom Resource template below in there, changing select values to match your installation.
```yaml
apiVersion: ai.security.f5.com/v1alpha1
kind: SecurityOperator
metadata:
  name: security-operator-demo
  namespace: f5-ai-sec
spec:
  registryAuth:
    existingSecret: "regcred"
  # Internal PostgreSQL - convenient for labs, not recommended for production
  postgresql:
    enabled: true
    values:
      postgresql:
        auth:
          password: "pass"
  jobManager:
    enabled: true
  moderator:
    enabled: true
    values:
      env:
        CAI_MODERATOR_BASE_URL: https://<your-hostname>
      secrets:
        CAI_MODERATOR_DB_ADMIN_PASSWORD: "pass"
        CAI_MODERATOR_DEFAULT_LICENSE: "<valid_license_from_f5>"
  scanner:
    enabled: true
  redTeam:
    enabled: true
```

Key values to customize:

| Field | What to set |
| --- | --- |
| CAI_MODERATOR_BASE_URL | Your cluster's public hostname for the UI (e.g., https://aisec.apps.mycluster.example.com) |
| CAI_MODERATOR_DEFAULT_LICENSE | License string provided by F5 |
| CAI_MODERATOR_DB_ADMIN_PASSWORD | DB password - must match the value set in the PostgreSQL block |

For external PostgreSQL (recommended for production), replace the postgresql block with:

```yaml
  moderator:
    values:
      env:
        CAI_MODERATOR_DB_HOST: <my-external-db-hostname>
      secrets:
        CAI_MODERATOR_DB_ADMIN_PASSWORD: <my-external-db-password>
```

Verify your SecurityOperator:

```shell
oc -n f5-ai-sec get securityoperator
oc -n f5-ai-sec get securityoperator security-operator-demo -o yaml | sed -n '/status:/,$p'
```

Step 3 - Required OpenShift Configuration

This is where most deployments hit problems. OpenShift's default restricted Security Context Constraint (SCC) blocks these containers from running. You must explicitly grant anyuid to each service account.
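The grants follow a single pattern: service account, namespace, anyuid. If you would rather generate them than type them, here is a purely illustrative Python sketch - the pair list mirrors the commands in 3.1 below, and the helper name is mine:

```python
# ServiceAccount -> namespace pairs that need the anyuid SCC (see 3.1 below).
SCC_GRANTS = [
    ("cai-moderator-sa", "cai-moderator"),
    ("default", "cai-moderator"),
    ("default", "prefect"),
    ("prefect-server", "prefect"),
    ("prefect-worker", "prefect"),
    ("cai-scanner", "cai-scanner"),
    ("cai-redteam-worker", "cai-redteam"),
]


def scc_commands(grants):
    """Render one 'oc adm policy add-scc-to-user anyuid' command per pair."""
    return [
        f"oc adm policy add-scc-to-user anyuid -z {sa} -n {ns}"
        for sa, ns in grants
    ]


if __name__ == "__main__":
    # Print the commands; pipe into a shell with cluster-admin to apply them.
    print("\n".join(scc_commands(SCC_GRANTS)))
```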
3.1 Apply SCC Policies

```shell
oc adm policy add-scc-to-user anyuid -z cai-moderator-sa -n cai-moderator
oc adm policy add-scc-to-user anyuid -z default -n cai-moderator
oc adm policy add-scc-to-user anyuid -z default -n prefect
oc adm policy add-scc-to-user anyuid -z prefect-server -n prefect
oc adm policy add-scc-to-user anyuid -z prefect-worker -n prefect
oc adm policy add-scc-to-user anyuid -z cai-scanner -n cai-scanner
oc adm policy add-scc-to-user anyuid -z cai-redteam-worker -n cai-redteam
```

3.2 Force PostgreSQL to Restart (if Stuck at 0/1)

If PostgreSQL was stuck before the SCC was applied, bounce it manually:

```shell
oc -n cai-moderator scale sts/cai-moderator-postgres-cai-postgresql --replicas=0
oc -n cai-moderator scale sts/cai-moderator-postgres-cai-postgresql --replicas=1
```

3.3 Restart All Components

```shell
oc -n cai-moderator rollout restart deploy
oc -n prefect rollout restart deploy
oc -n cai-scanner rollout restart deploy
oc -n cai-redteam rollout restart deploy
```

3.4 Verify

```shell
❯ oc -n cai-moderator get statefulset
NAME                                    READY   AGE
cai-moderator-postgres-cai-postgresql   1/1     3d4h

❯ oc -n cai-moderator get pods | grep postgres
cai-moderator-postgres-cai-postgresql-0   1/1   Running   0   3d4h

❯ oc -n cai-moderator get pods | grep cai-moderator
cai-moderator-75c47fc9db-sl8t2            1/1   Running   0   3d4h
cai-moderator-postgres-cai-postgresql-0   1/1   Running   0   3d4h

❯ oc -n cai-moderator get svc
NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
cai-moderator                       ClusterIP   172.30.123.197   <none>        5500/TCP,8080/TCP   3d4h
cai-moderator-headless              ClusterIP   None             <none>        8080/TCP            3d4h
cai-moderator-postgres-postgresql   ClusterIP   None             <none>        5432/TCP            3d4h

❯ oc -n cai-moderator get endpoints
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                                ENDPOINTS                             AGE
cai-moderator                       10.130.0.139:8080,10.130.0.139:5500   3d4h
cai-moderator-headless              10.130.0.139:8080                     3d4h
cai-moderator-postgres-postgresql   10.128.0.177:5432                     3d4h
```

Step 4 - Create OpenShift Routes (Required for UI Access)

The Moderator exposes two ports that must be routed separately: port 5500 for the UI and port 8080 for the /auth path. Skipping the auth route is the most common cause of the blank/black page issue.

```shell
# UI route
oc -n cai-moderator create route edge cai-moderator-ui \
  --service=cai-moderator \
  --port=5500 \
  --hostname=<your-hostname> \
  --path=/

# Auth route - required, or the UI will render blank
oc -n cai-moderator create route edge cai-moderator-auth \
  --service=cai-moderator \
  --port=8080 \
  --hostname=<your-hostname> \
  --path=/auth
```

Verify all pods are running:

```shell
oc get pods -n cai-moderator
oc get pods -n cai-scanner
oc get pods -n cai-redteam
oc get pods -n prefect
```

Access the UI

Open https://<your-hostname> in a browser. Log in with the default credentials: admin / pass

Log in and update the admin email address immediately. You should be able to log in successfully and see the Guardrails dashboard.

Step 5 - Grant Prefect Worker Cluster-scope RBAC

The Prefect worker watches Kubernetes Pods and Jobs at cluster scope to monitor scan and red-team workflow execution. Without this RBAC, prefect-worker fills its logs with 403 Forbidden errors. The Guardrails UI still loads, but scheduled workflows and Red Team runs will fail silently.
```shell
# ClusterRole: allow prefect-worker to list/watch pods, jobs, and events cluster-wide
oc apply -f - <<'YAML'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prefect-worker-watch-cluster
rules:
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["pods","pods/log","events"]
  verbs: ["get","list","watch"]
YAML

# ClusterRoleBinding: bind to the prefect-worker ServiceAccount
oc apply -f - <<'YAML'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prefect-worker-watch-cluster
subjects:
- kind: ServiceAccount
  name: prefect-worker
  namespace: prefect
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prefect-worker-watch-cluster
YAML

# Restart to pick up the new permissions
oc -n prefect rollout restart deploy/prefect-worker
```

Verify RBAC errors are gone:

```shell
oc -n prefect logs deploy/prefect-worker --tail=200 \
  | egrep -i 'forbidden|rbac|permission|denied' \
  || echo "OK: no RBAC errors detected"

oc get clusterrolebinding prefect-worker-watch-cluster
```

LlamaStack Integration

F5 AI Security works alongside any OpenAI-compatible LLM inference endpoint. In our lab we pair it with LlamaStack running a quantized Llama 3.2 model on the same OpenShift cluster - F5 AI Guardrails then scans every prompt and response inline before it reaches your application. A dedicated follow-up post will walk through the full LlamaStack deployment and end-to-end integration in detail. Stay tuned.

Summary

Deploying F5 AI Security on OpenShift is straightforward once you know the OpenShift-specific friction points: SCC policies, the dual-route requirement, and the Prefect cluster-scope RBAC. Following this runbook in sequence - prerequisites, operator install, SCC grants, routes, Prefect RBAC - gets you to a fully operational AI guardrailing stack in a single pass. If you run into anything not covered here, drop a comment below.
Tested on: OpenShift 4.20.14 · F5 AI Security Operator v0.4.3 · NVIDIA A40 GPUs · LlamaStack with Llama-3.2-1B-Instruct-quantized.w8a8

Additional Resources

- F5 AI Security Operator - Red Hat Catalog

Unable to Forward APM and AFM Logs to AWS CloudWatch Using Telemetry Streaming
Hello Team,

I am trying to forward AFM (Network Firewall) logs and APM logs from F5 BIG-IP to Amazon CloudWatch using F5 Telemetry Streaming.

F5 BIG-IP version: BIG-IP 17.1.0.1 Build 0.0.4 Point Release 1

Current Behavior

When I configure the security logging profile with local-db-publisher, I am able to see logs on the BIG-IP dashboard:

- Security → Event Logs → Network Firewall
- Security → Event Logs → Access

However, when I change the logging profile to use a remote log publisher, I am unable to receive the logs in CloudWatch.

My Declaration

```json
{
    "class": "Telemetry",
    "My_Listener": {
        "class": "Telemetry_Listener",
        "port": 6514
    },
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "AWS_CloudWatch",
        "region": "us-east-1",
        "logGroup": "loggrpname",
        "logStream": "logstreamname",
        "username": "Access Key",
        "passphrase": {
            "cipherText": "Secret Key"
        }
    }
}
```

Telemetry Architecture for AFM

Security Log Profile → Log Publisher → Remote High Speed Log → telemetry_pool → 127.0.0.1:6514 → Telemetry Listener → Telemetry Consumer → CloudWatch

Configuration Summary

- AFM policy and APM access policy attached to the virtual server
- Security logging profile attached to the virtual server
- Log Publisher configured
- Remote High-Speed Log destination configured
- Pool member configured as 127.0.0.1:6514
- Telemetry Streaming declaration deployed
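One way I have been trying to narrow this down is to isolate the listener-to-CloudWatch leg from the BIG-IP logging leg by pushing a synthetic event straight at the Telemetry Streaming listener. A rough Python sketch, run from the BIG-IP shell itself - the function names and the generic syslog-style payload are mine, not an exact AFM/APM record:

```python
import socket


def build_test_event(hostname="bigip1", app="telemetry-test"):
    """Build a minimal syslog-style line for a smoke test of the TS listener."""
    return (f"<134>1 2024-01-01T00:00:00Z {hostname} {app} - - - "
            "test event from TS debug")


def send_test_event(host="127.0.0.1", port=6514, timeout=5):
    """Open a TCP connection to the Telemetry Streaming listener and push one event."""
    event = build_test_event()
    with socket.create_connection((host, port), timeout=timeout) as conn:
        conn.sendall(event.encode() + b"\n")
    return event
```

If `send_test_event()` succeeds and the synthetic line shows up in the CloudWatch log stream, the listener-to-consumer leg works and the problem is on the log publisher / HSL side; if it never arrives, the issue is in the consumer configuration or the AWS credentials.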