error code 503 redirect irule
Hello, I want to create a logical path in F5 where, if one server pool is down and we get a 503 error code, a redirect happens to a second pool. This is what I have written, but it does not seem to redirect when the second pool is offline. Is the iRule OK but I need to set priority activation on the pools, or is there something flawed with the iRule? Here it is below:

when HTTP_RESPONSE {
    # Check if the response status code from the server is 503
    if {[HTTP::status] == 503} {
        # Log the action (optional, for troubleshooting)
        log local0. "Received 503 from backend. Reselecting to fallback_pool."
        # Attempt to select an alternate pool
        pool ta55-web-lb-dev-f5-ssl-pool2
    } else {
        pool ta55-web-lb-dev-f5-ssl-pool
    }
}
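One detail worth flagging about the iRule above, plus a hedged sketch of an alternative (not a confirmed fix for this setup): a pool command inside HTTP_RESPONSE runs after the 503 has already come back, so at best it changes where the next request on that connection goes. A common pattern instead is to choose the pool per request in HTTP_REQUEST based on member availability; the pool names below are the ones from the post, and the "zero active members" threshold is an assumption.

when HTTP_REQUEST {
    # use the primary pool only while it still has available members
    if { [active_members ta55-web-lb-dev-f5-ssl-pool] > 0 } {
        pool ta55-web-lb-dev-f5-ssl-pool
    } else {
        # primary pool has no available members, fall back to the second pool
        log local0. "Primary pool has no active members. Using fallback pool."
        pool ta55-web-lb-dev-f5-ssl-pool2
    }
}

Priority group activation on a single combined pool would be another way to get similar behavior without an iRule at all.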
Key Steps to Securely Scale and Optimize Production-Ready AI for Banking and Financial Services

This article outlines three key actions banks and financial firms can take to more securely scale, connect, and optimize their AI workflows, demonstrated through a scenario of a bank taking a new AI application to production.
Get Started with BIG-IP and BIG-IQ Virtual Edition (VE) Trial

Welcome to the BIG-IP and BIG-IQ trials page! This will be your jumping-off point for setting up a trial version of BIG-IP VE or BIG-IQ VE in your environment. As you can see below, everything you’ll need is included and organized by operating environment — namely by public/private cloud or virtualization platform. To get started with your trial, use the following software and documentation, which can be found in the links below. Upon requesting a trial, you should have received an email containing your license keys. Please bear in mind that it can take up to 30 minutes to receive your licenses. Don't have a trial license? Get one here. Or if you're ready to buy, contact us. Looking for other resources like tools or the compatibility matrix? See Other Resources below.

BIG-IP VE and BIG-IQ VE
When you sign up for the BIG-IP and BIG-IQ VE trial, you receive a set of license keys. Each key will correspond to a component listed below:
- BIG-IQ Centralized Management (CM) — Manages the lifecycle of BIG-IP instances including analytics, licenses, configurations, and auto-scaling policies
- BIG-IQ Data Collection Device (DCD) — Aggregates logs and analytics of traffic and BIG-IP instances to be used by BIG-IQ
- BIG-IP Local Traffic Manager (LTM), Access (APM), Advanced WAF (ASM), Network Firewall (AFM), DNS — Keep your apps up and running with BIG-IP application delivery controllers. BIG-IP Local Traffic Manager (LTM) and BIG-IP DNS handle your application traffic and secure your infrastructure. You’ll get built-in security, traffic management, and performance application services, whether your applications live in a private data center or in the cloud.

Select the hypervisor or environment where you want to run VE:

AWS
- CFT for single NIC deployment
- CFT for three NIC deployment
- BIG-IP VE images in the AWS Marketplace
- BIG-IQ VE images in the AWS Marketplace
- BIG-IP AWS documentation
- BIG-IP video: Single NIC deploy in AWS
- BIG-IQ AWS documentation
- Setting up and Configuring a BIG-IQ Centralized Management Solution
- BIG-IQ Centralized Management Trial Quick Start

Azure
- Azure Resource Manager (ARM) template for single NIC deployment
- Azure ARM template for three NIC deployment
- BIG-IP VE images in the Azure Marketplace
- BIG-IQ VE images in the Azure Marketplace
- BIG-IQ Centralized Management Trial Quick Start
- BIG-IP VE Azure documentation
- Video: BIG-IP VE Single NIC deploy in Azure
- BIG-IQ VE Azure documentation
- Setting up and Configuring a BIG-IQ Centralized Management Solution

VMware/KVM/OpenStack
- Download BIG-IP VE image
- Download BIG-IQ VE image
- BIG-IP VE Setup
- BIG-IQ VE Setup
- Setting up and Configuring a BIG-IQ Centralized Management Solution

Google Cloud
- Google Deployment Manager template for single NIC deployment
- Google Deployment Manager template for three NIC deployment
- BIG-IP VE images in Google Cloud
- Google Cloud Platform documentation
- Video: Single NIC deploy in Google

Other Resources
- AskF5
- GitHub community (f5devcentral, f5networks)

Tools to automate your deployment:
- BIG-IQ Onboarding Tool
- F5 Declarative Onboarding
- F5 Application Services 3 Extension

Other tools:
- F5 SDK (Python)
- F5 Application Services Templates (FAST)
- F5 Cloud Failover
- F5 Telemetry Streaming

Find out which hypervisor versions are supported with each release of VE:
- BIG-IP Compatibility Matrix
- BIG-IQ Compatibility Matrix

Do you have any comments or questions? Ask here.
I Tried to Beat OpenAI with Ollama in n8n—Here’s Why It Failed (and the Bug I’m Filing)

Hey, community. I wanted to share a story about how I built the n8n Labs workflow. It watches a YouTube channel, summarizes the latest videos with AI agents, and sends a clean HTML newsletter via Gmail. In the video, I show it working flawlessly with OpenAI. But before I got there, I spent a lot of time trying to copy the same flow using open source models through Ollama with the n8n Ollama node. My results were all over the map.

I really wanted this to be a great “open source first” build. I tried many local models via Ollama, tuned prompts, adjusted parameters, and re‑ran tests. The outputs were always unpredictable: sometimes I’d get partial JSON, sometimes extra text around the JSON. Sometimes fields would be missing. Sometimes it would just refuse to stick to the structure I asked for. After enough iterations, I started to doubt whether my understanding of the agent setup was off.

So, I built a quick proof inside the n8n Code node. If the AI Agent step is supposed to take the XML→JSON feed and reshape it into a structured list—title, description, content URL, thumbnail URL—then I should be able to do that deterministically in JavaScript and compare. I wrote a tiny snippet that reads the entries array, grabs the media fields, and formats a minimal output. And guess what? Voila. It worked on the first try and my HTML generator lit up exactly the way I wanted. That told me two things: one, my upstream data (HTTP Request + XML→JSON) was solid; and two, my desired output structure was clear and achievable without any trickery.

With that proof in hand, I turned to OpenAI. I wired the same agent prompt, the same structured output parser, and the same workflow wiring—but swapped the Ollama node for an OpenAI chat model. It worked immediately. Fast, cheap, predictable. The agent returned perfectly clean JSON with the fields I requested. My code node transformed it into HTML. The preview looked right, and Gmail sent the newsletter just like in the demo. So at that point, I felt confident the approach was sound and the transcript you saw in the video was repeatable—at least with OpenAI in the loop.

Where does that leave Ollama and open source models? I’m not throwing shade—I love open source, and I want this path to be great. My current belief is the failure is somewhere inside the n8n Ollama node code path. I don’t think it’s the models themselves in isolation; I think the node may be mishandling one or more of these details: how messages are composed (system vs user); whether “JSON mode” or a grammar/format hint is being passed; token/length defaults that cause truncation; stop settings that let extra text leak into the output; or the way the structured output parser constraints are communicated. If you’ve worked with local models, you know they can follow structure very well when you give them a strict format or grammar. If the node isn’t exposing that (or is dropping it on the floor), you get variability.

To make sure this gets eyes from the right folks, my intent is to file a bug with n8n for the Ollama node. I’ll include a minimal, reproducible workflow: the same RSS fetch, the same XML→JSON conversion, the same agent prompt and required output shape, and a comparison run where OpenAI succeeds and Ollama does not. I’ll share versions, logs, model names, and settings so the team can trace exactly where the behavior diverges. If there’s a missing parameter (like format: json) or a message-role mix‑up, great—let’s fix it.
If it needs a small enhancement to pass a grammar or schema to the model, even better. The net‑net is simple: for AI agents inside n8n to feel predictable with Ollama, we need the node to reliably enforce structured outputs the same way the OpenAI path does. That unlocks a ton of practical automation for folks who prefer local models.

In the meantime, if you’re following the lab and want a rock‑solid fallback, you can use the Code node to do the exact transformation the agent would do. Here’s the JavaScript I wrote and tested in the workflow:

const entries = $input.first().json.feed?.entry ?? [];

function truncate(str, max) {
  if (!str) return '';
  const s = String(str).trim();
  return s.length > max ? s.slice(0, max) + '…' : s;
  // If you want total length (including …) to be max, use:
  // return s.length > max ? s.slice(0, Math.max(0, max - 1)) + '…' : s;
}

const output = entries.map(entry => {
  const g = entry['media:group'] ?? {};
  return {
    title: g['media:title'] ?? '',
    description: truncate(g['media:description'], 60),
    contentUrl: g['media:content']?.url ?? '',
    thumbnailUrl: g['media:thumbnail']?.url ?? ''
  };
});

return [{ json: { output } }];

That snippet proves the data is there and your HTML builder is fine. If OpenAI reproduces the same structured JSON as the code, and Ollama doesn’t, the issue is likely in the node’s request/response handling rather than your workflow logic. I’ll keep pushing on the bug report so we can make agents with Ollama as predictable as they need to be. Until then, if you want speed and consistency to get the job done, OpenAI works great. If you’re experimenting with open source, try enforcing stricter formats and shorter outputs—and keep an eye on what the node actually sends to the model. As always, I’ll share updates, because I love sharing knowledge—and I want the open-source path to shine right alongside the rest of our AI, agents, n8n, Gmail, and OpenAI workflows. And community, if you have a resolution and can pull it off, please share!
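For anyone who wants to test whether a local model can hold the structure before blaming the node, one option is to call Ollama's HTTP API directly with its JSON mode. This is a hedged sketch and not part of the lab workflow: the model name and the schema wording are placeholders, and it assumes a default local Ollama install listening on port 11434.

# Ask Ollama for strict JSON output, bypassing n8n entirely (model name is a placeholder)
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "stream": false,
  "format": "json",
  "options": { "temperature": 0 },
  "messages": [
    { "role": "system", "content": "Reply with JSON only, using the keys title, description, contentUrl, thumbnailUrl." },
    { "role": "user", "content": "Title: Demo video. Description: a short test clip. URL: https://example.com/v/1. Thumbnail: https://example.com/t/1.jpg" }
  ]
}'

If this returns clean JSON consistently while the n8n Ollama node does not, that is a useful data point for the bug report, because it suggests the format hint is being lost between the node and the model.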
VIP is not responding on SYN after enabling other modules like ASM, APM and AFM.
Hi all,

I have an F5 VE running 17.5.1.3 in my lab environment for learning purposes. As back-end I installed the phpauction web page, and all configuration works flawlessly if only the LTM module is enabled. This is in the most simple form:

- Virtual server on port 80
- TCP profile
- HTTP profile
- Pool
- Automap

When I add another module, for example ASM, the VIP stops working although it's still green/up and not even a security policy has been attached to the VIP. Captures show that the SYN is reaching the F5 but I do not get a response from it:

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type EN10MB (Ethernet), capture size 65535 bytes
16:24:51.691462 IP 192.168.1.100.64282 > 192.168.2.10.80: Flags [S], seq 5173934, win 65535, options [mss 1260,nop,wscale 8,nop,nop,sackOK], length 0 in slot1/tmm1 lis= port=1.1 trunk=
16:24:51.942738 IP 192.168.1.100.64625 > 192.168.2.10.80: Flags [S], seq 1642892817, win 65535, options [mss 1260,nop,wscale 8,nop,nop,sackOK], length 0 in slot1/tmm0 lis= port=1.1 trunk=

I checked the back-end connection as well, but the F5 is not sending out the SYN to the web server. So it looks like it's blackholing my traffic. When I disable ASM and use only LTM, everything starts to work again. Even when trying different modules like APM, the same issue happens: the VIP is not responding after enabling only APM or AFM.

I tried the following:
- Factory reset the machine.
- Upgrade to 17.5.1.3.
- Enable RST CAUSE. (But there isn't any, because there is no response to the SYN in the first place.)
- Force reload config on the mcpd process.
- Enabled LTM debugging without receiving any logs about the connection.
- Looked into the DoS and bot defense logs to see if traffic is dropped at an earlier point in the chain.
- Enabled TMM debug without getting any relevant logs.
- Changed the VIP from standard to FastL4.
- Removed the HTTP profile.

I did play a lot with other modules as well, like ASM, APM, AFM, SSLO, and DNS, so that's why I thought it was a configuration issue at first. But resetting the machine to factory defaults did not solve it. Is it possible there are some leftovers from my learning path on this machine? Do you know what additional steps I can take to solve this issue?

Thanks.
Best regards,
Mitchel
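One low-risk check that could be added to that list (a suggestion based on how VE provisioning works in general, not a confirmed cause for this setup): on a small VE, enabling an extra module changes resource provisioning, and an under-provisioned or restarting TMM can silently drop traffic even while the virtual server still shows green. These tmsh commands only read state; the virtual server name is a placeholder.

# confirm each enabled module is licensed/provisioned and the VE has memory headroom
tmsh show sys provision
tmsh show sys memory
# per-TMM state, to see whether TMM restarted or is starved after the module was enabled
tmsh show sys tmm-info
# virtual server status as seen by TMM (replace with your VS name)
tmsh show ltm virtual vs_phpauction_80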
Any way to cache HEAD request

Hi, I was wondering if anybody had any ideas on how to cache a HEAD request on an F5? Running multiple web sites, HEAD method requests are the worst, as the F5 refuses to cache them and the back-end servers have to continually process automated bot HEAD requests for pages that change infrequently. Using the web acceleration profile would be great if it handled HEAD requests. I have tried it in an iRule, but the F5 web server will always override the content length with zero. Thanks for any thoughts.
About f5 access scheme additional functionality

Regarding the F5 Access scheme for F5 APM, the feature is currently being implemented.
https://techdocs.f5.com/en-us/apm-f5-access/apm-f5-access-ios-3-0-1/c_edge_client_chapter_title_addl_config_info/c_f5_access_starting_from_url.html
I have referred to the link above.

"password
A parameter used to specify the password with which to start the connection. When the password parameter is specified, it is used as a one-time password and not saved in the configuration."

I have reviewed the above content, but I am looking into whether there is a way to save the password when clicking on the scheme in the F5 Access app. I would appreciate any help from those with relevant experience. Thank you.
F5 Container Ingress Services (CIS) and using k8s traffic policies to send traffic directly to pods

This article takes a look at how you can use health monitors on the BIG-IP to solve the issue with constant AS3 REST-API pool member changes, or when there is a sidecar service mesh like Istio (F5 has a version of the Istio mesh called Aspen Mesh) or Linkerd. I have also described some possible enhancements for CIS/AS3, NGINX Ingress Controller, and Gateway Fabric that would be nice to have in the future.

1. Intro
2. Install Nginx Ingress Open source and CIS
3. F5 CIS without Ingress/Gateway
4. F5 CIS with Ingress
5. F5 CIS with Gateway fabric
6. Summary

1. Intro

F5 CIS allows integration between F5 and Kubernetes (k8s) or OpenShift clusters. F5 CIS has two modes, NodePort and ClusterIP, and this is well documented at https://clouddocs.f5.com/containers/latest/userguide/config-options.html. There is also a mode called auto that I prefer, as based on the k8s service type (NodePort or ClusterIP) it knows how to configure the pool members. CIS in ClusterIP mode is generally much better, as you bypass the kube-proxy and send traffic directly to pods, but there can be issues if k8s pods are constantly being scaled up or down, because CIS uses the AS3 REST-API to talk to and configure the F5 BIG-IP. I have also seen issues where a bug or a config error that is not well validated can bring the entire CIS-to-BIG-IP control channel down, and you then see 422 errors in the F5 logs and in the CIS logs. By using NodePort with "externalTrafficPolicy: Local" (and, if there is an ingress, also "internalTrafficPolicy: Local") you can also bypass the kube-proxy and send traffic directly to the pods, and BIG-IP health monitoring will mark the nodes that don't have pods as down, because the traffic policies prevent nodes that do not have the web application pods from sending the traffic to other nodes.

2. Install Nginx Ingress Open source and CIS

As I already have the k8s version of nginx and F5 CIS, I need 3 different ingress classes. The k8s nginx is end of life (https://kubernetes.io/blog/2025/11/11/ingress-nginx-retirement/), so my example also shows how you can run the two nginx versions, the k8s nginx and the F5 nginx, in parallel. There is a new option to use the Operator Lifecycle Manager (OLM) that, when installed, will install the components, and this is an even better way than helm (you can install OLM with helm, and this is an even newer way to manage nginx ingress!), but I found it still in an early stage for k8s, while for OpenShift it is much more advanced. I have installed Nginx as a daemonset, not a deployment, and I will mention why later on, and I have added a listener config for the F5 TransportServer even though it will become clear later why it is not usable at the moment.
helm install -f values.yaml ginx-ingress oci://ghcr.io/nginx/charts/nginx-ingress \
  --version 2.4.1 \
  --namespace f5-nginx \
  --set controller.kind=daemonset \
  --set controller.image.tag=5.3.1 \
  --set controller.ingressClass.name=nginx-nginxinc \
  --set controller.ingressClass.create=true \
  --set controller.ingressClass.setAsDefaultIngress=false

cat values.yaml
controller:
  enableCustomResources: true
  globalConfiguration:
    create: true
    spec:
      listeners:
      - name: nginx-tcp
        port: 88
        protocol: TCP

kubectl get ingressclasses
NAME             CONTROLLER                     PARAMETERS   AGE
f5               f5.com/cntr-ingress-svcs       <none>       8d
nginx            k8s.io/ingress-nginx           <none>       40d
nginx-nginxinc   nginx.org/ingress-controller   <none>       32s

niki@master-1:~$ kubectl get pods -o wide -n f5-nginx
NAME                             READY   STATUS    RESTARTS   AGE   IP              NODE       NOMINATED NODE   READINESS GATES
nginx-ingress-controller-2zbdr   1/1     Running   0          62s   10.10.133.234   worker-2   <none>           <none>
nginx-ingress-controller-rrrc9   1/1     Running   0          62s   10.10.226.87    worker-1   <none>           <none>
niki@master-1:~$

The CIS config is shown below. I have used "pool_member_type" auto, as this allows ClusterIP or NodePort services to be used at the same time.

helm install -f values.yaml f5-cis f5-stable/f5-bigip-ctlr

cat values.yaml
bigip_login_secret: f5-bigip-ctlr-login
rbac:
  create: true
serviceAccount:
  create: true
  name:
namespace: f5-cis
args:
  bigip_url: X.X.X.X
  bigip_partition: kubernetes
  log_level: DEBUG
  pool_member_type: auto
  insecure: true
  as3_validation: true
  custom_resource_mode: true
  log-as3-response: true
  load-balancer-class: f5
  manage-load-balancer-class-only: true
  namespaces: [default, test, linkerd-viz, ingress-nginx, f5-nginx]
  # verify-interval: 35
image:
  user: f5networks
  repo: k8s-bigip-ctlr
  pullPolicy: Always
nodeSelector: {}
tolerations: []
livenessProbe: {}
readinessProbe: {}
resources: {}
version: latest

3. F5 CIS without Ingress/Gateway

Without an Ingress the F5 configuration is actually much simpler, as you just need to create a NodePort service and the VirtualServer CR. As you can see below, the health monitor marks the control node and the worker node that do not have a pod from "hello-world-app-new-node" as down, as shown in the F5 picture below. Sending traffic without Ingresses or Gateways removes one extra hop and sub-optimal traffic patterns: when the Ingress or Gateway is in deployment mode, for example, there could be 20 nodes and only 2 ingress/gateway pods on 1 node each, so traffic would need to go to only those 2 nodes to enter the cluster.

apiVersion: v1
kind: Service
metadata:
  name: hello-world-app-new-node
  labels:
    app: hello-world-app-new-node
spec:
  externalTrafficPolicy: Local
  ports:
  - name: http
    protocol: TCP
    port: 8080
    targetPort: 8080
  selector:
    app: hello-world-app-new
  type: NodePort
---
apiVersion: "cis.f5.com/v1"
kind: VirtualServer
metadata:
  name: vs-hello-new
  namespace: default
  labels:
    f5cr: "true"
spec:
  virtualServerAddress: "192.168.1.71"
  virtualServerHTTPPort: 80
  host: www.example.com
  hostGroup: "new"
  snat: auto
  pools:
  - monitor:
      interval: 10
      recv: ""
      send: "GET /"
      timeout: 31
      type: http
    path: /
    service: hello-world-app-new-node
    servicePort: 8080

For Istio and Linkerd integration, an iRule could be needed to send custom ALPN extensions to the backend pods that now have a sidecar.
I suggest seeing my article on Medium for more information: https://medium.com/@nikoolayy1/connecting-kubernetes-k8s-cluster-to-external-router-using-bgp-with-calico-cni-and-nginx-ingress-2c45ebe493a1

Keep in mind that for the new options with Ambient mesh (sidecarless), CIS without an Ingress will not work, as F5 does not speak the HBONE (HTTP-Based Overlay Network Environment) protocol that is sent in the HTTP CONNECT tunnel to inform the zTunnel (the layer 3/4 proxy that starts or terminates the mTLS) about the real source identity (SPIFFE and SPIRE), which may not be the same as the one in the CN/SAN of the client SSL cert. Maybe in the future there could be an option, based on a CRD, to provide the IP address of an external device like F5 and have the zTunnel proxy terminate the TLS/SSL (the waypoint layer 7 proxy, usually Envoy, is not needed in this case as F5 will do the HTTP processing) and send traffic to the pod, but for now I see no way to make F5 work directly with Ambient mesh. If the zTunnel takes the identity from the client cert CN/SAN, F5 will not even have to speak HBONE.

4. F5 CIS with Ingress

Why might we need an ingress just as a gateway into the k8s cluster, you may ask? Nowadays a service mesh like Linkerd, Istio, or F5 Aspen Mesh is often used and the pods talk to each other with mTLS handled by the sidecars, and an Ingress, as shown in https://linkerd.io/2-edge/tasks/using-ingress/, is an easy way for the client side to be HTTPS while the server side is the service mesh mTLS. Even ambient mesh works with Ingresses, as it captures traffic after them. From my tests it is possible for F5 to talk to Linkerd-injected pods, for example, but it is hard! I have described this in more detail at https://medium.com/@nikoolayy1/connecting-kubernetes-k8s-cluster-to-external-router-using-bgp-with-calico-cni-and-nginx-ingress-2c45ebe493a1

Unfortunately, when there is an ingress things are much more complex! F5 has an integration called "IngressLink", but as I recently found out, it is for when BIG-IP does only layer 3/4 load balancing, the NGINX Ingress Controller actually does the decryption, and the AppProtect WAF is on the NGINX as well (see "F5 CIS IngressLink attaching WAF policy on the big-ip through the CRD ?" on DevCentral). I wish F5 would make an integration like "IngressLink" but the reverse, where each node has an nginx ingress (this can be done with a daemonset and not a deployment on k8s) and the Nginx Ingress is the layer 3/4 hop, as the Nginx VirtualServer CRD supports this, and it would just let F5 into the k8s cluster. Below is how this can currently be done. I have created a TransportServer but it is not used, as it does not at the moment support the option "use-cluster-ip" set to true so that Nginx does not bypass the service and go directly to the endpoints; bypassing the service would cause nodes that have an nginx ingress pod but no application pod to send the traffic to other nodes, and we do not want that, as it adds one more layer of load balancing latency and performance impact. The gateway is shared, as you can have a different gateway per namespace or a shared one, like the Ingress.
apiVersion: v1
kind: Service
metadata:
  name: hello-world-app-new-cluster
  labels:
    app: hello-world-app-new-cluster
spec:
  internalTrafficPolicy: Local
  ports:
  - name: http
    protocol: TCP
    port: 8080
    targetPort: 8080
  selector:
    app: hello-world-app-new
  type: ClusterIP
---
apiVersion: k8s.nginx.org/v1
kind: TransportServer
metadata:
  name: nginx-tcp
  annotations:
    nginx.org/use-cluster-ip: "true"
spec:
  listener:
    name: nginx-tcp
    protocol: TCP
  upstreams:
  - name: nginx-tcp
    service: hello-world-app-new-cluster
    port: 8080
  action:
    pass: nginx-tcp
---
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: nginx-http
spec:
  host: "app.example.com"
  upstreams:
  - name: webapp
    service: hello-world-app-new-cluster
    port: 8080
    use-cluster-ip: true
  routes:
  - path: /
    action:
      pass: webapp

The second part of the configuration is to expose the Ingress to BIG-IP using CIS.

---
apiVersion: v1
kind: Service
metadata:
  name: f5-nginx-ingress-controller
  namespace: f5-nginx
  labels:
    app.kubernetes.io/name: nginx-ingress
spec:
  externalTrafficPolicy: Local
  type: NodePort
  selector:
    app.kubernetes.io/name: nginx-ingress
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: http
---
apiVersion: "cis.f5.com/v1"
kind: VirtualServer
metadata:
  name: vs-hello-ingress
  namespace: f5-nginx
  labels:
    f5cr: "true"
spec:
  virtualServerAddress: "192.168.1.81"
  virtualServerHTTPPort: 80
  snat: auto
  pools:
  - monitor:
      interval: 10
      recv: "200"
      send: "GET / HTTP/1.1\r\nHost:app.example.com\r\nConnection: close\r\n\r\n"
      timeout: 31
      type: http
    path: /
    service: f5-nginx-ingress-controller
    servicePort: 80

Only the nodes that have a pod will answer the health monitor. Hopefully F5 can make an integration and CRD that makes this configuration simpler, like "IngressLink", and add the option "use-cluster-ip" to the TransportServer, as Nginx does not need to see the HTTP traffic at all. This is on my wish list for this year 😁 Also, if AS3 could reference an existing group of nodes, just with different ports, this could help: CIS would need to push the AS3 declaration of nodes only one time, and then the different VirtualServers could reference it with different ports, which would make the AS3 REST-API traffic much smaller.

5. F5 CIS with Gateway fabric

This does not work at the moment, as Gateway Fabric unfortunately does not support the "use-cluster-ip" option. The idea is to deploy the Gateway Fabric as a daemonset and inject it with a sidecar, or even without one; this will work with ambient meshes. As the k8s world is moving away from the Ingress, this will be a good option. Gateway Fabric natively supports TCP and UDP traffic and even TLS traffic that is not HTTPS, and by exposing the Gateway Fabric with a ClusterIP or NodePort service, then with different hostnames the Gateway Fabric will select the correct route to send the traffic to! A sketch of the F5-facing exposure follows below, before the install steps.
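To make the "expose the Gateway data plane to BIG-IP the same way as the Ingress" idea concrete, here is a hedged sketch of the F5-facing side only, mirroring the NodePort pattern from section 4. The label selector, names, and VIP address are placeholders (the actual NGF data-plane pod labels depend on the chart version), and the CIS TransportServer is used so BIG-IP simply passes TCP/443 through to the Gateway's HTTPS listener; this is an assumption of how it would be wired, not a tested configuration.

apiVersion: v1
kind: Service
metadata:
  name: ngf-dataplane-nodeport
  namespace: nginx-gateway
spec:
  # only nodes that actually run a Gateway data-plane pod answer the BIG-IP monitor
  externalTrafficPolicy: Local
  type: NodePort
  selector:
    app.kubernetes.io/name: nginx-gateway-fabric   # placeholder label, check your NGF pods
  ports:
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
---
apiVersion: "cis.f5.com/v1"
kind: TransportServer
metadata:
  name: ts-gateway-fabric
  namespace: nginx-gateway
  labels:
    f5cr: "true"
spec:
  virtualServerAddress: "192.168.1.82"   # placeholder VIP
  virtualServerPort: 443
  mode: standard
  snat: auto
  pool:
    service: ngf-dataplane-nodeport
    servicePort: 443
    monitor:
      type: tcp
      interval: 10
      timeout: 31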
helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --create-namespace -n nginx-gateway -f values-gateway.yaml

cat values-gateway.yaml
nginx:
  # Run the data plane per-node
  kind: daemonSet
  # How the data plane gets exposed when you create a Gateway
  service:
    type: NodePort # or LoadBalancer

# (optional) if you're using Gateway API experimental channel features
nginxGateway:
  gwAPIExperimentalFeatures:
    enable: true

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gw
  namespace: nginx-gateway
spec:
  gatewayClassName: nginx
  listeners:
  - name: https
    port: 443
    protocol: HTTPS
    tls:
      mode: Terminate
      certificateRefs:
      - kind: Secret
        name: wildcard-tls
    allowedRoutes:
      namespaces:
        from: All
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
  namespace: app
spec:
  parentRefs:
  - name: shared-gw
    namespace: nginx-gateway
  hostnames:
  - app.example.com
  rules:
  - backendRefs:
    - name: app-svc
      port: 8080

F5 NGINX Gateway Fabric is evolving really fast from what I see, so hopefully we will see the features I mentioned soon, and you can always open a GitHub issue. The documentation is at https://docs.nginx.com/nginx-gateway-fabric and, as this uses k8s CRDs, the full options can be seen at "TLS - Kubernetes Gateway API".

6. Summary

With the release of TMOS 21, F5 now supports many more health monitors and pool members, so this way of deploying CIS with NodePort services may offer benefits with TMOS 21.1, which will be the stable version, as shown in https://techdocs.f5.com/en-us/bigip-21-0-0/big-ip-release-notes/big-ip-new-features.html With auto mode some services can still be directly exposed to BIG-IP, as CIS config changes usually remove a pool member pod faster than BIG-IP health monitors can mark a node as down. The new version of CIS, which will be CIS Advanced, may take care of the concerns about hitting a bug or a not-well-validated configuration that could bring the control channel down, and TMOS 21.1 may also handle AS3 config changes better with fewer CPU/memory issues, so in the future there may be no need for traffic policies, NodePort mode, and k8s services of this type. For ambient mesh, my example with Ingress and Gateway seems to be the only option for direct communication at the moment. We will see what the future holds!
VIP in https that redirect to another vip in https

Hi, I have a VIP in HTTPS with a certificate and an LTM policy attached. In the policy, if the path is /prova, I am trying to redirect to another VIP in HTTPS, but this doesn't work. Usually I redirect calls only to VIPs in HTTP. Is there a solution to use all the VIPs in HTTPS? Thanks
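A hedged sketch of the VIP-targeting-VIP approach that is sometimes used for this, in case it helps frame the question: the virtual server name is a placeholder, and it assumes the second VIP is meant to receive the already-decrypted traffic (so it would not have its own client-ssl profile), which may not match this setup.

when HTTP_REQUEST {
    # send /prova traffic to a second virtual server instead of issuing an HTTP redirect
    # "vs_second_app" is a placeholder name, not from the original post
    if { [HTTP::path] starts_with "/prova" } {
        virtual vs_second_app
    }
}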