Terraform AS3 code for GTM Only
Hello All, I am really suffering here :( I have been looking for GTM-only code in AS3 form; a simple example with hardcoded values would also work. I have been through the documentation and couldn't find this exact use case. We are doing a POC where VMs are directly added to GTM and there is no LTM component at all. I can't post my LTM + GTM code as it's at the office. I would really appreciate any help and guidance here. Any simple working snippet using only AS3, please.
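From memory, this is roughly the shape I have been experimenting with, stripped down to hardcoded values. To be clear about what is assumed: the GSLB_* classes come from the AS3 schema, but every name, address, and version below is a placeholder I haven't been able to validate, and the same JSON body could be handed to Terraform's bigip_as3 resource instead of curl:

```bash
# Hedged sketch of a GTM-only AS3 declaration (no LTM objects), POSTed
# straight to the AS3 endpoint. All names/IPs are placeholders; AS3
# generally expects GSLB objects in the Common tenant's Shared application.
curl -sk -u admin:admin -H "Content-Type: application/json" \
  -X POST https://bigip-mgmt/mgmt/shared/appsvcs/declare -d @- <<'EOF'
{
  "class": "ADC",
  "schemaVersion": "3.50.0",
  "Common": {
    "class": "Tenant",
    "Shared": {
      "class": "Application",
      "template": "shared",
      "poc_dc": { "class": "GSLB_Data_Center" },
      "poc_server": {
        "class": "GSLB_Server",
        "dataCenter": { "use": "poc_dc" },
        "devices": [ { "address": "192.0.2.10" } ],
        "virtualServers": [ { "address": "192.0.2.10", "port": 443 } ]
      },
      "poc_pool": {
        "class": "GSLB_Pool",
        "resourceRecordType": "A",
        "members": [
          { "server": { "use": "poc_server" }, "virtualServer": "0" }
        ]
      },
      "poc_wideip": {
        "class": "GSLB_Domain",
        "domainName": "app.example.com",
        "resourceRecordType": "A",
        "pools": [ { "use": "poc_pool" } ]
      }
    }
  }
}
EOF
```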
BIG-IP device fails to install node-inspector

Hi all, when I followed the steps in 'Steps to Setup Node-Inspector on BIG-IP' and executed the following command, an error occurred.

Command:

```
[root@bigip1:Active:Standalone] ~ # npm install -g node-inspector@0.12.8
```

Errors:

```
npm ERR! Linux 3.10.0-862.14.4.el7.ve.x86_64
npm ERR! argv "/usr/bin/node" "/usr/bin/.npm__" "install" "-g" "node-inspector@0.12.8"
npm ERR! node v6.9.1
npm ERR! npm v3.10.8
npm ERR! path /usr/lib/node_modules
npm ERR! code EROFS
npm ERR! errno -30
npm ERR! syscall access
npm ERR! rofs EROFS: read-only file system, access '/usr/lib/node_modules'
npm ERR! rofs This is most likely not a problem with npm itself
npm ERR! rofs and is related to the file system being read-only.
npm ERR! rofs
npm ERR! rofs Often virtualized file systems, or other file systems
npm ERR! rofs that don't support symlinks, give this error.
npm ERR! Please include the following file with any support request:
npm ERR!     /root/npm-debug.log
```

Logs:

```
[root@bigip1:Active:Standalone] ~ # tail -30 /root/npm-debug.log
7616 silly idealTree |   `-- lodash@3.10.1
7616 silly idealTree +-- xmldom@0.1.31
7616 silly idealTree +-- xtend@4.0.2
7616 silly idealTree +-- y18n@3.2.2
7616 silly idealTree `-- yargs@3.32.0
7617 silly generateActionsToTake Starting
7618 silly install generateActionsToTake
7619 warn checkPermissions Missing write access to /usr/lib/node_modules
7620 silly rollbackFailedOptional Starting
7621 silly rollbackFailedOptional Finishing
7622 silly runTopLevelLifecycles Finishing
7623 silly install printInstalled
7624 verbose stack Error: EROFS: read-only file system, access '/usr/lib/node_modules'
7624 verbose stack     at Error (native)
7625 verbose cwd /root
7626 error Linux 3.10.0-862.14.4.el7.ve.x86_64
7627 error argv "/usr/bin/node" "/usr/bin/.npm__" "install" "-g" "node-inspector@0.12.8"
7628 error node v6.9.1
7629 error npm v3.10.8
7630 error path /usr/lib/node_modules
7631 error code EROFS
7632 error errno -30
7633 error syscall access
7634 error rofs EROFS: read-only file system, access '/usr/lib/node_modules'
7635 error rofs This is most likely not a problem with npm itself
7635 error rofs and is related to the file system being read-only.
7635 error rofs
7635 error rofs Often virtualized file systems, or other file systems
7635 error rofs that don't support symlinks, give this error.
7636 verbose exit [ -30, true ]
```

This looks like the file system being mounted read-only rather than a simple directory permission issue, and I can't just change read/write permissions on the F5 device. How should this be resolved?

Reference: f5-appsvcs-extension/contributing/node_inspector_profiling_as3.md at v3.54.2 · F5Networks/f5-appsvcs-extension
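For anyone hitting the same wall, this is the direction I've been experimenting with. Both options are unverified assumptions on my side, and remounting /usr is not a supported operation on a production BIG-IP:

```bash
# Option 1: temporarily remount /usr read-write for the install,
# then put it back (unsupported; lab use only).
mount -o remount,rw /usr
npm install -g node-inspector@0.12.8
mount -o remount,ro /usr

# Option 2: leave /usr read-only and install into a writable prefix
# (/shared persists across reboots on BIG-IP; the path is an assumption).
npm install -g --prefix=/shared/npm node-inspector@0.12.8
export PATH="/shared/npm/bin:$PATH"
```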
Encrypted UCS Backup with REST-API

Because it seems this is documented nowhere: create an encrypted F5 backup with the REST API, including private keys. This script creates the task, starts it, and gets its status.

```bash
#!/usr/bin/env bash

CURL_OPTS=("--fail-with-body" "--show-error" "-s" "-k" "-u" "user:pass" \
    "-H" "Content-Type: application/json" "-H" "Accept: application/json, */*")

# Create task and get id
TASK_ID=$(jq -n --arg name /var/local/ucs/test.ucs \
    --arg passphrase "testpw" \
    '{
        "command": "save",
        "name": $name,
        "options": [
            { "passphrase": $passphrase }
        ]
    }' \
    | curl "${CURL_OPTS[@]}" -X POST -d @- https://f5-lab/mgmt/tm/task/sys/ucs \
    | jq -r "._taskId")

# Start task
jq -n '{ "_taskState": "VALIDATING" }' \
    | curl "${CURL_OPTS[@]}" -X PUT -d @- "https://f5-lab/mgmt/tm/task/sys/ucs/$TASK_ID"

# Get task status
curl "${CURL_OPTS[@]}" --retry 5 --retry-all-errors --retry-delay 10 \
    "https://f5-lab/mgmt/tm/task/sys/ucs/$TASK_ID" \
    | jq -r "._taskState"
```

Reference was https://my.f5.com/manage/s/article/K000138875, and the passphrase option was found by trial and error.
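One follow-up worth noting: the script saves and monitors the archive on the box but doesn't pull it off. Plain scp is the simple path; as I understand it, the iControl REST file-transfer download endpoint also works but requires 1 MB ranged requests, which I'm leaving out here:

```bash
# Fetch the finished archive off-box once the task reports COMPLETED.
scp user@f5-lab:/var/local/ucs/test.ucs .
```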
Getting compile error when enabling NGINX App Protect

I'm trying to install NGINX Plus with App Protect, but when I try to enable the app_protect module after installing it, I get the following error:

```
nginx: [emerg] APP_PROTECT config_set_id 1752649466-871-149162 not found within 45 seconds
nginx: [emerg] APP_PROTECT fstat() "/opt/app_protect/config/compile_error_msg.json" failed (2: No such file or directory)
```

I cannot start the nginx service. Any idea about the issue?
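For context, this is the triage I've attempted so far. The paths below are assumptions based on the default App Protect install layout, so treat them as hints rather than a fix:

```bash
# 1. Confirm the module is loaded in the main context of nginx.conf
grep load_module /etc/nginx/nginx.conf
# expecting something like (exact .so name can differ by version):
# load_module modules/ngx_http_app_protect_module.so;

# 2. Look for policy-compiler output and errors
ls -l /opt/app_protect/config/
tail -50 /var/log/app_protect/*.log 2>/dev/null

# 3. Re-run the config test to trigger a fresh policy compile
nginx -t
```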
JSON Web Key Set Endpoint

Hello, I am using JSON Web Tokens (JWT) for user authentication against the backend servers. These tokens are created by the F5. For the backend servers to validate these JWTs, they need the public key that signs these tokens from the F5. Many IdPs solve this by providing their JWT signing public keys on a well-known endpoint for the backend servers to fetch. My idea would be to bundle the public keys used for JWT signing into a JSON Web Key Set (JWKS) and upload this as an iFile that is hosted on a certain URL on the F5, e.g. https://my-auth.test/.well-known/jwks.json, similar to how jwks_uri is used in https://datatracker.ietf.org/doc/html/rfc8414#section-2. These JWKS have the following format:

```json
{
  "keys": [
    {
      "alg": "RS256",
      "kty": "RSA",
      "use": "sig",
      "x5c": [
        "MIIC+DCCAeCgAwIBAgIJBIGjYW6hFpn2MA0GCSqGSIb3DQEBBQUAMCMxITAfBgNVBAMTGGN1c3RvbWVyLWRlbW9zLmF1dGgwLmNvbTAeFw0xNjExMjIyMjIyMDVaFw0zMDA4MDEyMjIyMDVaMCMxITAfBgNVBAMTGGN1c3RvbWVyLWRlbW9zLmF1dGgwLmNvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMnjZc5bm/eGIHq09N9HKHahM7Y31P0ul+A2wwP4lSpIwFrWHzxw88/7Dwk9QMc+orGXX95R6av4GF+Es/nG3uK45ooMVMa/hYCh0Mtx3gnSuoTavQEkLzCvSwTqVwzZ+5noukWVqJuMKNwjL77GNcPLY7Xy2/skMCT5bR8UoWaufooQvYq6SyPcRAU4BtdquZRiBT4U5f+4pwNTxSvey7ki50yc1tG49Per/0zA4O6Tlpv8x7Red6m1bCNHt7+Z5nSl3RX/QYyAEUX1a28VcYmR41Osy+o2OUCXYdUAphDaHo4/8rbKTJhlu8jEcc1KoMXAKjgaVZtG/v5ltx6AXY0CAwEAAaMvMC0wDAYDVR0TBAUwAwEB/zAdBgNVHQ4EFgQUQxFG602h1cG+pnyvJoy9pGJJoCswDQYJKoZIhvcNAQEFBQADggEBAGvtCbzGNBUJPLICth3mLsX0Z4z8T8iu4tyoiuAshP/Ry/ZBnFnXmhD8vwgMZ2lTgUWwlrvlgN+fAtYKnwFO2G3BOCFw96Nm8So9sjTda9CCZ3dhoH57F/hVMBB0K6xhklAc0b5ZxUpCIN92v/w+xZoz1XQBHe8ZbRHaP1HpRM4M7DJk2G5cgUCyu3UBvYS41sHvzrxQ3z7vIePRA4WF4bEkfX12gvny0RsPkrbVMXX1Rj9t6V7QXrbPYBAO+43JvDGYawxYVvLhz+BJ45x50GFQmHszfY3BR9TPK8xmMmQwtIvLu1PMttNCs7niCYkSiUv2sc2mlq1i3IashGkkgmo="
      ],
      "n": "yeNlzlub94YgerT030codqEztjfU_S6X4DbDA_iVKkjAWtYfPHDzz_sPCT1Axz6isZdf3lHpq_gYX4Sz-cbe4rjmigxUxr-FgKHQy3HeCdK6hNq9ASQvMK9LBOpXDNn7mei6RZWom4wo3CMvvsY1w8tjtfLb-yQwJPltHxShZq5-ihC9irpLI9xEBTgG12q5lGIFPhTl_7inA1PFK97LuSLnTJzW0bj096v_TMDg7pOWm_zHtF53qbVsI0e3v5nmdKXdFf9BjIARRfVrbxVxiZHjU6zL6jY5QJdh1QCmENoejj_ytspMmGW7yMRxzUqgxcAqOBpVm0b-_mW3HoBdjQ",
      "e": "AQAB",
      "kid": "NjVBRjY5MDlCMUIwNzU4RTA2QzZFMDQ4QzQ2MDAyQjVDNjk1RTM2Qg",
      "x5t": "NjVBRjY5MDlCMUIwNzU4RTA2QzZFMDQ4QzQ2MDAyQjVDNjk1RTM2Qg"
    }
  ]
}
```

Is there a way to automatically create such a JWKS endpoint or export the signing public keys in the JWKS format? Currently it seems that the conversion from public key to JWKS and providing it via an iFile to the backend servers needs to be done manually.

Greetings, Yannik
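P.S. The manual conversion I had in mind looks roughly like this. The jwcrypto calls are real, but the file names, iFile object names, and paths are all placeholders of mine to verify:

```bash
# Convert the signing public key (PEM) into a JWKS document.
python3 - <<'EOF' > /var/tmp/jwks.json
# assumes the jwcrypto package is installed (pip install jwcrypto)
import json
from jwcrypto import jwk

with open("/var/tmp/jwt-signing.pub.pem", "rb") as f:
    key = jwk.JWK.from_pem(f.read())
print(json.dumps({"keys": [json.loads(key.export_public())]}))
EOF

# Import it as an iFile so an iRule can serve /.well-known/jwks.json
tmsh create sys file ifile jwks.json source-path file:/var/tmp/jwks.json
tmsh create ltm ifile jwks-ifile file-name jwks.json
```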
I’ve been heads-down building a series of AI step-by-step labs, and this one might be my favorite so far: a practical, cost-savvy “mixture of experts” architectural pattern you can run with n8n and self-hosted models on Ollama. The idea is simple. Not every prompt needs a heavyweight reasoning model. In fact, most don’t. So we put a small, fast model in front to classify the user’s request—coding, reasoning, or something else—and then hand that prompt to the right expert. That way, you keep your spend and latency down, and only bring out the big guns when you really need them. Architecture at a glance: Two hosts: one for your models (Ollama) and one for your n8n app. Keeping these separate helps n8n stay snappy while the model server does the heavy lifting. Docker everywhere, with persistent volumes for both Ollama and n8n so nothing gets lost across restarts. Optional but recommended: NVIDIA GPU on the model host, configured with the NVIDIA Container Toolkit to get the most out of inference. On the model server, we spin up Ollama and pull a small set of targeted models: deepseek-r1:1.5b for the classifier and general chit-chat deepseek-r1:7b for the reasoning agent (this is your “brains-on” model) codellama:latest for coding tasks (Python, JSON, Node.js, iRules, etc.) llama3.2:3b as an alternative generalist On the app server, we run n8n. Inside n8n, the flow starts with the “On Chat Message” trigger. I like to immediately send a test prompt so there’s data available in the node inspector as I build. It makes mapping inputs easier and speeds up debugging. Next up is the Text Classifier node. The trick here is a tight system, prompt and clear categories: Categories: Reasoning and Coding Options: When no clear match → Send to an “Other” branch Optional: You can allow multiple matches if you want the same prompt to hit more than one expert. I’ve tried both approaches. For certain, ambiguous asks, allowing multiple can yield surprisingly strong results. I attach deepseek-r1:1.5b to the classifier. It’s inexpensive and fast, which is exactly what you want for routing. In the System Prompt Template, I tell it: If a prompt explicitly asks for coding help, classify it as Coding If it explicitly asks for reasoning help, classify it as Reasoning Otherwise, pass the original chat input to a Generalist From there, each classifier output connects to its own AI Agent node: Reasoning Agent → deepseek-r1:7b Coding Agent → codellama:latest Generalist Agent (the “Other” branch) → deepseek-r1:1.5b or llama3.2:3b I enable “Retry on Fail” on the classifier and each agent. In my environment (cloud and long-lived connections), a few retries smooth out transient hiccups. It’s not a silver bullet, but it prevents a lot of unnecessary red Xs while you’re iterating. Does this actually save money? If you’re paying per token on hosted models, absolutely. You’re deferring the expensive reasoning calls until a small model decides they’re justified. Even self-hosted, you’ll feel the difference in throughput and latency. CodeLlama crushes most code-related queries without dragging a reasoning model into it. And for general questions—“How do I make this sandwich?”—A small generalist is plenty. A few practical notes from the build: Good inputs help. If you know you’re asking for code, say so. Your classifier and downstream agent will have an easier time. Tuning beats guessing. Spend time on the classifier’s system prompt. Small changes go a long way. Non-determinism is real. You’ll see variance run-to-run. 
Does this actually save money? If you're paying per token on hosted models, absolutely. You're deferring the expensive reasoning calls until a small model decides they're justified. Even self-hosted, you'll feel the difference in throughput and latency. CodeLlama crushes most code-related queries without dragging a reasoning model into it. And for general questions ("How do I make this sandwich?"), a small generalist is plenty.

A few practical notes from the build:
- Good inputs help. If you know you're asking for code, say so. Your classifier and downstream agent will have an easier time.
- Tuning beats guessing. Spend time on the classifier's system prompt. Small changes go a long way.
- Non-determinism is real. You'll see variance run-to-run. Between retries, better prompts, and a firm "when no clear match" path, you can keep that variance sane.
- Bigger models, better answers. If you have the budget or hardware, plugging in something like Claude, GPT, or a higher-parameter DeepSeek will lift quality. The routing pattern stays the same.

Where to take it next:
- Wire this to Slack so an engineering channel can drop prompts and get routed answers in place.
- Add more "experts" (e.g., a data-analysis agent or an internal knowledge agent) and expand your classifier categories.
- Log token counts and latency per branch so you can actually measure savings and adjust thresholds and models over time.

This is a lab, not production, but the pattern is production-worthy with the right guardrails. Start small, measure, tune, and only scale up the heavy models where you're seeing real business value.

Let me know what you build, especially if you try multi-class routing and send prompts to more than one expert. Some of the combined answers I've seen are pretty great. Here's the lab in our git, if you'd like to try it out for yourself. There's also a video walkthrough if that's more your thing.

Thanks for building along, and I'll see you in the next lab.
ACME DNS RFC-2136 Let's Encrypt certs
I've been pushing on certbot to handle CNAME entries when ordering certs, and have finally given up:

https://github.com/certbot/certbot/issues/6787
https://github.com/certbot/certbot/pull/9970
https://github.com/certbot/certbot/pull/7244

This repo contains scripts that:
- create an ACME account with Let's Encrypt
- use TSIG credentials to talk to BIND (RFC-2136)
- create the TXT record in the correct zone by following CNAME and SOA entries if present
- download the certs
- install the certs on one or more F5s. The F5 credentials require Administrator rights, as Certificate Manager can't upload files.

https://github.com/timriker/certmgr

CNAME records should point to a zone with minimal or no replication and a low TTL, i.e.:

```
_acme-challenge.example.com  CNAME  example.com._tls.example.com
_acme-challenge.example.net  CNAME  example.net._tls.example.com
```

_tls.example.com would have one name server and a TTL of 30 seconds or so, and a TSIG key would be created that only needs update access to _tls.example.com.

Comments welcome. JRahm I'm looking at you. 😎

More info: https://letsencrypt.org/docs/challenge-types/
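For a sense of what the scripts automate under the hood, this is roughly the RFC-2136 update they perform. Key name, algorithm, secret, and record values here are placeholders, not taken from the repo:

```bash
# Publish the ACME challenge via a TSIG key scoped to the _tls zone.
nsupdate -y "hmac-sha256:tls-update-key:BASE64SECRET==" <<'EOF'
server ns1.example.com
zone _tls.example.com
update add example.com._tls.example.com 30 TXT "ACME_CHALLENGE_TOKEN"
send
EOF
```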
F5 iRule Reverse Proxy, rewrite, redirect

Hello everyone, we currently have a scenario where a URL is no longer available and needs to be handled by the F5. The starting point: when https://company.com/tool is accessed, the traffic should be proxied to https://x.x.x.x/tool. Unfortunately, the target website doesn't have an FQDN, so it has to be reached by IP address. Of course, https://company.com/tool should remain in the browser, so a plain redirect won't do; it needs a reverse proxy approach. Is this possible? Could someone provide me an example iRule? THX
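To make the ask concrete, this is the rough shape I'm imagining, in case it helps. It's untested: the pool name is a placeholder, and it assumes a pool containing x.x.x.x:443, a server-ssl profile on the virtual server for re-encryption, and a backend that accepts the original Host header:

```tcl
when HTTP_REQUEST {
    # Proxy, not redirect: the client keeps seeing https://company.com/tool
    if { [string tolower [HTTP::path]] starts_with "/tool" } {
        pool tool_pool
    }
}
```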
Modern Applications-Demystifying Ingress solutions flavors

Introduction

In this article, we explore the different ingress services provided by F5 and how those solutions fit within our environment. With different ingress service flavors, you gain the ability to interact with your microservices at different points, allowing for flexible, secure deployment. The ingress services tools can be summarized into two main categories:

Management plane:
- NGINX One
- BIG-IP CIS

Traffic plane:
- NGINX Ingress Controller / Plus / App Protect / Service Mesh
- BIG-IP Next for Kubernetes
- Cloud Native Functions (CNFs)
- F5 Distributed Cloud kubernetes deployment mode

| Name | Integration type | Licensing | Features |
| --- | --- | --- | --- |
| NGINX One Console | Management Plane | Try for free | Access to different NGINX products: NGINX Plus, NGINX Ingress Controller, NGINX Instance Manager, etc. |
| BIG-IP CIS | Management Plane | Free, needs to integrate with a licensed BIG-IP | Automatically configure performance, routing, and security services on BIG-IP. |
| NGINX OSS | Traffic Plane | Free | Features availability |
| NGINX Ingress Controller | Traffic Plane | Varies based on the deployment | https://www.f5.com/products/nginx/nginx-ingress-controller#introduction |
| BIG-IP Next for Kubernetes (BNK) | Traffic Plane | Paid | Ingress, load balancing, routing, firewall policing |
| BIG-IP Next for Kubernetes Cloud Native Functions (CNFs) | Traffic Plane | Paid | CGNAT, FW, DoS, TLS proxy, DNS, IPS, and more upcoming |
| F5 Distributed Cloud kubernetes deployment mode | Traffic Plane | Part of F5 Distributed Cloud | Use F5 Distributed Cloud to integrate and work with your own K8s environment |

Ingress solutions definitions

In this section we go quickly through the ingress services to understand the concept behind each one, then later move to the use-case comparison.

BIG-IP Next for Kubernetes

Kubernetes' native networking architecture does not inherently support multi-network integration or non-HTTP/HTTPS protocols, creating operational and security challenges for complex deployments. BIG-IP Next for Kubernetes addresses these limitations by centralizing ingress and egress traffic control, aligning with Kubernetes design principles to integrate with existing security frameworks and broader network infrastructure. This reduces operational overhead by consolidating cross-network traffic management into a unified ingress/egress point, eliminating the need for multiple external firewalls that traditionally require isolated configuration. The solution enables zero-trust security models through granular policy enforcement and provides robust threat mitigation, including DDoS protection, by replacing fragmented security measures with a centralized architecture. Additionally, BIG-IP Next supports 5G Core deployments by managing North/South traffic flows in containerized environments, facilitating use cases such as network slicing and multi-access edge computing (MEC). These capabilities enable dynamic resource allocation aligned with application-specific or customer-driven requirements, ensuring scalable, secure connectivity for next-generation 5G consumer and enterprise solutions while maintaining compatibility with existing network and security ecosystems.

Cloud Native Functions (CNFs)

BIG-IP Next for Kubernetes enables advanced networking, traffic management, and security functionalities; CNFs enable additional advanced services. VNFs and CNFs can be consolidated in the S/Gi-LAN or the N6 LAN in 5G networks.
A consolidated approach results in simpler management and operation, reduced operational costs (up to 60% lower TCO), and more opportunities to monetize functions and services. Functions can include DNS, Edge Firewall, DDoS, Policy Enforcer, and more. BIG-IP Next CNFs provide scalable, automated, resilient, manageable, and observable cloud-native functions and applications. They support dynamic elasticity, occupy a smaller footprint with fast restart, and use continuous deployment and automation principles.

NGINX for Kubernetes / NGINX One

NGINX for Kubernetes is a versatile and cloud-native application delivery platform that aligns closely with DevOps and microservices principles. It is built around two primary models:

- NGINX Ingress Controller (OSS and Plus): deployed directly inside Kubernetes clusters, it acts as the primary ingress gateway for HTTP/S, TCP, and UDP traffic. It supports Kubernetes-native CRDs and integrates easily with GitOps pipelines, service meshes (e.g., Istio, Linkerd), and modern observability stacks like Prometheus and OpenTelemetry.
- NGINX One/NGINXaaS: this SaaS-delivered, managed service extends the NGINX experience by offloading the operational overhead, providing scalability, resilience, and simplified security configurations for Kubernetes environments across hybrid and multi-cloud platforms.

NGINX solutions prioritize lightweight deployment, fast performance, and API-driven automation. NGINX Plus variants offer extended features like advanced WAF (NGINX App Protect), JWT authentication, mTLS, session persistence, and detailed application-layer observability.

Some under-the-hood differences: BIG-IP Next for Kubernetes and CNFs use F5's own TMM to perform application delivery and security, while NGINX relies on the kernel for some network-level functions like NAT, IP tables, and routing. So it's a matter of your environment's architecture whether you go with one or both options to enhance your application delivery and security experience.

BIG-IP Container Ingress Services (CIS)

BIG-IP CIS works on the management flow. The CIS service is deployed in the Kubernetes cluster and sends information about created Pods to an integrated BIG-IP external to the Kubernetes environment. This allows LTM pools to be created automatically and traffic to be forwarded based on pool member health. Application teams can focus on microservice development while BIG-IP is updated automatically, allowing for easier configuration management. A rough sketch of what that looks like in practice follows.
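A minimal sketch, assuming the CIS custom resources (cis.f5.com/v1 VirtualServer) from the k8s-bigip-ctlr documentation; the names, address, and namespace are placeholders to adapt:

```bash
# Create a CIS VirtualServer custom resource; the controller watching the
# cluster turns this into the matching virtual server and pool on BIG-IP.
kubectl apply -f - <<'EOF'
apiVersion: cis.f5.com/v1
kind: VirtualServer
metadata:
  name: tool-vs
  namespace: default
spec:
  host: app.example.com
  virtualServerAddress: 192.0.2.50
  pools:
    - path: /
      service: tool-svc
      servicePort: 80
EOF
```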
Use cases categorization

Let's talk in use-case terms to make it more relatable to the field and our day-to-day work.

NGINX One
- Access to NGINX commercial products, support for open source, and the option to add WAF.
- Unified dashboard and APIs to discover and manage your NGINX instances.
- Identify and fix configuration errors quickly and easily with the NGINX One configuration recommendation engine.
- Quickly diagnose bottlenecks and act immediately with real-time performance monitoring across all NGINX instances.
- Enforce global security policies across diverse environments.
- Real-time vulnerability management identifies and addresses CVEs in NGINX instances.
- Visibility into compliance issues across diverse app ecosystems.
- Update groups of NGINX systems simultaneously with a single configuration file change.
- Unified view of your NGINX fleet for collaboration, performance tuning, and troubleshooting.
- Automate manual configuration and updating tasks for security and platform teams.

BIG-IP CIS
- Enable self-service ingress HTTP routing and app services selection by subscribing to events to automatically configure performance, routing, and security services on BIG-IP.
- Integrate with the BIG-IP platform to scale apps for availability and enable app services insertion.
- Integrate with the BIG-IP system and NGINX for ingress load balancing.

BIG-IP Next for Kubernetes
- Supports ingress and egress traffic management and routing for seamless integration with multiple networks.
- Enables support for 4G and 5G protocols that are not supported by Kubernetes, such as Diameter, SIP, GTP, SCTP, and more.
- Enables security services applied at ingress and egress, such as firewalling and DDoS.
- Topology hiding at ingress obscures the internal structure within the cluster.
- As a central point of control, per-subscriber traffic visibility at ingress and egress allows traceability for compliance tracking and billing.
- Support for multi-tenancy and network isolation for AI applications, enabling efficient deployment of multiple users and workloads on a single AI infrastructure.
- Optimize AI factory implementations with BIG-IP Next for Kubernetes on NVIDIA DPUs.

F5 Cloud Native Functions (CNFs)
- Add containerized services, for example Firewall, DDoS, and Intrusion Prevention System (IPS) technology based on F5 BIG-IP AFM.
- Ease IPv6 migration and improve network scalability and security with IPv4 address management.
- Deploy as part of a security strategy.
- Support DNS caching and DNS over HTTPS (DoH).
- Support advanced policy and traffic management use cases.
- Improve QoE and ARPU with tools like traffic classification, video management, and subscriber awareness.

NGINX Ingress Controller
- Provide L4-L7 NGINX services within the Kubernetes cluster.
- Manage user and service identities and authorize access and actions with HTTP Basic authentication, JSON Web Tokens (JWTs), OpenID Connect (OIDC), and role-based access control (RBAC).
- Secure incoming and outgoing communications through end-to-end encryption (SSL/TLS passthrough, TLS termination).
- Collect, monitor, and analyze data through prebuilt integrations with leading ecosystem tools, including OpenTelemetry, Grafana, Prometheus, and Jaeger.
- Easy integration with the Kubernetes Ingress API, Gateway API (experimental support), and Red Hat OpenShift Routes.
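Getting a lab instance of NGINX Ingress Controller running is a short exercise. A hedged sketch using the official Helm chart; the release name and namespace are placeholders:

```bash
# Install NGINX Ingress Controller from the NGINX Helm repository.
helm repo add nginx-stable https://helm.nginx.com/stable
helm repo update
helm install my-ingress nginx-stable/nginx-ingress \
  --namespace nginx-ingress --create-namespace

# Verify the controller pod is up before pointing Ingress resources at it.
kubectl get pods -n nginx-ingress
```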
F5 Distributed Cloud kubernetes deployment mode
- The F5 Distributed Cloud K8s deployment is supported only for Sites running Managed Kubernetes, also known as Physical K8s (PK8s). Deployment of the ingress controller is supported only using Helm.
- The Ingress Controller manages external access to HTTP services in a Kubernetes cluster using the F5 Distributed Cloud Services Platform.
- The ingress controller is a K8s deployment that configures the HTTP Load Balancer using the K8s ingress manifest file.
- The Ingress Controller automates the creation of the load balancer and other required objects, such as the VIP, Layer 7 routes (path-based routing), the advertise policy, and certificates (K8s secrets or automatic custom certificates).

Conclusion

As you can see, the diverse ingress controller tools give you the flexibility to tailor your architecture to organization requirements while maintaining application delivery and security practices across your application ecosystem.

Related Content and Technical demos

NGINX One Console
- Experience the power of F5 NGINX One with feature demos | DevCentral
- Introducing F5 WAF for NGINX with Intuitive GUI in NGINX One Console and NGINX Instance Manager | DevCentral
- F5 NGINX One Console July features | DevCentral
- NGINX One

BIG-IP Container Ingress Services (CIS)
- F5 CIS, TLS Extensions, and troubleshooting
- Use topology labels to reduce cross-AZ ingress traffic with F5 CIS and EKS | DevCentral
- Enable Consistent Application Services for Containers with CIS | DevCentral
- Configuring ExternalDNS for Kubernetes with F5 CIS, LTM and DNS | DevCentral
- My first CRD deployment with CIS | DevCentral
- Overview of F5 BIG-IP Container Ingress Services

NGINX Ingress Controller
- JWT authorization with NGINX Ingress Controller
- Better together - F5 Container Ingress Services and NGINX Plus Ingress Controller Integration | DevCentral
- Integrating Hashicorp Vault with Cert Manager and F5 NGINX Ingress Controller | DevCentral
- Using F5 NGINX Plus as the Ingress Controller within Nutanix Kubernetes Platform (NKP) | DevCentral
- Announcing F5 NGINX Ingress Controller v4.0.0 | DevCentral
- F5 NGINX Ingress Controller

BIG-IP Next for Kubernetes (BNK)
- BIG-IP Next for Kubernetes Nvidia DPU deployment walkthrough | DevCentral
- BIG-IP Next for Kubernetes, addressing today's enterprise challenges | DevCentral
- BIG-IP Next SPK: a Kubernetes native ingress and egress gateway for Telco workloads
- BIG-IP Next for Kubernetes

Cloud Native Functions (CNFs)
- F5 BIG-IP Next CNF solutions suite of Kubernetes native 5G Network Functions
- F5 Cloud-Native Functions For Modern Demands - Part 2
- Deploy F5 Cloud Native Functions in Kubernetes
- From virtual to cloud-native, infrastructure evolution | DevCentral

F5 Distributed Cloud kubernetes deployment mode
- Kubernetes architecture options with F5 Distributed Cloud Services
Logstash pipeline tester

Code is community submitted, community supported, and recognized as 'Use At Your Own Risk'.

Short Description

A tool that makes developing Logstash pipelines much, much easier.

Problem solved by this Code Snippet

Oh. The problem... Have you ever tried to write a Logstash pipeline? Did you suffer hair loss and splitting migraines? So did I. Presenting: Logstash pipeline tester, which gives you a web interface where you can paste raw logs, send them to the included Logstash instance, and see the result directly in the interface. The included Logstash instance is also configured to automatically reload once it detects a config change.

How to use this Code Snippet

TLDR; don't do this, read the manual or check out the video below.

Still here? Ok then! 🙂

1. Install Docker
2. Clone the repo
3. Run these commands in the repo root folder:

```
sudo docker-compose build   # Skip sudo if running Windows
sudo docker compose up      # Skip sudo if running Windows
```

4. Go to http://localhost:8080 on your PC/Mac
5. Pick a pipeline and send data
6. Edit the pipeline
7. Send data
8. Rinse, repeat

Version info

v1.0.27: Dependency updates, jest test retries and more since 1.0.0
https://github.com/epacke/logstash-pipeline-tester/releases/tag/v1.0.29

Video on how to get started: https://youtu.be/Q3IQeXWoqLQ

Please note that I accidentally started the interface on port 3000 in the video while the shipped version uses port 8080. It took me roughly 5 hours and more retakes than I can count to make this video, so that mistake will be preserved for the internet to laugh at. 🙂

The manual: https://loadbalancing.se/2020/03/11/logstash-testing-tool/

Code Snippet Meta Information
- Version: Check GitHub
- Coding Language: NodeJS, TypeScript + React

Full Code Snippet

https://github.com/epacke/logstash-pipeline-tester