Understanding Modern Application Architecture - Part 1
This is part 1 of a series. Here are the other parts:

- Understanding Modern Application Architecture - Part 2
- Understanding Modern Application Architecture - Part 3

Over the past decade, a change has been taking place in how applications are built. As applications become more expansive in capabilities and more critical to how a business operates (or, in many cases, the application is the business itself), a new style of architecture has allowed for increased scalability, portability, resiliency, and agility. To support the goals of a modern application, the surrounding infrastructure has had to evolve as well. Platforms like Kubernetes have played a big role in unlocking the potential of modern applications and represent a new paradigm in how infrastructure is managed and served.

To help our community transition the skill set they've built around monolithic applications, we've put together a series of videos to drive home concepts around modern applications. This article highlights some of the details found within the video series.

In these first three videos, we break down the definition of a modern application. One might think that, by name alone, a modern application is simply an application that is current. But we're actually speaking in comparison to a monolithic application. Monolithic applications are made up of a single piece, or just a few pieces. They are rigid in how they are deployed and fragile in their dependencies. Modern applications instead incorporate microservices. Where a monolithic application might have all functions built into one broad, encompassing service, microservices break the service down into smaller functions that can be worked on separately.

A modern application also incorporates 4 main pillars:

- Scalability ensures that the application can handle the needs of a growing user base, both for surges and for long-term growth.
- Portability ensures that the application can be moved across underlying environments while still maintaining all of its functionality and management-plane capabilities.
- Resiliency ensures that failures within the system go unnoticed or pose minimal disruption to users of the application.
- Agility ensures that the application can accommodate rapid changes, whether to code or to infrastructure.

There are also 6 design principles of a modern application:

- Being agnostic gives the application the freedom to run on any platform.
- Leveraging open source software where it makes sense can often allow you to move quickly with an application, while leaving the option to adopt commercial versions of that software when full support is needed.
- Defining by code allows for more uniformity of configuration and moves away from rigid interfaces that require specialized knowledge.
- Automated CI/CD processes ensure the quick integration and deployment of code so that improvements are constantly happening while any failures are minimized and contained.
- Secure development ensures that application security is integrated into the development process and code is tested thoroughly before being deployed into production.
- Distributed storage and infrastructure ensures that applications are not bound by physical limitations and that components can be located where they make the most sense.

These videos should help set the foundation for what a modern application is. The next videos in the series will start to define the fundamental technical components for the platforms that bring together a modern application.
Continued in Part 2.

Understanding Modern Application Architecture - Part 2
To help our community transfer their skills to handle modern applications, we've released a video series to explain the major points. This article is part 2; here are the other parts:

- Understanding Modern Application Architecture - Part 1
- Understanding Modern Application Architecture - Part 3

This next set of videos discusses the platforms and components that make up modern applications.

In this video, we review containers. These have become a key building block of microservices. They help achieve application portability by neatly packaging up everything needed to bring up an application within a container runtime such as Docker. One great example is the f5-demo-httpd container. This small, lightweight container can be downloaded quickly to run a web server. It's incorporated into a lot of F5 demo environments because it is lightweight and can be customized by simply forking the repository and making your own changes.

In this next video, we talk about Kubernetes (or k8s for short). While container runtimes like Docker can work individually on a server, the Kubernetes project has brought the concept into a form that can be scaled out. Worker nodes, where containers run, can be brought together into clusters. Commands can be issued to a master (control plane) node via YAML files and take effect across the cluster. Containers can be scheduled efficiently across a cluster, which is managed as a single unit.

In this next video, we break down the Kubernetes API. The Kubernetes API is the main interface to a k8s cluster. While there are GUI solutions that can be added to a k8s cluster, they are still interfacing with the API, so it is important to understand what the API is capable of and what it is doing with the cluster. The main way to issue commands to the API is through YAML files and the kubectl command (a minimal sketch appears at the end of this article). From there, the API server interacts with the other parts of the cluster to perform operations.

In this next video, we discuss securing a Kubernetes cluster. There are a number of attack vectors that need to be understood, so we review them along with some of the actions that can be taken to improve security against them.

In this next video, we go over the Ingress Controller. An Ingress Controller is one of the main ways that traffic is brought from outside of the cluster into a pod. This role is of particular interest to F5 customers, as they can use NGINX, NGINX Plus, or BIG-IP to play this strategic role within a Kubernetes cluster.

In this next video, we talk about microservices. As monolithic applications are decomposed into modern applications, they are broken up into microservices that carry out individual functions of an application. The microservices then communicate with each other in order to deliver the overall application. It's important to understand this service-to-service communication so that you can design application services around it, such as load balancing, routing, visibility, and security.

We hope that you've enjoyed this video series so far. In the next article, we'll be reviewing the components that aid in the management of a Kubernetes platform: Understanding Modern Application Architecture - Part 3
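To make the YAML-and-kubectl workflow concrete, here is a minimal sketch of a Deployment manifest that runs the demo web server mentioned above. The resource names, replica count, and image tag are illustrative assumptions, not something prescribed by the video series.

```yaml
# demo-deployment.yaml - a minimal, illustrative Deployment manifest.
# Names, replica count, and image tag are assumptions for demonstration only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web
  labels:
    app: demo-web
spec:
  replicas: 2                      # the scheduler spreads these pods across worker nodes
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
        - name: httpd
          image: f5devcentral/f5-demo-httpd:latest   # assumed image reference
          ports:
            - containerPort: 80
```

Applying it with `kubectl apply -f demo-deployment.yaml` sends the declaration to the API server, which then works with the rest of the cluster to schedule the pods.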
Understanding Modern Application Architecture - Part 3

In this last article of the series discussing modern application architecture, we will discuss manageability with respect to traffic. As traffic patterns grow and begin to look quite different from those of monolithic applications, different approaches need to be taken in order to maintain the stability of the application.

- Understanding Modern Application Architecture - Part 1
- Understanding Modern Application Architecture - Part 2

In this next video, we discuss the service mesh. As modern applications expand and their communication shifts to microservice-to-microservice, a service mesh can be introduced to provide control, security, and visibility for that traffic. Since individual microservices can be written by different individuals or groups, the service mesh can be the intermediary that allows them to understand what is happening when one piece of code needs to speak to another. At the same time, trust and verification can happen between the microservices to ensure they are talking to what they should be talking to.

In this next video, we discuss sidecar proxies. As mentioned in the service mesh video, the sidecar proxy is a key piece of the mesh implementation. It is responsible for functions such as TLS termination, mutual TLS, and authentication. It can also be used for tracing and other observability. This means these functions don't have to be performed by the microservice itself.

In this final video, we review NGINX as a production-grade Kubernetes solution. While modern applications will adopt open source solutions where possible, these applications can be mission-critical ones that require the highest level of service. As mentioned in the previous videos in this series, there are a number of important pieces of a Kubernetes cluster that can be augmented or replaced by enhanced services. NGINX can perform as an enhanced Ingress Controller, giving a high level of control as well as performance for inbound traffic to the cluster (a minimal Ingress sketch appears at the end of this article). NGINX with App Protect can also provide finer-grained web application security for the inbound web-based components of the application. And finally, NGINX Service Mesh can help with microservice-to-microservice control, security, and visibility, offloading that function from the microservice itself.

We hope that this video series has helped shed some light for those who are curious about modern application architecture. If you have questions, don't hesitate to ask in our Technical Forums!
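As a companion to the Ingress Controller discussion, here is a minimal sketch of an Ingress resource that maps external traffic to a cluster service. The hostname, service name, and the `nginx` ingress class are illustrative assumptions; it presumes an NGINX Ingress Controller is installed and registered under that class.

```yaml
# demo-ingress.yaml - illustrative only; hostname, service name, and the
# "nginx" ingress class are assumptions about the target cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-web
spec:
  ingressClassName: nginx          # assumed NGINX Ingress Controller class
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-web     # the Service fronting the demo pods
                port:
                  number: 80
```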
Announcing F5 NGINX Gateway Fabric 2.0.0 with a New Distributed Architecture

Today, F5 NGINX Gateway Fabric is reaching an important milestone. The release of NGINX Gateway Fabric 2.0 marks our transition to a distributed architecture that is highly scalable, secure, and flexible. This architecture also enables more advanced capabilities and prepares us for integration with F5 NGINX One for observability and fleet management. Here are the big highlights for this major release:

- Control and data plane separation
- Multiple Gateway support
- HTTP request mirroring
- Listener isolation
- As always, bug fixes

Data and Control Plane Separation

Before 2.0, NGINX Gateway Fabric contained both the "control plane," the container responsible for reading and applying configuration, and the "data plane," the NGINX container that all traffic flows through, within the same pod. This meant that if you scaled the NGINX Gateway Fabric pod, you were forced to scale the data plane and control plane together. Now, with our distributed architecture, the data plane deploys in a separate pod from the control plane. This enables more flexible use cases such as multiple Gateways per control plane, a highly available control plane, and directly scaling NGINX replicas per Gateway, making NGINX Gateway Fabric much more resource-efficient at scale.

This change also improves NGINX Gateway Fabric's security posture by limiting how much is accessible if a single pod is compromised. While NGINX Gateway Fabric and NGINX have always been secure by default, this architecture enables a two-tier defense against potential attacks: a security intrusion on the control plane has no way to directly access traffic, and the data plane cannot directly access control plane interfaces.

Multiple Gateways

NGINX Gateway Fabric has historically been limited to a single Gateway object. We first chose this architecture for the short term because Routes are a good way to separate routing and access control for developers and cluster operators. As our product matures, we know that more advanced use cases require the isolation of infrastructure for separate teams, customers, or SLAs. Our new architecture enables exactly that: in NGINX Gateway Fabric 2.0, every time you define a Gateway object, the control plane provisions an NGINX deployment, which can then be independently scaled by adding more replicas if needed. With this pattern, it's easy to create a second, third, or many more Gateways. You can give each customer or team in your cluster their own Gateway so they each own their own infrastructure, or you can separate infrastructure to apply separate policies based on hostname or product group. Each Gateway can also be scaled independently for varying levels of traffic, all managed by a single control plane.

HTTP Request Mirrors

Request mirrors are useful when you want to analyze traffic for security issues, or to test a new version of an application with production traffic without impacting current users. All responses from a mirrored location are ignored, so request mirrors serve as another useful tool for testing and analysis. Mirrors are added using a filter on the route rule and affect only the traffic that rule selects; if you need more, you can add as many as you need. A sketch of a Gateway and a mirrored route follows below.
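Here is a hedged sketch of what a dedicated Gateway plus an HTTPRoute with a RequestMirror filter might look like. The Gateway name, hostnames, and backend service names are illustrative assumptions, not taken from the release notes.

```yaml
# team-a.yaml - illustrative only; names, hostnames, and services are assumptions.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: team-a-gateway       # the control plane provisions an NGINX deployment per Gateway
spec:
  gatewayClassName: nginx
  listeners:
    - name: http
      port: 80
      protocol: HTTP
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
    - name: team-a-gateway
  hostnames:
    - "app.example.com"
  rules:
    - filters:
        - type: RequestMirror
          requestMirror:
            backendRef:
              name: app-v2   # mirror target; its responses are ignored
              port: 80
      backendRefs:
        - name: app-v1       # live traffic continues to be served from here
          port: 80
```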
Listener Isolation

We also included the concept of listener isolation in this release to make advanced configurations more intuitive to work with and to guard against accidental misconfiguration. Listener isolation means that any request should match at most one Listener within a Gateway. The Gateway API gives this example: if Listeners are defined for "foo.example.com" and "*.example.com", a request to "foo.example.com" SHOULD only be routed using routes attached to the "foo.example.com" Listener and NOT the "*.example.com" Listener. The alternative is that a request may match multiple Listeners unintentionally, which can become a problem when different policies are applied to the other Listeners, or the request is routed to the wrong location. Now, NGINX Gateway Fabric ensures that a Route will only match the most specific Listener on the Gateway it is attached to, as the sketch below illustrates.
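A minimal sketch of the two-Listener scenario from the spec example above; the Gateway name and hostnames are illustrative assumptions.

```yaml
# listener-isolation.yaml - illustrative sketch of the spec example above.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
spec:
  gatewayClassName: nginx
  listeners:
    - name: foo                     # most specific match wins: requests to
      hostname: "foo.example.com"   # foo.example.com use only routes attached
      port: 80                      # to this Listener
      protocol: HTTP
    - name: wildcard
      hostname: "*.example.com"     # all other *.example.com requests land here
      port: 80
      protocol: HTTP
```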
What's Next

Our next release will focus primarily on delivering more extended features from the Gateway API, in an effort to support all extended features at the extended support level. If you are unfamiliar, the Gateway API has three separate support levels for every feature in the specification:

- Core: Portable features that all implementations should support.
- Extended: Portable features that are not universally supported across implementations. Implementations that support a feature should share the same behavior and semantics. Some extended features may eventually move to core.
- Implementation-specific: Features that are not portable and are vendor-specific. These features are far less defined in API and schema.

You can see the full description of support levels here. So far, NGINX Gateway Fabric has supported all core features since 1.0 and has slowly been adding extended features over the past few releases. For 2.1, we will have a greater focus on these features to enable more advanced use cases. We plan to have all extended features available in NGINX Gateway Fabric by 2.2.

For F5 NGINX Plus users, 2.1 will bring connection to the F5 NGINX One Console with basic fleet management capabilities. We plan on expanding these capabilities to include observability for all NGINX deployments in your organization in the near future. 2.1 will also include support for F5 NGINX App Protect Web Application Firewall to protect your applications from incoming malicious traffic. Specifically, NGINX Gateway Fabric will implement support for v5, with a new configuration experience coming later this year.

Resources

For the complete changelog for NGINX Gateway Fabric 2.0.0, see the Release Notes. To try NGINX Gateway Fabric for Kubernetes with NGINX Plus, start your free 30-day trial today or contact us to discuss your use cases. If you would like to get involved, see what is coming next, or see the source code for NGINX Gateway Fabric, check out our repository on GitHub! We have weekly community meetings on Tuesdays at 9:30AM Pacific/12:30PM Eastern/5:30PM GMT. Meeting links, updates, agenda, and notes are on the NGINX Gateway Fabric Meeting Calendar. Links are also always available from our GitHub readme.

F5 and MinIO: AI Data Delivery for the hybrid enterprise

Introduction

Modern application architectures demand solutions that not only handle exponential data growth but also enable innovation and drive business results. As AI/ML workloads take center stage in industries ranging from healthcare to finance, application designers are increasingly turning to S3-compliant object storage because of its ability to provide scalable management of unstructured data. Whether it's for ingesting massive datasets, running iterative training models, or delivering high-throughput predictions, S3-compatible storage systems play a foundational role in supporting these advanced data pipelines.

MinIO has emerged as a leader in this space, offering high-performance, S3-compatible object storage built for modern-scale applications. MinIO is designed to work easily with AI/ML workflows. It is lightweight and cloud-native, making it a good choice for businesses building infrastructure to support innovation. From storing petabyte-scale datasets to providing the performance needed for real-time AI pipelines, MinIO delivers the reliability and speed required for data-intensive work.

While S3-compliant storage like MinIO forms the backbone of data workflows, robust traffic management and application delivery capabilities are essential for ensuring continuous availability, secure pipelines, and performance optimization. F5 BIG-IP, with its advanced suite of traffic routing, load balancing, and security tools, complements MinIO by enabling organizations to address these challenges. Together, F5 and MinIO create a resilient, scalable architecture where applications and AI/ML systems can thrive. This solution empowers businesses to:

- Build secure and highly available storage pipelines for demanding workloads.
- Ensure fast and reliable delivery of data, even at exascale.
- Simplify and optimize their infrastructure to drive innovation faster.

In this article, we'll explore how to leverage F5 BIG-IP and MinIO AIStor clusters to enable results-driven application design. Starting with an architecture overview, we'll cover practical steps to set up BIG-IP to enhance MinIO's functionality. Along the way, we'll highlight how this combination supports modern AI/ML workflows and other business-critical applications.

Architecture Overview

To validate the F5 BIG-IP and MinIO AIStor solution effectively, this setup incorporates a functional testing environment that simulates real-world behaviors while remaining controlled and repeatable. MinIO's warp benchmarking tool is used to orchestrate and run tests across the architecture. The benchmarking tooling ensures that the functional properties of the stack (traffic management, application-layer security, and object storage performance) are thoroughly evaluated in a way that is reproducible and credible. The environment consists of:

- An F5 VELOS chassis with BX110 blades, running BIG-IP instances configured using F5's AS3 extension, for traffic management and security policies using LTM (Local Traffic Manager) and ASM (Application Security Manager).
- A MinIO AIStor cluster consisting of four bare-metal nodes equipped with high-performance NVMe drives, bringing the environment close to real-world customer deployments.
- Three benchmarking nodes for orchestrating and running tests: one orchestration node directs the worker nodes with benchmark test configuration and aggregates test results, and two worker nodes run warp in client mode to simulate workloads against the MinIO cluster.
Warp Benchmarking Tool

The warp benchmarking tool (https://github.com/minio/warp) from MinIO is designed to simulate real-world S3 workloads while generating measurable metrics about the testing environment. In this architecture:

- A central orchestration node coordinates the benchmarking process, ensuring that each test is consistent and runs under comparable conditions.
- Two worker nodes running warp in client mode send simulated traffic to the F5 BIG-IP virtual server. These nodes act as workload generators, allowing for the simulation of read-heavy, write-heavy, or mixed object storage exercises.
- Warp's distributed design enables the scaling of workload generation, ensuring that the MinIO backend is tested under realistic conditions.

This three-node configuration ensures that benchmarking tests are distributed effectively. It also provides insights into object storage behavior, traffic management, and the impact of security enforcement in the environment.

Traffic Management and Security with BIG-IP

At the center of this setup is the F5 VELOS chassis, running BIG-IP instances configured to handle both traffic management (LTM) and application-layer security (ASM). The addition of ASM (Application Security Manager) ensures that the MinIO cluster is protected from malicious or malformed requests while maintaining uninterrupted service for legitimate traffic. Key functions of BIG-IP in this architecture include:

- Load balancing: Avoid overloading specific MinIO nodes by using adaptive algorithms, ensuring even traffic distribution and preventing bottlenecks and hotspots. Advanced load-balancing methods like least connections, dynamic ratio, and least response time intelligently account for backend load and performance in real time, ensuring reliable and efficient resource utilization.
- SSL/TLS termination: Terminate SSL/TLS traffic to offload encryption workloads from backend clients. Re-encryption is optionally enabled for secure communication to MinIO nodes, depending on performance and security requirements.
- Health monitoring: Continuously monitor the availability and health of the backend MinIO nodes, rerouting traffic away from unhealthy nodes as necessary and restoring it as service recovers.
- Application-layer security: Protect the environment via Web Application Firewall (WAF) policies that block malicious traffic, including injection attacks, malformed API calls, and DDoS-style application-layer threats.

BIG-IP acts as the gateway for all requests coming from S3 clients, ensuring that security, health checks, and traffic policies are all applied before requests reach the MinIO nodes.

Traffic Flow Through the Full Architecture

The test traffic flows through several components in this architecture, with BIG-IP and warp playing vital roles in managing and generating requests, respectively:

Benchmark orchestration: The warp orchestration node initiates tests and distributes workload configurations to the worker nodes; it also aggregates test data results from them. Warp manages benchmarking scenarios, such as read-heavy, write-heavy, or mixed traffic patterns, targeting the MinIO storage cluster.

Simulated traffic from worker nodes: Two worker nodes, running warp in client mode, generate S3-compatible traffic such as object PUT, GET, DELETE, or STAT requests. These requests are transmitted through the BIG-IP virtual server.
The load generation simulates the kind of requests an AI/ML pipeline or data-driven application might send under production conditions.

BIG-IP processing: Requests from the worker nodes are received by BIG-IP, where they are subjected to:

- Traffic control: LTM distributes the traffic among the four MinIO nodes while handling SSL termination and monitoring node health.
- Security controls: ASM WAF policies inspect requests for signs of application-layer threats. Only safe, valid traffic is routed to the MinIO environment.

Environment Configuration

Prerequisites:

- BIG-IP (physical or virtual)
- Hosts for the MinIO cluster, including configured operating systems (and scheduling systems, if selected)
- Hosts for the warp worker nodes and warp orchestration node, including configured operating systems
- All required networking gear to connect the BIG-IP and the nodes
- A copy of the AS3 template at https://github.com/f5businessdevelopment/terraform-kvm-minio/blob/main/as3manualtemplate.json
- A copy of the warp configuration file at https://github.com/minio/warp/blob/master/yml-samples/mixed.yml

Step 1: Set up the MinIO cluster

Follow MinIO's install instructions at https://min.io/docs/minio/linux/index.html. The link is for a Linux deployment, but choose the deployment target that's appropriate for your environment. Record the addresses and ports of the MinIO consoles and APIs configured in this step for use as input to the next steps.

Step 2: Configure F5 BIG-IP for traffic management and security

Following the steps documented at https://github.com/f5businessdevelopment/terraform-kvm-minio/blob/main/MANUALAS3.md and using the template file downloaded from GitHub, create and apply an AS3 declaration to configure your BIG-IP.

Step 3: Deploy and configure MinIO warp for benchmarking

Retrieve API access and secret keys:
- Log into your MinIO cluster and click the 'Access' icon.
- Once in 'Access', click the 'Access Keys' button.
- In 'Access Keys', click the 'Create Access Keys' button and follow the steps to create and record your access and secret key values.

Update the warp configuration file (a sketch of the edited file appears after Step 4):
- Find the access-key and secret-key fields and update the values with those you recorded in the previous step.
- Find the warp-client field and update the value with the addresses of the worker nodes.
- Find the host field and update the value with the address and port of the VIP listener on the BIG-IP.

Step 4: Verify and monitor the environment

Start warp on each of the worker nodes with the command: warp client

Once the warp clients respond that they are listening, start the benchmark test on the orchestrator node: warp run test.yaml (replace test.yaml with the name of your configuration file).
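For reference, here is a hedged sketch of what the edited warp configuration might look like. It shows only the fields called out in Step 3; the key values, worker addresses, and VIP address are placeholders, and the remaining benchmark settings should be taken from the mixed.yml sample linked above.

```yaml
# test.yaml - partial sketch of a warp mixed-benchmark configuration.
# Only the fields edited in Step 3 are shown; all values are placeholders.
warp-client:                       # worker nodes running "warp client"
  - 10.0.0.11
  - 10.0.0.12
host: 10.0.0.100:443               # BIG-IP VIP listener address and port
access-key: REPLACE_WITH_ACCESS_KEY
secret-key: REPLACE_WITH_SECRET_KEY
```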
Summary of Test Results

Functional tests done in F5's lab, using the method described above, show how the F5 + MinIO solution works and behaves. These results highlight important considerations that apply to both AI/ML pipelines and data repatriation workflows, enabling organizations to make informed design choices when deploying similar architectures. The testing goals were:

- Validate that BIG-IP security and traffic management policies function properly with MinIO AIStor in a simulated real-world configuration.
- Compare the impact of various load-balancing, security, and storage strategies to determine best practices.

Test Methodology

Four test configurations were executed to identify the effects of:

- Threads per worker: Testing both 1-thread and 20-thread configurations for workload generation.
- Multi-part GETs and PUTs: Comparing scenarios with and without multi-part requests for better parallelization.
- BIG-IP profiles: Evaluating Layer 7 (ASM-enabled security) versus Layer 4 (performance-optimized) profiles.

Test Results

| Test Configuration | Throughput | Benefits |
| --- | --- | --- |
| 20 threads, multi-part, Layer 7 | 28.1 Gbps | Security, high-performance reliability |
| 20 threads, multi-part, Layer 4 | 81.5 Gbps | High-performance reliability |
| 1 thread, no multi-part, Layer 7 | 3.7 Gbps | Security, reliability |
| 1 thread, no multi-part, Layer 4 | 7.8 Gbps | Reliability |

Note: The testing results provide insights into the solution and behavior of this setup, though they are not intended as production performance benchmarks.

Key Insights

Multi-part GETs and PUTs are critical for throughput optimization: Multi-part operations split objects into smaller parts for parallel processing. This allows the architecture to better utilize MinIO's distributed storage capabilities and worker thread concurrency. Without multi-part GETs/PUTs, single-threaded configurations experienced severely reduced throughput. Recommendation: Ensure multi-part operations are enabled in applications or tools interacting with MinIO when handling large objects or high-IOPS workloads.

Balance security with performance: Layer 7 security provided by ASM is essential for sensitive data and workloads that interact with external endpoints; however, it introduces processing overhead. Layer 4 performance profiles, while lacking application-layer security features, deliver significantly higher throughput. Recommendation: Choose BIG-IP profiles based on specific workload requirements. For AI/ML data ingest and model training pipelines, consider enabling Layer 4 optimization during bulk read/write phases. For workloads requiring external access or high security standards, deploy Layer 7 profiles. In some cases, consider horizontal scaling of the load balancing and object storage tiers to add throughput capacity.

Threads per worker impact throughput: Scaling up threads at the worker level significantly increased throughput in the lab environment, demonstrating the importance of concurrency for demanding workloads. Recommendation: Optimize S3 client configurations for higher connection counts where workloads permit, particularly when performing bulk data transfers or operationally intensive reads.

Example Use Cases

Use Case #1: AI/ML Pipeline

AI and machine learning pipelines rely heavily on storage systems that can ingest, process, and retrieve vast amounts of data quickly and securely. MinIO provides the scalability and performance needed for storage, while F5 BIG-IP ensures secure, optimized data delivery.

Pipeline Workflow

An enterprise running a typical AI/ML pipeline might include the following stages:

- Data ingestion: Large datasets (e.g., images, logs, training corpora) are collected from various sources and stored within the MinIO cluster using PUT operations.
- Model training: Data scientists iterate on AI models using the stored training datasets. These training processes generate frequent GET requests to retrieve slices of the dataset from the MinIO cluster.
- Model validation and inference: During validation, the pipeline accesses specific test data objects stored in the cluster. For deployed models, inference may require low-latency reads to make predictions in real time.
How F5 and MinIO Support the Workflow

This combined architecture enables the pipeline by:

- Ensuring consistent availability: BIG-IP distributes PUT and GET requests across the four nodes in the MinIO cluster using intelligent load balancing. With health monitoring, BIG-IP proactively reroutes traffic away from any node experiencing issues, preventing delays in training or inference.
- Optimizing performance: NVMe-backed storage in MinIO ensures fast read and write speeds. Together with BIG-IP's traffic management, the architecture delivers reliable throughput for iterative model training and inference.
- Securing end-to-end communication: ASM protects the MinIO storage APIs from malicious requests, including malformed API calls. At the same time, SSL/TLS termination secures communications between AI/ML applications and the MinIO backend.

Use Case #2: Enterprise Data Repatriation

Organizations increasingly seek to repatriate data from public clouds to on-premises environments. Repatriation is often driven by the need to reduce cloud storage costs, regain control over sensitive information, or improve performance by leveraging local infrastructure. This solution supports these workflows by pairing MinIO's high-performance object storage with BIG-IP's secure and scalable traffic management.

Repatriation Workflow

A typical enterprise data repatriation workflow may look like this:

- Bulk data migration: Data stored in public cloud object storage systems (e.g., AWS S3, Google Cloud Storage) is transferred to the MinIO cluster running on on-premises infrastructure using tools like MinIO Gateway or custom migration scripts.
- Policy enforcement: Once migrated, BIG-IP ensures that access to the MinIO cluster is secured, with ASM enforcing WAF policies to protect sensitive data during local storage operations.
- Ongoing storage optimization: The migrated data is integrated into workflows like backup and archival, analytics, or data access for internal applications. Local NVMe drives in the MinIO cluster reduce latency compared to cloud solutions.

How F5 and MinIO Support the Workflow

This architecture facilitates the repatriation process by:

- Secure migration: MinIO Gateway, combined with SSL/TLS termination on BIG-IP, allows data to be transferred securely from public cloud object storage services to the MinIO cluster. ASM protects endpoints from exploitation during bulk uploads.
- Cost efficiency and performance: On-premises MinIO storage eliminates expensive cloud storage costs while providing faster access to locally stored data. NVMe-backed nodes ensure that repatriated data can be rapidly retrieved for internal applications.
- Scalable and secure access: BIG-IP provides secure access control to the MinIO cluster, ensuring only authorized users or applications can use the repatriated data. Health monitoring prevents disruptions in workflows by proactively managing node unavailability.

The F5 and MinIO Advantage

Both use cases reflect the flexibility and power of combining F5 and MinIO:

- AI/ML pipeline: Supports data-heavy applications and iterative processes through secure, high-performance storage.
- Data repatriation: Empowers organizations to reduce costs while enabling seamless local storage integration.

These examples provide adaptable templates for leveraging F5 and MinIO to solve problems relevant to enterprises across various industries, including finance, healthcare, agriculture, and manufacturing.
Conclusion

The combination of F5 BIG-IP and MinIO provides a high-performance, secure, and scalable architecture for modern data-driven use cases such as AI/ML pipelines and enterprise data repatriation. Testing in the lab environment validates the functionality of this solution while highlighting opportunities for throughput optimization via configuration tuning. To bring these insights to your environment:

- Test multi-part configurations using tools like the MinIO warp benchmark or production applications.
- Match BIG-IP profiles (Layer 4 or Layer 7) to the specific priorities of your workloads.
- Use these findings as a baseline while performing further functional or performance testing in your enterprise.

The flexibility of this architecture allows organizations to push the boundaries of innovation while securing critical workloads at scale. Whether driving new AI/ML pipelines or reducing costs in repatriation workflows, the F5 + MinIO solution is well equipped to meet the demands of modern enterprises.

Further Content

For more information about F5's partnership with MinIO, consider the informative overview by buulam on DevCentral's YouTube channel. We also have the steps outlined in this video.