Update OWASP score task failed.
Hello, I just realized that my LTM is filling with restjavad logs. I'm getting the error: "Update OWASP score task failed. OWASP Compliance Score generation task next iteration scheduled to run in 60 minutes from now." It seems to be connected with the WAF module, which I'm not currently using. I also checked for network issues, and everything seems OK. Does anyone have any idea? Thanks in advance! (Full log below.)

LTM version: 16.0.0 0.0.12

[I][458273][05 Feb 2021 04:40:05 UTC][8100/tm/asm/owasp/task OWASPTaskScheduleWorker] Update OWASP score task failed.
[I][458274][05 Feb 2021 04:40:05 UTC][8100/tm/asm/owasp/task OWASPTaskScheduleWorker] OWASP Compliance Score generation task next iteration scheduled to run in 60 minutes from now.
[SEVERE][458275][05 Feb 2021 05:40:05 UTC][com.f5.rest.workers.asm.AsmConfigWorker] nanoTime:[15761351310149764] threadId:[21] Exception:[org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
        at org.apache.thrift.transport.TSocket.open(TSocket.java:185)
        at com.f5.asmconfig.client.AsmClient.<init>(AsmClient.java:42)
        at com.f5.asmconfig.client.AsmClient.<init>(AsmClient.java:50)
        at com.f5.rest.workers.asm.WrapAsmClient.getClient(AsmConfigWorker.java:306)
        at com.f5.rest.workers.asm.AsmConfigWorker.restCallWithRetry(AsmConfigWorker.java:170)
        at com.f5.rest.workers.asm.AsmConfigWorker.forwardCall(AsmConfigWorker.java:200)
        at com.f5.rest.workers.asm.AsmConfigWorker$1.run(AsmConfigWorker.java:156)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:473)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:580)
        at org.apache.thrift.transport.TSocket.open(TSocket.java:180)
        ... 13 more ]client:[7182705]

F5 Distributed Cloud Security Service Insertion With BIG-IP Advanced WAF
In this article we will show you how to quickly deploy and operate external services of your choice across multiple public clouds. For this article I will select the BIG-IP Advanced WAF (PAYG); future articles will cover additional solutions.

Co-Author: Anitha Mareedu, Sr. Security Engineer, F5

Introduction

F5's Distributed Cloud Security Service Insertion solution allows enterprises to deploy and operate external services of their choice across multiple public clouds. Let's start by looking at a real-world customer example. The enterprise has standardized on an external firewall in their private data center. Their network and security team are very familiar with using BIG-IP Advanced WAF. They want to deploy the same security firewall solution that they use in the private data center in the public cloud. The requirements are:

- a simple operational model to deploy these services
- a unified security policy
- consistency across different clouds
- simple deployments
- unified logging

Challenges

Customers have identified several challenges in moving to the cloud. First, teams that are very familiar with supporting services in their private data center usually do not have the expertise in designing, deploying and supporting in public clouds. If the same team is then tasked with deploying to multiple clouds, the gap widens: terminology, architecture, tools and constructs are all unique. Second, the operational models are different across different clouds. In AWS, you use either a VPC or a transit gateway (TGW); in Azure you use a VNET; and Google has VPCs.

Solution Description

Let's look at how F5's Distributed Cloud Security Service Insertion solution helps simplify and unify security solution deployments in multi-cloud and hybrid cloud environments:

Infrastructure-as-code: Implementation and policy configuration can be automated and run as infrastructure-as-code across clouds and regions, allowing policies to be repeatable in any major public or private cloud.
Easy setup and management: This simplified setup and management extends across AWS, Azure, and other clouds, as the F5 Distributed Cloud Platform supports AWS Transit Gateway, virtual network peering in Azure, and use of VPC attachments.

Define once and replicate models: No extra handcrafting is needed for consistent, straightforward operations and deployment.

Unified traffic steering rules: With the Distributed Cloud Platform, traffic is rerouted from networks through the security service using the same steering rules across different public and private clouds. Using F5 Distributed Cloud Console, IT pros get granular visibility and single-pane-of-glass management of traffic across clouds and networks.

Optional policy deployment routes: Policies can be deployed at either or both the network layer (using IP addresses) or the application layer (using APIs).

Diagram

Step by Step Process

This walkthrough assumes you already have an AWS VPC deployed. Have the VPC id handy.

1. Log into the F5 Distributed Cloud Dashboard. You are presented with the Dashboard, where you can choose which deployment option you want to work with. We will be working with Cloud and Edge Sites.
2. Select Cloud and Edge Sites > Manage > Site Management > AWS TGW Sites.
3. Click Add the AWS Transit Gateway (TGW) Site.
4. Under Metadata, give your TGW site a Name, Label and Description.
5. Click Configure under AWS Configuration. This brings up the Services VPC Configuration page.
6. Select your AWS region.
7. Under Services VPC, leave as New, let it generate a name or choose your own, and give the Primary CIDR block you want to assign to the VPC.
8. Leave Transit Gateway as New TGW.
9. Leave BGP as Automatic.
10. Under Site Node Parameters, Ingress/Egress, select "Add Item".
11. Move the slider in the upper right corner to Show Advanced Fields.
12. Fill in the required configuration, AWS AZ Name and CIDR Blocks for each of the subnets, and click "Add Item". You can let the system autogenerate these or assign the desired range.
13. This takes you back to the previous screen, where you need to either create or select your cloud credentials. These are Programmatic Access Credentials allowing API access.
14. Click Apply. This takes you to the previous screen, where we connect your current VPC to the Service VPC we are creating (have the VPC id available).
15. Click Configure under VPC attachments, then click Add Item.
16. Supply the VPC id and click Apply.
17. This takes you back once again to the AWS TGW Site screen. Finish by clicking Save and Exit. In the UI you will then click Apply. You are now deploying your new Security VPC via Terraform.

While that is deploying, we will move on to the External Services.

1. Select Manage > Site Management > External Services > Add External Service.
2. Give your Service a name, and add a label and description.
3. Click "Configure" under Select NFV Service Provider. For this article we will select the F5 BIG-IP Advanced WAF (PAYG); future articles will cover additional solutions.
4. Provide the Admin Password, Admin Username, and the public SSH Key that you will use to access your BIG-IP deployment.
5. Select the TGW site you created above.
6. Finally, click "Add Item" under Service Nodes.
7. Under Service Nodes, enter a Node name and the Availability Zones you wish to deploy into, then click "Add Item". This will take you back to the original screen.
8. Enable HTTPS Management of Nodes and supply a delegated domain that will issue a Certificate.
9. Under "Select Service Type", keep Inside VIP at Automatic and set the Outside VIP to "Advertise On Outside Network".
10. Finally, click "Save and Exit".

At the end, the External Security Service is deployed, and you are taken to all the External Services. Click the name of the External Service you deployed to expand the details. From this screen you are able to access several items; the two I want to point out are the TGW stats and the BIG-IP you deployed (via the Management Dashboard URL). Click under Site the TGW Service you deployed. Here you are able to see fine-grained stats under all the tabs:
System Metrics, Application Metrics, Site Status, Nodes, Interfaces, Alerts, Requests, Top Talkers, Connections, TGW Flow tables, DHCP Status, Objects, Tools

Going back, click the hyperlink to the BIG-IP if you wish to look at the configuration. F5 Distributed Cloud Service Insertion automatically configured your BIG-IP with the following information:

- Interfaces
- Self IPs
- Routes
- Management and credentials
- VLANs
- IPoIP tunnel SI <-> BIG-IP
- VIP

The following two items will need to be configured on your BIG-IP:

Configure AWAF policies: SecOps can access the familiar BIG-IP UI using the management link provided in F5 Cloud Console and set up and configure AWAF policies.

Define a Traffic Steering Policy: Use a network policy to define traffic steering at the network (L3/L4) layer, or a service policy to define traffic steering at the app (L7) level. The traffic steering control methods available are:

- Network level: IP address, port, etc.
- App level: API, method, etc.

At the end of this step, you can see traffic getting diverted to BIG-IP and getting inspected by BIG-IP.

Summary

As you can see, F5 Distributed Cloud Security Service Insertion dramatically reduces the operational complexity of deploying external services in public clouds, greatly enhances the security posture, and vastly improves productivity for all the operations teams such as NetOps, SecOps and DevOps.

Introduction to OWASP API Security Top 10 2023
Introduction to API

An Application Programming Interface (API) is a component that enables communication between two different systems by following certain rules. It also adds a layer of abstraction between the two systems, where the requester does not know how the other system has derived the result and responded. Over the past few years, developers have started relying more on APIs, as they help them meet the needs of today's rapid application deployment model. As APIs gain wider acceptance, it is highly critical to safeguard them by thoroughly testing their behavior and following best security practices. Learn API Security Best Practices.

Overview of OWASP API Security

The OWASP API Security project aims to help organizations by providing a guide with a list of the latest top 10 most critical API vulnerabilities and steps to mitigate them. As an update to the old OWASP API Security risk categories of 2019, OWASP API Security Top 10 2023 was recently released.

What's new in OWASP API Sec 2023? List of vulnerabilities:

API1:2023 Broken Object Level Authorization

Broken Object Level Authorization (BOLA) is a vulnerability that occurs when there is a failure in validating a user's permissions to perform a specific task on an object, which may eventually lead to leakage, modification or destruction of data. To prevent this vulnerability, a proper authorization mechanism should be followed, proper checks should be made to validate the user's action on a given record, and security tests should be performed before deploying any production-grade changes.

API2:2023 Broken Authentication

Broken Authentication is a critical vulnerability that occurs when an application's authentication endpoints fail to detect attackers impersonating someone else's identity and allow partial or full control over the account.
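One of the mitigations listed next, throttling repeated failed logins to blunt credential-stuffing, dictionary and brute-force attacks, can be sketched in a few lines. This is an illustrative toy with hypothetical thresholds and in-memory state, not any product's implementation:

```python
from collections import defaultdict

MAX_FAILURES = 5       # hypothetical threshold
LOCKOUT_SECONDS = 300  # hypothetical lockout window

_failures = defaultdict(list)  # username -> timestamps of recent failed logins

def record_failure(username, now):
    """Record one failed login attempt at time `now` (seconds)."""
    _failures[username].append(now)

def is_locked_out(username, now):
    """True if the account saw too many failures inside the lockout window."""
    recent = [t for t in _failures[username] if now - t < LOCKOUT_SECONDS]
    _failures[username] = recent  # drop expired entries
    return len(recent) >= MAX_FAILURES
```

An authentication endpoint would call `is_locked_out()` before even checking credentials and `record_failure()` on every failed attempt; a real deployment keeps this state in a shared store and combines it with MFA and CAPTCHA challenges, as the article notes.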
To prevent this vulnerability, observability and understanding of all possible authentication API endpoints is needed, re-authentication should be performed for any confidential changes, and multi-factor authentication, CAPTCHA challenges and effective security solutions should be applied to detect and mitigate credential stuffing, dictionary and brute-force attacks.

API3:2023 Broken Object Property Level Authorization

Broken Object Property Level Authorization (Excessive Data Exposure, Mass Assignment) is one of the new risk categories of OWASP API Security Top 10 2023. This vulnerability occurs when a user is allowed to access an object's property without validation of their access permissions. Excessive Data Exposure and Mass Assignment, which were initially part of OWASP APISec 2019, are now part of this new vulnerability. To prevent this vulnerability, the access privileges of users requesting a specific object's property should be scrutinized before exposure by the API endpoints. Use of generic methods and automatic binding of client inputs to internal objects or code variables should be avoided, and schema-based validation should be enforced.

API4:2023 Unrestricted Resource Consumption

Unrestricted Resource Consumption vulnerability occurs when the system's resources are unnecessarily consumed, which could eventually lead to degradation of services and performance latency issues. Although the name has changed, the vulnerability is the same as the 2019 Lack of Resources & Rate Limiting. To prevent this vulnerability, rate limiting, a maximum size for input payloads/parameters, and server-side validation of requests should be enforced.

API5:2023 Broken Function Level Authorization

Broken Function Level Authorization occurs when vulnerable API endpoints allow normal users to perform administrative actions, or a user from one group is allowed to access a function specific to users of another group.
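The group/role checks described next reduce to a small guard applied to each sensitive endpoint. A minimal sketch with hypothetical roles and actions (not any framework's API):

```python
ROLE_PERMISSIONS = {            # hypothetical role-to-action mapping
    "admin": {"list_users", "delete_user"},
    "user":  {"view_profile"},
}

def authorize(role, action):
    """Function-level check: is this role allowed to perform this action?"""
    return action in ROLE_PERMISSIONS.get(role, set())

def delete_user_endpoint(requesting_role, target_user):
    """Returns an HTTP-style status code for illustration."""
    if not authorize(requesting_role, "delete_user"):
        return 403  # forbidden: normal users cannot perform admin actions
    # ... perform the deletion here ...
    return 200
```

The key point is that the check is enforced server-side on every function, never inferred from which buttons the client happens to render.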
To prevent this vulnerability, access control policies and administrative authorization checks based on users' groups/roles should be implemented.

API6:2023 Unrestricted Access to Sensitive Business Flows

Unrestricted Access to Sensitive Business Flows is also a new addition to the list of API vulnerabilities. While writing API endpoints, it is extremely critical for developers to have a clear understanding of the business flows they expose, to avoid exposing any sensitive business flow and to limit its excessive usage, which, if not considered, might eventually lead to exploitation by attackers and cause serious harm to the business. This also includes securing and limiting access to B2B APIs that are consumed directly and often integrated with minimal protection mechanisms. By putting automation to work, attackers nowadays can bypass traditional protection mechanisms. An API's inefficiency in detecting automated bot attacks not only causes business loss but can also adversely impact services for real users. To overcome this vulnerability, enterprises need a platform to identify whether a request comes from a real user or an automated tool by analyzing and tracking patterns of usage. Device fingerprinting, integrating a CAPTCHA solution, and blocking Tor requests are a few methods which can help minimize the impact of such automated attacks. For more details on automated threats, you can visit OWASP Automated Threats to Web Applications.

Note: Although this vulnerability is new, it contains some references to API10:2019 Insufficient Logging & Monitoring.

API7:2023 Server-Side Request Forgery

After finding a place in the OWASP Top 10 web application vulnerabilities of 2021, SSRF has now been included in the OWASP API Security Top 10 2023 list as well, showing the severity of this vulnerability. Server-Side Request Forgery (SSRF) vulnerability occurs when an API fetches an internal server resource without validating the URL supplied by the user.
Attackers exploit this vulnerability by manipulating the URL, which in turn helps them retrieve sensitive data from internal servers. To overcome this vulnerability, input data validation should be implemented to ensure that client-supplied input obeys the expected format, allow lists should be maintained so that only trusted requests/calls are processed, and HTTP redirections should be disabled.

API8:2023 Security Misconfiguration

Security Misconfiguration is a vulnerability that may arise when security best practices are overlooked. Unwanted exposure of debug logs, unnecessarily enabled HTTP verbs, unapplied security patches, a missing repeatable security hardening process, improper implementation of CORS policy, etc. are a few examples of security misconfiguration. To prevent this vulnerability, systems and the entire API stack should be kept up to date without missing any security patches, and a continuous security hardening and configuration tracking process should be carried out. Make sure all API communications take place over a secure channel (TLS) and that all servers in the HTTP server chain process incoming requests. Cross-Origin Resource Sharing (CORS) policy should be set up properly, and unnecessary HTTP verbs should be disabled.

API9:2023 Improper Inventory Management

Improper Inventory Management vulnerability occurs when organizations don't have much clarity on their own APIs, or on the third-party APIs they use, and lack proper documentation. Unawareness with regard to the current API version, environment, access control policies, data shared with third parties, etc. can lead to serious business repercussions. A clear understanding and proper documentation are the keys to overcoming this vulnerability. All details related to API hosts, API environment, network access, API version, integrated services, redirections, rate limiting and CORS policy should be documented correctly and maintained up to date.
Documenting every minor detail is advisable, and access to these documents should be restricted to authorized users. Exposed API versions should be secured along with the production version, and a risk analysis is recommended whenever newer versions of APIs become available.

API10:2023 Unsafe Consumption of APIs

Unsafe Consumption of APIs is again a newly added vulnerability, covering a portion of the API8:2019 Injection vulnerability. It occurs when developers apply little or no sanitization to the data received from third-party APIs. To overcome this, we should make sure that API interactions take place over an encrypted channel, and API data evaluation and sanitization should be carried out before using the data further. Precautionary actions should be taken to avoid unnecessary redirections by using allow lists.

How F5 XC can help?

F5 Distributed Cloud (F5 XC) has a wide range of solutions for deploying, managing and securing application deployments in different environments. XC WAAP is an F5 SaaS offering. The 4 key components of WAAP are Web Application Firewall, API Security, Bot Defense, and DDoS Mitigation. All these solutions are powered on top of the XC platform. In addition to WAAP, F5 XC has other solutions to offer, such as Fraud and Abuse, AIP, CDN, MCN, DNS and so on.

API security in XC WAAP simplifies operations with automated discovery of API transactions using an AI/ML engine, along with performance insights. It also provides API protection features like rate limiting and PII safeguards, along with a comprehensive security monitoring GUI dashboard. API security also lets you import the inventory file in the form of a swagger file, which helps to know exactly what endpoints, methods and payloads are valid, and this tightens security against abuse.
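As an illustration of such an inventory, a minimal OpenAPI (swagger) fragment might declare the valid endpoints, methods and payload shapes like this (the paths and fields here are hypothetical, not from any real deployment):

```yaml
openapi: 3.0.3
info:
  title: Example API inventory   # hypothetical API
  version: "1.0"
paths:
  /api/v1/orders:
    get:
      responses:
        "200":
          description: List orders
    post:
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required: [item_id, quantity]
              properties:
                item_id:  { type: string }
                quantity: { type: integer, minimum: 1 }
      responses:
        "201":
          description: Order created
```

Anything outside this contract, such as an undeclared path, method, or payload field, can then be flagged or blocked.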
The F5 XC management console helps customers leverage the benefits of monitoring, managing and maintaining their application's traffic from a single place, irrespective of the platform on which it is hosted: multi-cloud, on-prem or edge.

Note: This is an initial article covering the overview of the proposed most critical API vulnerabilities from the OWASP API Security community for 2023. More articles covering detailed insights into each vulnerability and their mitigation steps using the F5 XC platform will follow in the coming days. Meanwhile, you can refer to the overview article for OWASP API Security Top 10 2019, which contains links to detailed articles covering the API vulnerabilities of 2019 and how F5 XC can help mitigate them.

Related OWASP API Security article series:

- Broken Authentication
- Excessive Data Exposure
- Mass Assignment
- Lack of Resources & Rate limiting
- Security Misconfiguration
- Improper Assets Management
- Unsafe consumption of APIs
- Server-Side Request Forgery
- Unrestricted Access to Sensitive Business Flows
- OWASP API Security Top 10 - 2019

Introduction to OWASP Top 10 API Security Risks - 2019 and F5 Distributed Cloud WAAP
Introduction to API:

An application programming interface (API) is a combination of protocols, functions, etc. which we can utilize to get details about resources, services and features. APIs are fast, lightweight and reliable, but they expose sensitive data and so have become targets for hackers.

Overview of OWASP API Security:

The simplicity of APIs has given hackers a plethora of ways to infiltrate them to steal personal and sensitive details. The increased demand for API security created the need for a project to keep track of the latest API vulnerabilities and security procedures, called OWASP API Security Top 10. As per that project, below are the top ten issues in API security as of 2019, with an overview of each.

API1:2019 Broken Object Level Authorization

APIs expose endpoints that manage objects using unique identifiers, giving hackers a chance to bypass access controls. To prevent these attacks, authorization checks like credentials and API tokens should always be kept in place in the code whenever there is a request using user input.

API2:2019 Broken User Authentication

Authentication mechanisms are sometimes implemented with weak security, allowing attackers to compromise authentication tokens and take over other users' identities. For more information about this vulnerability, a demonstration scenario and prevention steps using F5 XC, refer to the article.

API3:2019 Excessive Data Exposure

In many recent attacks it was observed that developers expose unnecessary and sensitive object properties, providing illegitimate users a way to exploit them. For more information about this vulnerability, a demonstration scenario and prevention steps using F5 XC, refer to the article.

API4:2019 Lack of Resources & Rate Limiting

APIs often have no restrictions on the size or number of resources that can be requested by the end user. The above-mentioned scenarios can lead to poor API server performance, Denial of Service (DoS) and brute-force attacks.
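Rate limiting, the primary mitigation for this category, is commonly implemented as a token bucket. Here is a minimal per-client sketch of that approach (illustrative only, not how F5 implements it):

```python
import time

class TokenBucket:
    """Allow up to `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last call, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should respond with HTTP 429 Too Many Requests
```

A gateway keeps one bucket per client key (API token or source IP) and rejects requests with HTTP 429 when `allow()` returns False, which also blunts the DoS and brute-force scenarios described above.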
For more information about this vulnerability, a demonstration scenario and prevention steps using F5 XC, refer to the article.

API5:2019 Broken Function Level Authorization

Most applications are composed of different groups, users and roles. If configurations like access control are not applied, authorization flaws will allow one user to access the resources of other users.

API6:2019 Mass Assignment

Code sanity checks should always be performed on response data; binding client data into code variables without filtering gives hackers a chance to guess an object's properties by exploring the API endpoints, documentation, etc. For more information about this vulnerability, a demonstration scenario and prevention steps using F5 XC, refer to the article.

API7:2019 Security Misconfiguration

This attack is mostly caused by misconfigured HTTP headers, unnecessary HTTP methods, permissive Cross-Origin Resource Sharing (CORS), and verbose error messages in logs containing sensitive information like usernames, PINs, IP addresses, etc. For more information about this vulnerability, a demonstration scenario and prevention steps using F5 XC, refer to the article.

API8:2019 Injection

Injection flaws such as OS command injection, SQL injection, etc. occur when there are no restrictions on the user-requested schema as part of a filter query. A malicious request can sometimes bypass these validations to execute unintended commands, providing attackers access to sensitive information. For more information about this vulnerability, a demonstration scenario and prevention steps using F5 XC, refer to the article.

API9:2019 Improper Assets Management

A modern web application typically hosts thousands of requests. It is critical to update the documentation/swagger with the latest changes and include information about newly implemented APIs. If they are not regularly updated, hackers can explore and find a deprecated API which may sometimes expose debug endpoints.
For more information about this vulnerability, a demonstration scenario and prevention steps using F5 XC, refer to the article.

API10:2019 Insufficient Logging & Monitoring

Any issues in logging and monitoring services will give attackers more ways to attack systems without being recognized. It's always advised to configure the best monitoring solutions to keep track of all logs and to configure email alerts. It is also good practice to keep logging details in a different location, to prevent malicious users from erasing their log trails. For more information, refer to the article.

Overview of F5 Distributed Cloud WAAP:

Web Application and API Protection (WAAP) is a SaaS offering provided by F5 Distributed Cloud Services to protect applications and published APIs using a Web Application Firewall (WAF), bot protection, API security, and DDoS mitigation. Once a WAAP policy is applied on the load balancer, the service engines protect web applications and API endpoints with the latest automatic detection of WAF, bot and DoS attack signatures. One of the key sections of Distributed Cloud WAAP is API security, which focuses primarily on securing APIs using different configurations like OpenAPI ingestion, automatic API discovery, service policies, rate limiting, allowed/denied URLs, etc.

The diagram below shows how Distributed Cloud WAAP protects APIs. Whenever a request originates from end users, Distributed Cloud WAAP analyzes the request metadata, such as the URL, filter parameters, headers, etc., to determine whether it's a legitimate request. Only once the request is screened, validated and approved is it forwarded to the back-end servers, which then return the requested details to the end user. If for any reason Distributed Cloud WAAP finds the request has discrepancies or is not valid, the request is blocked, and a security event is generated in the dashboard.
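The screen-forward-block flow described above can be sketched as a simple decision function. The rules below are toy stand-ins for WAF signatures (the real service engines use managed signature sets):

```python
import re

# Hypothetical deny rules standing in for WAF attack signatures.
BLOCKED_PATTERNS = [
    re.compile(r"\.\./"),              # path traversal
    re.compile(r"(?i)union\s+select"), # naive SQL injection marker
]
ALLOWED_METHODS = {"GET", "POST"}

security_events = []  # stand-in for the security event dashboard

def screen_request(method, url, headers):
    """Return 'forward' to send the request to the back-end servers,
    or 'block' after logging a security event."""
    if method not in ALLOWED_METHODS:
        security_events.append(("disallowed method", method, url))
        return "block"
    if any(p.search(url) for p in BLOCKED_PATTERNS):
        security_events.append(("signature match", method, url))
        return "block"
    return "forward"
```

Legitimate traffic flows through untouched, while each blocked request leaves an event behind for later analysis, mirroring the dashboard behavior described above.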
Users or administrators can analyze the captured request details and modify the existing Distributed Cloud WAAP configuration if needed to reach their business goals.

Articles on OWASP API Security:

- Broken User Authentication
- Excessive Data Exposure
- Lack of Resources & Rate Limiting
- Mass Assignment
- Security Misconfiguration
- Injection
- Improper Assets Management
- Insufficient Logging & Monitoring

Note: Articles on the remaining OWASP API Security Top 10 2019 vulnerabilities are in the pipeline and will be published shortly; stay tuned for updates. A new edition of the OWASP API Security Top 10 risks, for 2023, has been released; check this link for more details.

Related Links:

- F5 Distributed Cloud WAAP
- F5 Distributed Cloud Services

XC cloud - fleet
Hi, I am trying to learn XC, going through videos and other material, and I noticed there is something called Fleets being used to connect multiple sites together. I understand it to be a bucket where I put my CEs (customer edges). But why is it marked as Legacy, and what is its successor? The XC documentation would deserve a refresh; I understand XC has been evolving, but it is quite obvious that those learning videos should be updated as well...

Secure AI RAG using F5 Distributed Cloud in Red Hat OpenShift AI and NetApp ONTAP Environment
Introduction

Retrieval Augmented Generation (RAG) is a powerful technique that allows Large Language Models (LLMs) to access information beyond their training data. The "R" in RAG refers to the data retrieval process, where the system retrieves relevant information from an external knowledge base based on the input query. Next, the "A" in RAG represents the augmentation of context enrichment, as the system combines the retrieved relevant information and the input query to create a more comprehensive prompt for the LLM. Lastly, the "G" in RAG stands for response generation, where the LLM generates a more contextually accurate response based on the augmented prompt.

RAG is becoming increasingly popular in enterprise AI applications due to its ability to provide more accurate and contextually relevant responses to a wide range of queries. However, deploying RAG can introduce complexity, because its components live in different environments. For instance, the datastore or corpus, which is a collection of data, is typically kept on-premises for enhanced control over data access and management, due to data security, governance, and regulatory compliance requirements within the enterprise. Meanwhile, inference services are often deployed in the cloud for their scalability and cost-effectiveness.

In this article, we will discuss how F5 Distributed Cloud can simplify this complexity and securely connect all RAG components seamlessly for enterprise RAG-enabled AI application deployments. Specifically, we will focus on Network Connect, App Connect, and Web App & API Protection, and we will demonstrate how these F5 Distributed Cloud features can be leveraged to secure RAG in collaboration with Red Hat OpenShift AI and NetApp ONTAP.

Example Topology

F5 Distributed Cloud Network Connect

F5 Distributed Cloud Network Connect enables seamless and secure network connectivity across hybrid and multicloud environments.
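As an aside, the retrieve-augment-generate loop described in the introduction can be reduced to a few lines of toy code. Word-overlap retrieval stands in for a real embedding model and vector database, and `generate` is a hypothetical stand-in for a call to the inference endpoint:

```python
def retrieve(query, corpus, k=1):
    """'R': rank documents by naive word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def augment(query, docs):
    """'A': combine retrieved context and the query into one prompt."""
    context = "\n".join(docs)
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

def generate(prompt):
    """'G': hypothetical stand-in for a real LLM inference call."""
    return f"[LLM response to a {len(prompt)}-char prompt]"

corpus = ["MTV is the Migration Toolkit for Virtualization.", "Unrelated note."]
prompt = augment("What is MTV?", retrieve("What is MTV?", corpus))
print(generate(prompt))
```

In the deployment that follows, the corpus lives on NetApp ONTAP, retrieval and augmentation run on the on-premises OpenShift cluster, and generation happens at the remote inference endpoint, which is exactly why the components need secure connectivity between sites.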
By deploying an F5 Distributed Cloud Customer Edge (CE) at each site, we can easily establish encrypted site-to-site connectivity across on-premises, multi-cloud, and edge environments. Jensen Huang, CEO of NVIDIA, has said that "Nearly half of the files in the world are stored on-prem on NetApp." In our example, enterprise data stores are deployed on NetApp ONTAP in a data center in Seattle managed by organization B (Segment-B: s-gorman-production-segment), while RAG services, including the embedding Large Language Model (LLM) and vector database, are deployed on-premises on a Red Hat OpenShift cluster in a data center in California managed by organization A (Segment-A: jy-ocp). By leveraging F5 Distributed Cloud Network Connect, we can quickly and easily establish a secure connection for seamless and efficient data transfer from the enterprise data stores to the RAG services between these two segments only.

F5 Distributed Cloud CE can be deployed as a virtual machine (VM) or as a pod on a Red Hat OpenShift cluster. In California, we deploy the CE as a VM using Red Hat OpenShift Virtualization — click here to find out more on Deploying F5 Distributed Cloud Customer Edge in Red Hat OpenShift Virtualization. Segment-A: jy-ocp is on the CE in California, and Segment-B: s-gorman-production-segment is on the CE in Seattle. We then simply and securely connect Segment-A: jy-ocp and Segment-B: s-gorman-production-segment only, using the Segment Connector.

NetApp ONTAP in Seattle has a LUN named "tbd-RAG", which serves as the enterprise data store in our demo setup and contains a collection of data. After these two data centers are connected using F5 XC Network Connect, a secure encrypted end-to-end connection is established between them.
In our example, "test-ai-tbd" is in the data center in California, where it hosts the RAG services, including the embedding Large Language Model (LLM) and vector database, and it can now successfully connect to the enterprise data stores on NetApp ONTAP in the data center in Seattle.

F5 Distributed Cloud App Connect

F5 Distributed Cloud App Connect securely connects and delivers distributed applications and services across hybrid and multicloud environments. By utilizing F5 Distributed Cloud App Connect, we can direct the inference traffic through F5 Distributed Cloud's security layers to safeguard our inference endpoints.

Red Hat OpenShift Service on AWS (ROSA) is a fully managed service that allows users to develop, run, and scale applications in a native AWS environment. We can host our inference service on ROSA to leverage the scalability, cost-effectiveness, and numerous other benefits of AWS's managed infrastructure services. For instance, we can host our inference service on ROSA by deploying Ollama with multiple AI/ML models. Or, we can enable Model Serving on Red Hat OpenShift AI (RHOAI). Red Hat OpenShift AI (RHOAI) is a flexible and scalable AI/ML platform that builds on the capabilities of Red Hat OpenShift and facilitates collaboration among data scientists, engineers, and app developers. The platform allows them to serve, build, train, deploy, test, and monitor AI/ML models and applications either on-premises or in the cloud, fostering efficient innovation within organizations. In our example, we use Red Hat OpenShift AI (RHOAI) Model Serving on ROSA for our inference service.

Once the inference service is deployed on ROSA, we can utilize F5 Distributed Cloud to secure our inference endpoint by steering the inference traffic through F5 Distributed Cloud's security layers, which offer an extensive suite of features designed specifically for the security of modern AI/ML inference endpoints.
This setup allows us to scrutinize requests, implement policies for detected threats, and protect sensitive datasets before they reach the inference service hosted within ROSA. In our example, we set up an F5 Distributed Cloud HTTP Load Balancer (rhoai-llm-serving.f5-demo.com) and advertise it to the CE in the data center in California only. We can now reach our Red Hat OpenShift AI (RHOAI) inference endpoint through F5 Distributed Cloud.

F5 Distributed Cloud Web App & API Protection

F5 Distributed Cloud Web App & API Protection provides a comprehensive set of security features, with uniform observability and policy enforcement, to protect apps and APIs across hybrid and multicloud environments. We utilize F5 Distributed Cloud App Connect to steer the inference traffic through F5 Distributed Cloud to secure our inference endpoint. In our example, we protect our Red Hat OpenShift AI (RHOAI) inference endpoint by rate-limiting access, ensuring no single client can exhaust the inference service. A "Too Many Requests" response is received when a single client repeatedly requests access to the inference service at a rate higher than the configured threshold.

This is just one of the many security features available to protect our inference service. Click here to find out more on Securing Model Serving in Red Hat OpenShift AI (on ROSA) with F5 Distributed Cloud API Security.

Demonstration

In a real-world scenario, the front-end application could be hosted in the cloud, hosted at the edge, or served through F5 Distributed Cloud, offering flexible alternatives for efficient application delivery based on user preferences and specific needs. To illustrate how all the discussed components work seamlessly together, we simplify our example by deploying Open WebUI as the front-end application on the Red Hat OpenShift cluster in the data center in California, which includes the RAG services.
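The RAG services exercised in this demonstration boil down to three steps: embed the user's question with the same model used for the documents, retrieve the nearest chunks from the vector DB, and prepend them to the prompt. Here is a toy sketch of the retrieval step with hand-made 3-dimensional vectors; in the real setup the embeddings come from the all-MiniLM-L6-v2 model used in this demo and are 384-dimensional.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def retrieve(query_vec, store, k=1):
    """Return the k chunk texts whose embeddings are closest to the query."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy "vector DB": (chunk text, embedding) pairs with made-up vectors.
store = [
    ("MTV is the Migration Toolkit for Virtualization.", [0.9, 0.1, 0.0]),
    ("Quarterly revenue grew 4%.", [0.0, 0.2, 0.9]),
]
context = retrieve([0.8, 0.2, 0.1], store)
```

The retrieved chunk is what gets stuffed into the RAG template ahead of the question, which is why the model can suddenly answer questions about MTV.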
While a DPU or GPU could be used for improved performance, our setup uses a CPU for inferencing tasks. We connect our app to our enterprise data stores deployed on NetApp ONTAP in the data center in Seattle using F5 Distributed Cloud Network Connect, where we have a copy of "Chapter 1. About the Migration Toolkit for Virtualization" from Red Hat. These documents are processed and saved to the vector DB. Our embedding Large Language Model (LLM) is Sentence-Transformers/all-MiniLM-L6-v2, and here is our RAG template.

Instead of connecting to the inference endpoint on Red Hat OpenShift AI (RHOAI) on ROSA directly, we connect to the F5 Distributed Cloud HTTP Load Balancer (rhoai-llm-serving.f5-demo.com) from F5 Distributed Cloud App Connect. Previously, we asked, "What is MTV?" and never received a response related to the Red Hat Migration Toolkit for Virtualization. Now, let's try asking the same question again with RAG services enabled: we finally receive the response we had anticipated.

Next, we use F5 Distributed Cloud Web App & API Protection to safeguard our Red Hat OpenShift AI (RHOAI) inference endpoint on ROSA by rate-limiting access, thus preventing a single client from exhausting the inference service. As expected, we receive "Too Many Requests" in the response on our app upon requesting the inference service at a rate greater than the set threshold.

With F5 Distributed Cloud's real-time observability and security analytics in the F5 Distributed Cloud Console, we can proactively monitor for potential threats. For example, if necessary, we can block a client from accessing the inference service by adding it to the Blocked Clients List. As expected, this specific client is now unable to access the inference service.

Summary

Deploying and securing RAG for enterprise RAG-enabled AI applications in a multi-vendor, hybrid, and multi-cloud environment can present complex challenges.
In collaboration with Red Hat OpenShift AI (RHOAI) and NetApp ONTAP, F5 Distributed Cloud provides an effortless solution that secures RAG components seamlessly for enterprise RAG-enabled AI applications.

Accelerate Your Initiatives: Secure & Scale Hybrid Cloud Apps on F5 BIG-IP & Distributed Cloud DNS
It's rare now to find an application that runs exclusively in one homogeneous environment. Users are now global, and enterprises must support applications that are always on and available. These applications must also scale to meet demand while continuing to run efficiently, continuously delivering a positive user experience at minimal cost.

Introduction

In F5's 2024 State of Application Strategy Report, hybrid and multicloud deployments are pervasive. With the need for flexibility and resilience, most businesses will deploy applications that span multiple clouds and use complex hybrid environments. In the following solution, we walk through how an organization can expand and scale an application that has matured and now needs to be highly available to internal users while also being accessible to external partners and customers at scale. Enterprises using different form factors such as F5 BIG-IP TMOS and F5 Distributed Cloud can quickly right-size and scale legacy and modern applications that were originally only available in an on-prem datacenter.

Secure & Scale Applications

Let's consider the following example. Bookinfo is an enterprise application running in an on-prem datacenter that only internal employees use. This application provides product information and details that the business' users access from an on-site call center in another building on the campus. To secure the application and make it highly available, the enterprise has deployed an F5 BIG-IP TMOS in front of each endpoint. An endpoint is the combination of an IP, port, and service URL. In this scenario, our app has endpoints for the frontend product page and backend resources that only the product page pulls from. Internal on-prem users access the app with internal DNS on BIG-IP TMOS. GSLB on the device sends another class of internal users, who aren't on campus and access by VPN, to the public cloud frontend in AWS.
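The endpoint definition and the GSLB steering just described can be modeled in a few lines. The addresses, URLs, and client classes below are made up for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Endpoint:
    """An endpoint, per the definition above: IP, port, and service URL."""
    ip: str
    port: int
    url: str

# Illustrative frontends for the Bookinfo app.
ONPREM_FRONTEND = Endpoint("10.1.10.50", 443, "https://on-prem.example.internal/productpage")
AWS_FRONTEND = Endpoint("198.51.100.7", 443, "https://aws.example.com/productpage")

def gslb_pick(client_class: str) -> Endpoint:
    """Mimic the GSLB decision: on-campus users stay on-prem, VPN users go to AWS."""
    return ONPREM_FRONTEND if client_class == "on-campus" else AWS_FRONTEND
```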
The frontend that runs in AWS can scale with demand, allowing it to expand as needed to serve an influx of external users. Both internal users who are off campus and external users will now always connect to the frontend in AWS through the F5 Global Network and Regional Edges with Distributed Cloud DNS and App Connect.

With the frontend for the app enabled in AWS, it now needs to pull data from backend services that still run on-prem. Expanding the frontend requires additional connectivity, and to do that we first deploy an F5 Distributed Cloud Customer Edge (CE) to the on-prem datacenter. The CE connects to the F5 Global Network and extends Distributed Cloud services, such as DNS and Service Discovery, WAF, API Security, DDoS, and bot protection, to apps running on BIG-IP. These protections not only secure the app but also help reduce unnecessary traffic to the on-prem datacenter.

With Distributed Cloud connecting the public cloud and the on-prem datacenter, Service Discovery is configured on the CE on-prem. This makes a catalog of apps (virtual servers) on the BIG-IP available to Distributed Cloud App Connect. Using App Connect with managed DNS, Distributed Cloud automatically creates the fully qualified domain name (FQDN) for external users to access the app publicly, and it uses Service Discovery to make the backend services running on the BIG-IP available to the frontend in AWS.

Here are the virtual servers running on BIG-IP. Two of the virtual servers, "details" and "reviews," need to be made available to the frontend in AWS while continuing to work for the frontend that's on-prem. To make the virtual servers on BIG-IP available as upstream servers in App Connect, all that's needed is to click "Add HTTP Load Balancer" directly from the Discovered Services menu. To make the details and reviews services that are on-prem available to the frontend product page in AWS, we advertise each of their virtual servers on BIG-IP to only the CE running in AWS.
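Service Discovery plus selective advertisement, as just described, amounts to a mapping from each virtual server to the sites allowed to see it. A tiny model of that policy follows; the service names come from this example, while the site names are made up.

```python
# Which sites each discovered BIG-IP virtual server is advertised to.
ADVERTISEMENTS = {
    "details": ["ce-aws"],   # backend, visible only to the CE in AWS
    "reviews": ["ce-aws"],   # backend, visible only to the CE in AWS
    "frontend": ["public"],  # product page, reachable externally
}

def visible_to(site: str):
    """List the services a given site can discover and reach."""
    return sorted(svc for svc, sites in ADVERTISEMENTS.items() if site in sites)
```

Advertising "details" and "reviews" only to the CE in AWS keeps the backends off the public Internet while still letting the AWS frontend consume them.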
The menu below makes this possible with only a few clicks, as service discovery eliminates the need to find the virtual IP and port for each virtual server. Because the CE in AWS runs within Kubernetes, the name of the new service being advertised is recognized by the frontend product page and is automatically handled by the CE.

This creates a split-DNS situation where an internal client can resolve and access both the internal on-prem and external AWS versions of the app. The subdomain "external.f5-cloud-demo.com" is now resolved by Distributed Cloud DNS, and "on-prem.f5-cloud-demo.com" is resolved by the BIG-IP. When combined with GSLB, internal users who aren't on campus and use a VPN will be redirected to the external version of the app.

Demo

The following video explains this solution in greater detail, showing how to configure connectivity to each service the app uses, as well as how the app looks to internal and external users. (Note: it looks and works identically! Just the way it should be, and with minimal time needed to configure it.)

Key Takeaways

BIG-IP TMOS has long delivered best-in-class service with high availability and scale to enterprise and complex applications. When integrated with Distributed Cloud, freely expand and migrate application services regardless of the deployment model (on-prem, cloud, and edge). This combination leverages cloud environments for extreme scale and global availability while freeing up resources on-prem that would otherwise be needed to scrub and sanitize traffic.

Conclusion

Using the BIG-IP platform with Distributed Cloud services addresses key challenges that enterprises face today: whether it's making internal apps available globally to workforces in multiple regions or scaling services without purchasing more fixed-cost on-prem resources. F5 has the products to unlock your enterprise's growth potential while keeping resources nimble.
Check out the select resources below to explore more about the products and services featured in this solution.

Additional Resources

Solution Overview: Distributed Cloud DNS
Solution Overview: One DNS – Four Expressions
Interactive Demo: Distributed Cloud DNS at F5
DevCentral: The Power of &: F5 Hybrid DNS solution

Use F5 Distributed Cloud to control Primary and Secondary DNS
Overview

Domain Name Service (DNS): it's how humans and machines discover where to connect. DNS on the Internet is the universal directory of addresses to names. If you need to get support for the product Acme, you go to support.acme.com. Looking for the latest headlines in news, try www.aonn.com or www.npr.org. DNS is the underlying feature that nearly every service on the Internet depends on. Having a robust and reliable DNS provider is critical to keeping your organization online and working, especially during a DDoS attack.

"Nature is a mutable cloud, which is always and never the same." - Ralph Waldo Emerson

We might not wax that philosophically around here, but our heads are in the cloud nonetheless! Join the F5 Distributed Cloud user group today and learn more with your peers and other F5 experts.

F5 Distributed Cloud DNS (F5 XC DNS) can function as either primary or secondary nameservers, and it natively includes DDoS protection. Using F5 XC DNS, it's possible to provision and configure primary or secondary DNS securely in minutes. Additionally, the service uses a global anycast network and is built to scale automatically to respond to large query volumes. Dynamic security is included and adds automatic failover, DDoS protection, TSIG authentication support, and, when used as a secondary DNS, DNSSEC support.

F5 Distributed Cloud allows you to manage all of your sites as a single "logical cloud," providing:

- A portable platform that spans multiple sites/clouds
- A private backbone that connects all sites
- Connectivity to sites through its nodes (F5 Distributed Cloud Mesh and F5 Distributed Cloud App Stack)
- Node flexibility, allowing nodes to be virtual machines, to live on hardware within data centers and sites, or to run in cloud instances (e.g. EC2)
- Nodes that provide vK8s (virtual K8s), network, and security services
- Services managed through F5 Distributed Cloud's SaaS-based console

Scenario 1 – F5 Distributed Cloud DNS: Primary Nameserver

Consider the following: you're looking to improve the response time of your app with a geo-distributed solution, including DNS and app distribution. With F5 XC DNS configured as the primary nameserver, you'll automatically get DNS DDoS protection and will see an improvement in the response time to resolve DNS just by using anycast with the F5 global network's regional points of presence.

To configure F5 XC DNS to be the primary nameserver for your domain, access the F5 XC Console, go to DNS Management, and then Add Zone. Alternately, if you're migrating from another DNS server or DNS service to F5 XC DNS, you can import the zone directly from your DNS server. Scenario 1.2 below illustrates how to import and migrate your existing DNS zones to F5 XC DNS. Here, you'll write in the domain name (your DNS zone), and then View Configuration for the Primary DNS.

On the next screen, you may change any of the default SOA parameters for the zone, and add any type of resource record (RR) or record sets which the DNS server will use to respond to queries. For example, you may want to return more than one A record (IP address) for the frontend of your app when it has multiple points of presence. To do this, enter as many IP addresses of record type A as needed to send traffic to all the points of ingress to your app. Additional Resource Record Sets allow the DNS server to return more than a single type of RR. For example, the following configuration returns two A (IPv4 address) records and one TXT record to a query of type ANY for "al.demo.internal". Optionally, if your root DNS zone has been configured for DNSSEC, then enabling it for the zone is just a matter of toggling the default setting in the F5 XC Console.
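The record-set behavior in Scenario 1 (several A records plus a TXT record returned for an ANY query on "al.demo.internal") can be modeled as a small lookup; the addresses and TXT value below are placeholders.

```python
# Toy zone data: (name, type) -> list of record values. Values are placeholders.
RRSETS = {
    ("al.demo.internal", "A"): ["203.0.113.10", "203.0.113.20"],
    ("al.demo.internal", "TXT"): ["v=demo"],
}

def answer(name: str, rtype: str):
    """Answer a query; ANY gathers every record set held for the name."""
    if rtype == "ANY":
        return [(t, v) for (n, t), vals in RRSETS.items() if n == name for v in vals]
    return [(rtype, v) for v in RRSETS.get((name, rtype), [])]
```

Returning both A records lets resolvers spread clients across every point of ingress to the app.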
Scenario 1.2 - Import an Existing Primary Zone to Distributed Cloud using Zone Transfer (AXFR)

F5 XC DNS can use AXFR DNS zone transfer to import an existing DNS zone. Navigate to DNS Management > DNS Zone Management, then click Import DNS Zone. Enter the zone name and the externally accessible IP of the primary DNS server.

➡️ Note: You'll need to configure your DNS server and any firewall policies to allow zone transfers from F5. A current list of public IPs that F5 uses can be found in the following F5 tech doc.

Optionally, configure a transaction signature (TSIG) to secure the DNS zone transfer. When you save and exit, F5 XC DNS executes a secondary nameserver zone AXFR and then transitions itself to be the zone's primary DNS server. To finish the process, you'll need to change the NS records for the zone at your domain name registrar. In the registrar, change the name servers to the following F5 XC DNS servers:

ns1.f5clouddns.com
ns2.f5clouddns.com

Scenario 1.3 - Import Existing (BIND format) Primary Zones directly to Distributed Cloud

F5 XC DNS can directly import BIND-formatted DNS zone files in the Console, for example, db.2-0-192.in-addr.arpa and db.foo.com. Enterprises often use BIND as their on-prem DNS service, and importing these files to Distributed Cloud makes it easier to migrate existing DNS records. To import existing BIND db files, navigate to DNS Management > DNS Zone Management, click Import DNS Zone, then "BIND Import". Now click "Import from File" and upload a .zip with one or more BIND db zone files. The import wizard accepts all primary DNS zones and ignores other zones and files. After uploading a .zip file, the next screen reports any warnings and errors. At this point you can "Save and Exit" to import the new DNS zones, or cancel to make any changes.
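A BIND db file is, at its simplest, one resource record per line. The sketch below parses that simple case into JSON-ready dicts; it deliberately ignores $ORIGIN/$INCLUDE directives and multi-line records, which real zone files often contain, and the zone fragment shown is hypothetical.

```python
import json

def parse_bind_lines(lines):
    """Parse simple 'name ttl class type value' BIND records into dicts."""
    records = []
    for line in lines:
        line = line.split(";")[0].strip()  # drop comments and whitespace
        if not line:
            continue
        name, ttl, _cls, rtype, value = line.split(None, 4)
        records.append({"name": name, "ttl": int(ttl), "type": rtype, "value": value})
    return records

# Hypothetical zone fragment for illustration.
zone = [
    "www 300 IN A 198.51.100.7  ; web frontend",
    '@   300 IN TXT "hello"',
]
as_json = json.dumps(parse_bind_lines(zone), indent=2)
```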
For more complex zone configurations, including support for using $INCLUDE and $ORIGIN directives in BIND files, the following open source tool will convert BIND db files to JSON, which can then be copied directly into the F5 XC Console when configuring records for new and existing primary DNS zones: BIND to XC-DNS Converter

Scenario 2 - F5 Distributed Cloud DNS: Primary with Delegated Subdomains

An enhanced capability when using Distributed Cloud (F5 XC) as the primary DNS server for your domains or subdomains is to have services in F5 XC dynamically create their own DNS records, either directly in the primary domain or in the subdomains. Note that before July 2023, the delegated DNS feature in F5 XC required the exclusive use of subdomains to dynamically manage DNS records. As of July 2023, organizations can have both F5 XC-managed and user-managed DNS resource records in the same domain or subdomain. When "Allow HTTP Load Balancer Managed Records" is checked, DNS records automatically added by F5 XC appear in a new RR set group called x-ves-io-managed, which is read-only. In the following example, I've created an HTTP Load Balancer with the domain "www.example.f5-cloud-demo.com", and F5 XC automatically created the A resource record (RR) in the group x-ves-io-managed.

Scenario 3 – F5 Distributed Cloud DNS: Secondary Nameserver

In this scenario, say you already have a primary DNS server in your on-prem datacenter, but for security reasons you don't want it to be directly accessible to queries from the Internet. F5 XC DNS can be configured as a secondary DNS server and will both zone transfer (AXFR, IXFR) and receive (NOTIFY) updates from your primary DNS server as needed. To configure F5 XC DNS to be a secondary DNS server, go to Add Zone, then choose Secondary DNS Configuration. Next, View Configuration for it, and add your primary DNS server IPs.
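Zone transfers from the primary can additionally be authenticated with TSIG, which at its core is an HMAC computed over the message with a pre-shared key. The stdlib sketch below illustrates the idea only; real TSIG (RFC 8945) signs specific DNS wire-format fields rather than a raw byte payload, and the PSK here is a made-up demo value.

```python
import base64
import hashlib
import hmac

def sign(message: bytes, key_b64: str) -> str:
    """HMAC-SHA256 a message with a base64-encoded pre-shared key (PSK)."""
    key = base64.b64decode(key_b64)
    return base64.b64encode(hmac.new(key, message, hashlib.sha256).digest()).decode()

def verify(message: bytes, key_b64: str, mac: str) -> bool:
    """Constant-time check that the MAC matches the message and key."""
    return hmac.compare_digest(sign(message, key_b64), mac)

psk = base64.b64encode(b"demo-shared-secret").decode()  # illustrative PSK
mac = sign(b"zone transfer payload", psk)
```

Both ends hold the same PSK, so a tampered transfer or a transfer signed with the wrong key fails verification.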
To enhance the security of zone transfers and updates, F5 XC DNS supports TSIG-encrypted transfers from the primary DNS server. To support TSIG, ensure your primary DNS server supports it, and enable it by entering the pre-shared key (PSK) name and its value. The PSK itself can be blindfold-encrypted in the F5 XC Console to prevent other Console admins from being able to view it. If encryption is desired, simply plug in the remaining details for your TSIG PSK and Apply. Once you've saved your new secondary DNS configuration, F5 XC DNS will immediately transfer your zone details and begin resolving queries on the F5 XC Global Network with its pool of anycast-reachable DNS servers.

Conclusion

You've just seen how to configure F5 XC DNS both as a primary DNS and as a secondary DNS service. Ensure the reachability of your company with a robust, secure, and optimized DNS service by F5: a service that delivers the lowest resolution latency with its global anycast network of nameservers, and one that automatically includes DDoS protection, DNSSEC, and TSIG support for secondary DNS. Watch the following demo video to see how to configure F5 XC DNS for scenarios #1 and #3 above.

Additional Resources

For more information about using F5 Distributed Cloud DNS: https://www.f5.com/cloud/products/dns
For technical documentation: https://docs.cloud.f5.com/docs/how-to/app-networking/manage-dns-zones
DNS Demo Guide and step-by-step walkthrough: https://github.com/f5devcentral/f5xc-dns
BIND to XC-DNS Converter (open source tool): https://github.com/Mikej81/BINDtoXCDNS

Demo Guide & Video Series for F5 Distributed Cloud Network Connect (Multi-Cloud Networking)
Exploring the core networking use cases for F5 Distributed Cloud, the following demo guide and three-part video series show how to connect sample networks at Layer 3 (L3) and a sample application using Layer 7 (L7). If you've ever come across one of our Distributed Cloud multi-cloud solutions and thought about trying it out for yourself but lack some of the precursor tools or awareness to do it, then this demo guide and video series are perfect for you. They show every step to deploy a modern app in multiple cloud locations and make it available using the Distributed Cloud L7 HTTP LB, as well as the L3 transit global network. Before going through the steps yourself, watch this three-part video series illustrating the process in each location.

Video: Module 1 - Distributed Cloud L7 Networking to AWS
Video: Module 2 - Distributed Cloud L7 Networking to Azure
Video: Module 3 - Distributed Cloud L3 between AWS and Azure via Global Network

Ready to get hands-on? The following GitHub repository contains all the scripts and instructions performed in the videos above: https://github.com/f5devcentral/xcmcndemoguide

Conclusion

Deploying modern applications with services running in multiple locations and on different cloud providers is straightforward with Distributed Cloud platform services. HTTP LB connects apps directly at the L7 application layer, and L3 transit routing via the Global Network makes it possible for apps to connect directly between multiple cloud locations.

Resources

Product Page: F5 Distributed Cloud - Multi-Cloud Networking
GitHub repository: https://github.com/f5devcentral/xcmcndemoguide
Interactive Product Experiences:
App Connect (Hybrid and Multicloud Layer 7 load balancing)
Network Connect (Hybrid and Multicloud Layer 3 routing)
Get Started: https://www.f5.com/cloud/pricing

Use F5 Distributed Cloud to Connect Apps Running in Multiple Clusters and Sites
Introduction

Modern apps are comprised of many smaller components and can take advantage of today's agile computing landscape. One of the challenges IT admins and security operations face is securely controlling access to all the components of distributed apps while business development grows or changes hands with mergers and acquisitions, or as contracts change. F5 Distributed Cloud (F5 XC) makes it very easy to provide uniform access to distributed apps regardless of where the components live.

Solution Overview

Arcadia Finance is a distributed app with modules that run in multiple Kubernetes clusters and in multiple locations. To expedite development of a key part of the Arcadia Finance distributed app, the business has decided to outsource work on the Refer A Friend module. IT Ops must now relocate the Refer A Friend module to a separate location exclusive to the new contractor, where its team of developers has access to work on it. Because the app is modular, IT has shared a copy of the Refer A Friend container with the developer, and now that it is up and running in the new site, traffic to the module needs to transition away from the one that had been developed in house to the one now managed by the contractor.

Logical Topology

Distributed App Overview

The Refer A Friend endpoint is called by the Arcadia Finance frontend pod in a Kubernetes (K8s) cluster when a user of the service wants to invite a friend to join. The pod does this by making an HTTP request to the location "refer-a-friend.demo.internal/app3/". The endpoint "refer-a-friend.demo.internal" is registered as a discoverable service in the K8s cluster using an F5 XC App Connect HTTP Load Balancer with its VIP configured to be advertised internally to specific sites, including the K8s cluster. F5 XC uses the cluster's K8s management API to register service names and make them available anywhere within a global network belonging to a customer's tenant.
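The callout described above is a plain HTTP request from the frontend pod to the advertised service name. A sketch of how that request URL could be formed follows; the base path comes from this article, while the query parameter name is an assumption for illustration.

```python
import urllib.parse

# Advertised service location from the article.
REFER_A_FRIEND_BASE = "http://refer-a-friend.demo.internal/app3/"

def referral_url(friend_email: str) -> str:
    """Build the URL the frontend pod would call to refer a friend.

    The 'email' parameter name is hypothetical; the real module's
    request shape isn't shown in the article.
    """
    return REFER_A_FRIEND_BASE + "?" + urllib.parse.urlencode({"email": friend_email})
```

Because the name resolves to an App Connect VIP, the frontend never needs to know which site actually runs the module.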
Three sites are used by the company that owns Arcadia Finance to deliver the distributed app. The core of the app lives in a K8s cluster in Azure, and the administration and monitoring of the app is in the customer's legacy site in AWS. To maintain security, the new contractor only has access to GCP, where they'll continue developing the Refer A Friend module. An F5 XC global virtual network connects all three sites, and all three sites are in a site mesh group to streamline communication between the different app modules.

Steps to deploy

To reach the app externally, an App Connect HTTP Load Balancer policy is configured using an origin pool that connects to the K8s "frontend" service, and the origin pool uses a Kubernetes Site in F5 XC to access the frontend service. A second HTTP Load Balancer policy is configured with its origin pool a static IP that lives in Azure, accessed via a registered Azure VNET Site. When the Refer A Friend module is needed, a pod in the K8s cluster connects to the Refer A Friend internal VIP advertised by the HTTP Load Balancer policy. This connection is then tunneled by F5 XC to the endpoint where the module runs.

With development of the Refer A Friend module turned over to the contractor, we only need to change the HTTP Load Balancer policy to use an origin pool located in the contractor's Cloud GCP VPC Site. The origin policy for the GCP-located module is nearly identical to the one used in Azure. Now when a user of the Arcadia app goes to refer a friend, the callout the app makes is routed to the new location, where it is managed and run by the new contractor.

Demo

Watch the following video for information about this solution and a walkthrough using the steps above in the F5 Distributed Cloud Console.
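The cut-over performed in these steps amounts to swapping the origin pool behind an unchanged VIP, so clients never notice the move. A conceptual model, with illustrative pool names:

```python
# The load balancer keeps its advertised VIP while its origin pool changes.
lb = {"vip": "refer-a-friend.demo.internal", "origin_pool": "azure-vnet-origin"}

def switch_origin(lb: dict, new_pool: str) -> dict:
    """Point the existing HTTP Load Balancer policy at a new origin pool."""
    lb["origin_pool"] = new_pool  # clients keep resolving the same VIP
    return lb

switch_origin(lb, "gcp-vpc-origin")  # contractor's site now serves the module
```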
Conclusion

Using Distributed Cloud with modern-day distributed apps, it's almost too easy to route requests intended for a specific module to a new location, regardless of the provider, provider-specific requirements, or the IP space the new module runs in. This is the true power of using Distributed Cloud to glue together modern-day distributed apps.