Setting up BIG-IP with AWS CloudHSM
Recently I was working on a project with a requirement for using AWS CloudHSM. F5 has documented the process to install the AWS CloudHSM client in the implementation guide, but I found it light on details of what a config should look like and short on examples. So let's pick up where the guide leaves you, with the client software installed: what does a working configuration look like?
Migration on AWS

Hello. We'll move one of our customers' F5 clusters to another cluster due to a license type change. The former cluster, which will be replaced, has a BYOL license; the new cluster will use utility (aka PAYG) licensing. We have already deployed a new pair of devices and migrated the configuration from the old cluster to the new one using UCS files. Now we only need to reassign the EIPs and secondary IP addresses to the new cluster to complete the move, and that step is a job for another day. Both clusters coexist in the same networks and have the same amount of resources. The new cluster is currently shut down because, if it fails over, it manipulates the EIPs on the former cluster, which causes traffic disruption. Since the new cluster can manipulate the IP mappings of the old cluster through AWS, I should have created a new CFE declaration, along with its key elements (IAM, S3 bucket, and tags), for this new cluster. I guess I miscalculated this step. My first question: can somebody guide me on this? While restoring the config we used UCS files, so the CFE config came along with the original config; hence we lost the original CFE declarations that came with the initial CloudFormation deployment. But I have UCS files that were created right before the migration. As you probably know, CloudFormation dynamically creates these CFE declarations, tags, S3 buckets, and IAM definitions during deployment. My second question: does somebody know where F5 stores the CFE configuration? Since the CFE config can be applied with the UCS files, there must be some sort of configuration file that holds the CFE declaration.
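Not part of the original question, but a hedged pointer for anyone stuck at the same spot: the Cloud Failover Extension is an iControl LX extension, and its running declaration can normally be read back (and later re-posted) over the REST endpoint /mgmt/shared/cloud-failover/declare on the BIG-IP itself. The host and credentials below are placeholders.

```shell
# Hedged sketch: read back the CFE declaration a BIG-IP is currently
# running with. BIGIP_HOST and BIGIP_PASS are placeholder variables;
# -k skips certificate verification for a self-signed mgmt cert.
get_cfe_declaration() {
    curl -sku "admin:${BIGIP_PASS}" \
        "https://${BIGIP_HOST}/mgmt/shared/cloud-failover/declare"
}
# Usage: export BIGIP_HOST=192.0.2.10 BIGIP_PASS='secret'; get_cfe_declaration
```

Saving that JSON before a migration gives you a declaration you can adjust (IAM, S3 bucket, tags) and re-POST to the new cluster.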
Securing Model Serving in Red Hat OpenShift AI (on ROSA) with F5 Distributed Cloud API Security

Learn how Red Hat OpenShift AI on ROSA and F5 Distributed Cloud API Security work together to protect generative AI model inference endpoints. This integration ensures robust API discovery, schema enforcement, LLM-aware threat detection, bot mitigation, sensitive data redaction, and continuous observability, enabling secure, compliant, and high-performance AI-driven experiences at scale.
iRule using a data group to bypass header injection

Trying to do a basic iRule that looks at a data group and bypasses the header injection based on the URIs in the data group. I had been messing with the rule below but got multiple errors when adding the top lines to bypass the existing logic. The data group is named uribypass. After pairing up the braces and replacing "exit" (not a valid iRule command) with "return", the rule looks like this:

when HTTP_REQUEST {
    # Skip the header injection for any path that starts with an
    # entry in the "uribypass" data group
    if { [class match [HTTP::path] starts_with uribypass] } {
        return
    }
    if { !([HTTP::header exists "test-Proxied"]) } {
        HTTP::uri /test[HTTP::uri]
        # Inject custom header
        HTTP::header insert test-Proxied 1
    }
}
Steps to create custom curl monitor

Hi everyone, I tried to make a health monitor that checks through a proxy by following this KB: https://my.f5.com/manage/s/article/K31435017, but the check still fails when I curl towards the destination. Has anyone ever gotten this to work? Please advise and suggest.
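For reference, here is a minimal sketch of roughly what a curl-through-proxy external (EAV) monitor script looks like; the proxy address is a made-up placeholder and the 200-only success check is an assumption, so adjust both to your environment. BIG-IP passes the member address as $1 (often in ::ffff:a.b.c.d form) and the port as $2, and any output on stdout marks the member UP.

```shell
# Hedged sketch of a curl-through-proxy external monitor.
# 10.1.1.10:3128 is a placeholder for your proxy.
monitor_probe() {
    node=$(printf '%s' "$1" | sed 's/::ffff://')   # strip IPv6-mapped prefix
    port="$2"
    status=$(curl -s -m 5 -o /dev/null -w '%{http_code}' \
        -x "http://10.1.1.10:3128" "http://${node}:${port}/")
    # Print something only on success; silence means the member stays DOWN
    [ "$status" = "200" ] && echo "up"
}
# A real monitor script would end with: monitor_probe "$1" "$2"
```

Running the probe by hand from the BIG-IP shell with a real member IP and port is a quick way to see whether the proxy, not the monitor, is what is failing.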
Multi-port support for HTTP/TCP load balancers in F5 Distributed Cloud (XC)

Overview: In the ever-evolving landscape of the digital world driven by innovation, catering to new requirements is vital for modern application scalability, adaptability, and longevity. Multi-port support refers to the capability of a system to handle and manage multiple application ports simultaneously. This flexibility is particularly important in scenarios where a single device needs to serve diverse services. Multi-port support is essential for various reasons, including:

Parallel Processing: It allows the system to process multiple app streams concurrently, enhancing efficiency and reducing latency.
Diverse Services: Different applications or services often require dedicated ports to function. Multi-port support enables a system to accommodate a variety of services simultaneously.
Load Balancing: Distributing application traffic across multiple ports helps balance the load, preventing bottlenecks and optimizing resource utilization.
Security: Sometimes SecOps want testing ports opened, which allow access to applications for testing, scanning, monitoring, and addressing potential security vulnerabilities.
Flexibility: Systems with multi-port support are adaptable to modern micro-service-based architectures, supporting a diverse range of applications and services.
IP limitations: Since IPs are limited, customers don't want to use a different IP for each user; instead they want to reserve a single IP and distribute load across different ports.

Note: For today's demonstration, we have deployed multiple demo applications like JuiceShop, DVWA, NGINX, and F5 Air as micro-services on multiple systems/ports to showcase the capabilities of multi-port support; their deployment steps are out of scope for this article. Let's unravel three real-world use cases of multi-port support and how it can be implemented in F5 Distributed Cloud (F5 XC) in easy-to-follow steps.
Use case I – Multiple Ports

In this use case, let's assume the customer has already onboarded the backend application as an origin pool in XC. Next, the customer wants to access the same application on multiple ports, either for genuine access or for testing. To achieve this, follow the steps below:

1. Log in to the F5 XC console and navigate to the "Distributed Apps" --> "Manage Load balancer" section.
2. Create an HTTP load balancer with your backend application, the needed ports in CSV format, type as HTTP, a name, and a domain name. NOTE: Provide only unused ports or you will run into port conflict errors. Also configure DNS records as per your setup.
3. Once the load balancer is created successfully, validate that your application is accessible on the configured ports and LB domain name.

Use case II – Port Range

In this scenario, customers need to access an application on a range of ports, either for parallel processing or load balancing. For configuration, follow the steps below:

1. Log in to the F5 XC console and navigate to the "Distributed Apps" section.
2. Create an HTTPS load balancer with your backend application, the needed port range, and a domain name. NOTE: Provide only an unused port range to avoid port conflict errors.
3. Validate that your application is accessible on the configured ports.

Use case III – Origin Pool Dynamic Port

In this requirement, the backend application port should be dynamic, depending on the port the load balancer is accessed on. Let's say a customer has multiple services running on multiple ports and wants users to access these services through a single TCP load balancer. To build this solution, follow the steps below:

1. Log in to the F5 XC console and navigate to the "Distributed Apps" section.
2. Move to the "Origin Pool" section, onboard your basic backend application details, and select the "origin server port" option as the "loadbalancer port".
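The "validate that your application is accessible" steps in use cases I and II can be scripted. Here is a small hedged curl loop; the domain and port list in the usage line are placeholders for your own load balancer values:

```shell
# Probe a list of LB ports and report the HTTP status for each.
# -k tolerates test certificates; -m 5 bounds each probe.
check_ports() {
    domain="$1"; shift
    for port in "$@"; do
        code=$(curl -sk -m 5 -o /dev/null -w '%{http_code}' \
            "https://${domain}:${port}/")
        echo "${domain}:${port} -> HTTP ${code}"
    done
}
# Usage: check_ports app.example.com 8080 8081 8082
```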
We can also configure health checks against the LB ports instead of the endpoints for better visibility. We are halfway there! Move to the "TCP Load balancer" section and create a TCP load balancer with the required port ranges and your application origin pool. Finally, for the fun part: once the load balancer reaches the READY state, open a browser and make sure the different services are accessible on the configured domain name and ports. NOTE: For the above solution to work, multiple services should be running on the configured ports of the backend system, and this port range should be unused by other services on the XC platform.

We have just scratched the surface of the wide range of multi-port use cases. There is a lot of demand in the market for scenarios combining different functionalities of HTTP/HTTPS/TCP, with single or multiple services on the same system or on multiple backend systems; traffic can also be routed to the appropriate backends using port range filters in routes. As per customer requirements, the appropriate configurations can be made in F5 XC for seamless integration and to leverage the pervasive WAAP security ecosystem.

Conclusion: Winding up, this article pondered the market demand for multi-port range support in HTTP/TCP load balancers and then took you on a roller coaster ride through different use cases. Finally, we demonstrated how F5 XC can help shape and optimize your application's versatile multi-port requirements. Ever wondered what F5 XC is and how it acts as a "Guardian of Applications"? Check the links below: F5 Distributed Cloud Services, F5 Distributed Cloud WAAP
F5xC Migration

Hey Amigos, I need some advice. I am implementing F5 XC on our infra and migrating applications; however, I ran into a small problem and need guidance. There's an on-prem application sitting behind a Citrix LB with SSL offloaded directly onto the backend members, i.e. SSL passthrough is configured. We have to migrate this app behind F5 XC with the SSL certificate on F5 XC as well. I have the following concerns: Would this solution work if we take the SSL cert from the server itself and deploy it on F5 XC? Has anyone implemented this sort of solution before? If yes, can you share your observations? There's no test environment, so I can't really test this in non-prod. This has to be implemented directly in prod, hence the precautions :)
Let's Encrypt with Cloudflare DNS and F5 REST API

Hi all, this is a follow-up to the now very old Let's Encrypt on a Big-IP article. It has served me, and others, well, but it is kind of locked to a specific environment and doesn't scale well. I had been circling around it for some time but couldn't find the courage (aka time) to get started. However, due to some changes at my DNS provider (they were acquired and shut down), I finally took the plunge and moved my domains to a provider with an API, and that gave me the opportunity to build a more nimble solution. To keep things simple I chose Cloudflare, as the community proliferation is enormous and it is easy to find examples and tools. I do think, though, that choosing another provider with an open API wouldn't be such a big deal. After playing around with different tools I realized that I didn't need them, as it ended up being much easier to just use curl. So, if another provider's API bears even a somewhat close resemblance, converting the scripts to fit shouldn't be a big task. There might be finer and more advanced solutions out there, but my goal was a solution with as few dependencies as possible, and if I could get that down to only Bash and curl it would be perfect. And that is what I ended up with 😎 Just put 5 files in the same directory, adjust the config to your environment, and BAM, you're good to go!! 😻 And if you need to run it somewhere else, just copy the directory over and continue like nothing changed. That is what I call portability 😁 Find all the details here: Let's Encrypt with Cloudflare DNS and F5 REST API. Please drop me a line if you have any questions or feedback, or find any bugs.
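As a taste of the curl-only approach, here is a hedged sketch of the DNS-01 step against Cloudflare's v4 API: creating the _acme-challenge TXT record. CF_ZONE_ID and CF_TOKEN are placeholders for your zone ID and an API token with DNS edit permission; see the linked article for the full flow.

```shell
# Create the ACME DNS-01 TXT record via the Cloudflare v4 API.
# CF_ZONE_ID and CF_TOKEN are placeholder environment variables.
cf_add_txt() {
    name="$1"; value="$2"
    curl -s -X POST \
        "https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/dns_records" \
        -H "Authorization: Bearer ${CF_TOKEN}" \
        -H "Content-Type: application/json" \
        --data "{\"type\":\"TXT\",\"name\":\"${name}\",\"content\":\"${value}\",\"ttl\":120}"
}
# Usage: cf_add_txt "_acme-challenge.example.com" "<token-from-acme-server>"
```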
The sooner the better: Web App Scanning Without Internet Exposure

In the fast-paced world of app development, security often takes a backseat to feature delivery and tight deadlines. Many organizations rely on external teams to perform penetration testing on their web applications and APIs, but this typically happens after the app has been live for some time and is driven by compliance or regulatory requirements. Waiting until this stage can leave vulnerabilities unaddressed during critical early phases, potentially leading to costly fixes, reputational damage, or even breaches. Early-stage application security testing is key to building a strong foundation and mitigating risks before they escalate. Wouldn't it be cool if there were a way to scan your apps in a proactive, automated way while they are still in beta? Since you are reading this article here in the F5 community, you probably already know that F5 Distributed Cloud Web App Scanning allows you to dynamically and continuously scan your external attack surface to uncover exposed web apps and APIs. We all know that exposing apps at an early stage of their development to the internet is risky, because they may contain unfinished or untested code that could lead to unintended data leaks, privacy violations, or other risks. Therefore you want to keep access to your beta-stage apps restricted.

Scanning but not exposing your apps

At this point in time, XC Web App Scanning can only scan apps that are exposed to the internet. But with some configuration tweaks, you can ensure that only WAS has access to your apps. I want to show a real-world example of how you can restrict access to your application solely to the XC WAS scan engine. Let's take a look at the beta-stage application we aim to perform penetration testing on. It is hosted on an EC2 instance in AWS. Of course we don't plan to expose our application directly to the internet without a Web Application Firewall.
Hence F5 Distributed Cloud Web App & API Protection (WAAP) will be positioned as a cloud proxy in front of our app. Therefore we must make sure that only traffic from F5 Distributed Cloud Services has access to our app. Next we want to make sure that only the scan engine of F5 Distributed Cloud Web App Scanning can reach our app; again, we want to block the rest of the internet from accessing it. A picture says more than words; we want to achieve something like this:

How to set it up

Let's take a look at how we can satisfy our requirements.

... in AWS

In AWS, Security Groups are used to control which traffic is allowed to reach and leave the resources they are associated with. Since our application is hosted on an EC2 instance, the Security Group controls the ingress and egress traffic for the instance. One can think of it as a virtual packet filter firewall. Usually a protocol, a port, and a source IP address range in CIDR notation are specified for an inbound Security Group rule. We want to allow access to our EC2 instance only from F5 Distributed Cloud Services. Creating hundreds of ingress rules inside a Security Group did not seem very efficient to me, so I used a customer-managed prefix list and added all F5 Regional Edges. Prefix lists are configured in the VPC section of AWS. The IPv4 address list of all F5 Regional Edges is available here: Public IPv4 Subnet Ranges for F5 Regional Edges. After you have created your prefix list, you can use it in a Security Group. This way we met our first goal: only F5 Regional Edges can reach our app.

... in XC

In F5 Distributed Cloud, a similar kind of access control can be achieved using Service Policies. Service Policies are a mechanism to control and enforce fine-grained access control for applications deployed in XC. I created a Service Policy that allows access only from the list of ephemeral IP addresses associated with XC Web App Scanning, while blocking all other traffic.
In the XC Console, Service Policies are created under Security > Service Policies > Service Policies. First create a Service Policy; in the Rules section, select Allowed Sources. Then add the IP ranges to the IPv4 Prefix List. The list of all IP addresses associated with XC Web App Scanning is available here: Use Known IPs in Web App Scanning. The Service Policy is then applied in the Common Security Controls section of an HTTP Load Balancer configuration.

Conclusion

By combining AWS Security Groups and XC Service Policies, I can ensure that my beta app (or beta API) is accessible exclusively to the scan engine of XC Web App Scanning, while blocking access from malicious actors on the internet.
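As a closing footnote, the AWS part of the setup described above can also be scripted instead of clicked. Here is a hedged AWS CLI sketch; the security group ID and the example CIDR (192.0.2.0/24) are placeholders, and the real entries should come from the published F5 Regional Edge ranges:

```shell
# Create a customer-managed prefix list and allow HTTPS ingress from it.
# sg-0123456789abcdef0 and the CIDR entry are placeholders.
allow_f5_res() {
    pl_id=$(aws ec2 create-managed-prefix-list \
        --prefix-list-name f5-regional-edges \
        --address-family IPv4 --max-entries 200 \
        --entries 'Cidr=192.0.2.0/24,Description=example-RE-range' \
        --query 'PrefixList.PrefixListId' --output text)
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --ip-permissions "IpProtocol=tcp,FromPort=443,ToPort=443,PrefixListIds=[{PrefixListId=${pl_id}}]"
}
```

Keeping the prefix list in code makes it easy to refresh when F5 updates the Regional Edge ranges.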