F5 XC and Azure FrontDoor
Hi all, I would like to ask for some advice on how to configure an application behind Azure Front Door on the F5 XC platform. Because Azure Front Door requires a valid SSL certificate to forward traffic, on BIG-IP we used a workaround with a "WAF domain" that carried a valid certificate, so Front Door could reach the application through AWAF. F5 XC creates a DNS name for the virtual host (load balancer) in the format ves-io-uuid.ac.vh.ves.io, but that address does not present a valid SSL certificate, so Front Door cannot connect. Does anyone have experience with this kind of implementation?

Another question is how you recognize clients in such cases, especially when one domain is behind Azure Front Door or another CDN and another domain is not. By default a client is identified by its IP address, so when traffic is forwarded through a CDN or Front Door we need to change the User Identifier from "Client IP Address" to a different object, for example a header. But what about an application that is not behind a CDN? How will its clients be identified? Can I combine both somehow on one virtual host (load balancer)? Thank you.
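To make the second question concrete, here is a minimal sketch (plain Python, not XC configuration) of the kind of per-request fallback that combining both traffic types on one virtual host would require: trust a CDN-inserted header only when the connection actually comes from the CDN's address space, and fall back to the source IP otherwise. The header names and the Front Door address range below are assumptions for illustration only.

```python
import ipaddress

# Assumption: address ranges the CDN / Front Door egresses from (illustrative only).
CDN_RANGES = [ipaddress.ip_network("147.243.0.0/16")]

def client_identifier(source_ip: str, headers: dict) -> str:
    """Return the value used to identify the client (e.g. for rate limiting)."""
    src = ipaddress.ip_address(source_ip)
    behind_cdn = any(src in net for net in CDN_RANGES)
    if behind_cdn:
        # Assumption: the CDN inserts the original client address in one of these headers.
        for name in ("X-Azure-ClientIP", "X-Forwarded-For"):
            if name in headers:
                return headers[name].split(",")[0].strip()
    # Direct traffic (no CDN in front): the source IP is the client.
    return str(src)

# Example: one request arriving via Front Door, one arriving directly.
print(client_identifier("147.243.10.20", {"X-Azure-ClientIP": "203.0.113.7"}))  # 203.0.113.7
print(client_identifier("198.51.100.9", {}))                                    # 198.51.100.9
```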
URI Redirect

I thought it would be simple, but I guess I am just too thick. I need a redirect to happen only if the URI does not contain a certain path. If the URI contains /vvs-df, /vvs-df/appointments, etc., there is no redirect; every other URI should redirect to the URL in the iRule.

Do not redirect (URI contains vvs-df): https://website/vvs-df, https://website/vvs-df/appointments, etc.
Redirect: https://website/ and any other URI.

```tcl
when HTTP_REQUEST {
    if { [HTTP::host] contains "<hostname>" } {
        # Redirect everything except URIs under /vvs-df
        if { not ([HTTP::uri] starts_with "/vvs-df") } {
            HTTP::redirect "<redirect URL>"
        }
    }
}
```
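As a quick sanity check of the prefix logic outside the BIG-IP, here is a small Python sketch that applies the same test to the example URIs above; the redirect target is the same placeholder used in the iRule.

```python
from urllib.parse import urlsplit

NO_REDIRECT_PREFIX = "/vvs-df"
REDIRECT_TARGET = "<redirect URL>"  # placeholder, same as in the iRule

def decide(url: str) -> str:
    """Mirror the iRule: redirect unless the path starts with /vvs-df."""
    path = urlsplit(url).path or "/"
    if path.startswith(NO_REDIRECT_PREFIX):
        return "pass through"
    return f"redirect to {REDIRECT_TARGET}"

for url in (
    "https://website/vvs-df",
    "https://website/vvs-df/appointments",
    "https://website/",
    "https://website/anything-else",
):
    print(url, "->", decide(url))
```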
NGINX App Protect v5 Signature Notifications

When working with NAP (NGINX App Protect) you don't have an easy way of knowing when any of the signatures are updated. As an old BIG-IP guy I find that rather strange; there you have built-in automatic updates and notifications. Unfortunately there aren't any APIs you can probe, which would have been the best way of doing it. Hopefully that will come one day. However, "friction" and "hard" will not keep me from finding a solution 😆

I have previously made a solution for NAPv4, and I have been mentally gearing up for a NAPv5 version. The reason for the delay lies in the different ways NAPv4 and NAPv5 are designed. Where NAPv4 is one module loaded into NGINX, NAPv5 is completely detached from NGINX (well, almost: you still need to load a small module to get the traffic from NGINX to NAP) and only works with containers. NAPv5 has also moved the signature "storage" from the host it runs on (e.g. an installed package) to the policy itself. The consequence is that finding a valid "source of truth" for the latest signature versions is not as simple as building a new image and seeing which versions got installed. There are very good reasons for this design that I will come back to later.

When you fire up NAPv5 you get three containers for the data plane (NGINX, waf-enforcer and waf-config-mgr) and one for the "control plane" (waf-compiler). For this solution the "control plane" is the useful one. It isn't really a control plane, but it gives a nice picture of how it is detached from the actual processing of traffic. When you update your signatures you are actually doing it through the waf-compiler: it is a container hosting the actual signature databases, and every time a new version is released you need to rebuild this container, compile your policies into a new version, and reload NGINX. This is what I take advantage of when I look for signature updates. It has the upside that you only need the waf-compiler to get the information you need. My solution will take care of the entire process and make sure that you are always running with the latest signatures.

Back to the reason why the split of functions is a very good thing. When you build a new version of the NGINX image and deploy it into production, NAP needs to compile the policies as they load. During the compilation NGINX is not moving any traffic! This becomes an annoying problem even with a low number of policies; I have installations where it takes 5 to 10 minutes from deployment of the new image until it starts moving traffic. That is a crazy long time when you are used to working with micro-services and expect everything to flip within seconds. If you have your NAPv4 hooked up to NGINX Instance Manager (NIM) the problem is somewhat mitigated, as NIM compiles the policies before sending them to the gateways. But NIM is not a nimble piece of software, so it doesn't always fit into the environment.

And now, here is my hack to the notification problem. The solution consists of two bash scripts and one HTML template. The template is used when sending a notification mail; I wanted it to be pretty, and that was easiest with HTML. Strictly speaking you could do with just a simple text-based mail. Save all three in the same directory. The main script is called "waf_policy_auto_compile.sh" and is the one you put into crontab. It will build a new waf-compiler image and compile a test policy. The outcome of that is information about which signature versions are the newest.
It will then extract the versions from an old policy and simply check whether any of them differ. For this to work you need to have an uncompiled policy (you can just use the default one) and a compiled version of it ready beforehand. When a diff has been identified, the notification logic is executed and a second script is called: "compile_waf_policies.sh". It basically just trawls through the directory of your policies and logging profiles and compiles a new version of them all. It is not necessary to recompile the logging profiles, so this will probably change in the next version. As the compilation completes, the main script nudges NGINX to reload and thereby activate all the new versions.

You can run "waf_policy_auto_compile.sh" with a verbose flag (-v) and a debug flag (-d). The verbose flag is intended for when you run it in a terminal and want the information displayed there. Debug is, well, for debug 😝

The construction of the scripts is based on my own needs, but they should be easy to adjust to any need. I will be happy for any feedback, so please don't hold back 😄
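As a rough illustration of the core idea (the actual scripts are attached to the original post), here is a minimal Python sketch of the diff step, assuming the freshly compiled test policy and the previously compiled one each expose their signature versions as simple strings. How you extract those strings depends on your bundle layout, so the file paths and component names below are hypothetical placeholders.

```python
from pathlib import Path

# Hypothetical locations: one file per component holding the version string
# extracted from the newly compiled test policy and the previously compiled policy.
NEW_DIR = Path("/opt/nap/versions/new")
OLD_DIR = Path("/opt/nap/versions/current")
COMPONENTS = ("attack-signatures", "threat-campaigns", "bot-signatures")

def read_versions(directory: Path) -> dict:
    """Read <component> -> version string, e.g. 'attack-signatures' -> '2024.08.27'."""
    return {c: (directory / c).read_text().strip() for c in COMPONENTS}

def changed_components() -> list:
    new, old = read_versions(NEW_DIR), read_versions(OLD_DIR)
    return [c for c in COMPONENTS if new[c] != old[c]]

if __name__ == "__main__":
    diff = changed_components()
    if diff:
        # This is where the real solution sends the HTML mail and kicks off
        # compile_waf_policies.sh followed by an NGINX reload.
        print("Updated components:", ", ".join(diff))
    else:
        print("Signatures are up to date.")
```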
SMPP IRULES that insert destination port in One Vip, as Port in Second Vip pool member

I have a two-step situation with SMPP traffic:

Leg 1: Aggregator traffic -> F5 VIP 1 (10.1.1.1:*, e.g. port 5102) -> F5 SNAT IP 10.1.1.2 -> SMS FW 10.1.1.3:10000
Leg 2: SMS FW 10.1.1.3 -> F5 VIP 2 (10.1.1.4:10000) -> F5 SNAT IP 10.1.1.2 -> SMSC 10.100.114.129:<destination port from VIP 1> (e.g. 10.100.114.129:5102)

How do I solve this using iRules?
Case (in)sensitivity for JSON schema in ASM policy

Hi all, I would like to know whether the following behaviour is correct or a bug. I have an ASM policy where JSON profiles are created from a swagger file with JSON Schema files. The global policy setting "Policy is Case Sensitive" is set to "No". However, the payload in requests is strictly checked: if the schema file defines a parameter "username", then a request with the parameter "Username" is not valid and violates the security policy. Does that mean the JSON schema has higher priority than the global settings of the policy?

Part of the JSON schema:

```json
"required": ["password", "username"]
```

Valid request payload:

```json
{"username": "myuser", "password": "mypass"}
```

Request that reports the violation "JSON data does not comply with JSON schema":

```json
{"Username": "myuser", "Password": "mypass"}
```

In the details it reports that the parameter username is missing and that an illegal additional property Username is defined.
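The behaviour is consistent with JSON Schema itself, where property names are case-sensitive regardless of any policy-level case-sensitivity setting. A small sketch with the Python jsonschema package reproduces both reported violations; only the "required" list comes from the post above, while the property definitions and "additionalProperties": false are assumptions added so the example is self-contained.

```python
from jsonschema import Draft7Validator

# Reconstructed, illustrative schema: only "required" comes from the original post.
schema = {
    "type": "object",
    "properties": {
        "username": {"type": "string"},
        "password": {"type": "string"},
    },
    "required": ["password", "username"],
    "additionalProperties": False,
}

validator = Draft7Validator(schema)

valid_payload = {"username": "myuser", "password": "mypass"}
invalid_payload = {"Username": "myuser", "Password": "mypass"}

for payload in (valid_payload, invalid_payload):
    errors = [e.message for e in validator.iter_errors(payload)]
    print(payload, "->", errors or "valid")
    # The second payload fails with "'username' is a required property" and
    # "Additional properties are not allowed", matching the two violations ASM reports.
```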
URL Redirect? URL Rewrite?

I'm still on my journey learning NGINX, so forgive the stupid question. My goal is as follows: I want my clients to be able to browse to https://www.john.com/Greenlight. I don't want that address to change in the client's browser, but I want the page to actually load from here: https://dev-assets.john.net/cdn/html2canvas/1.4.1/license.html

I tried this, but it's not working. I think I'm close, but maybe not:

```nginx
############################################################
# Greenlight redirect
location /Greenlight {
    rewrite ^/Greenlight(/.*)$ $1 break;
    rewrite ^/Greenlight$ / break;
    proxy_pass https://dev-assets.john.net/cdn/html2canvas/1.4.1;
    proxy_set_header Host john-assets.alkami.net;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    try_files $uri $uri/ /license.html;
}
```

I'm thinking maybe I need a rewrite statement... any guidance would be appreciated.
Upgrading BIG-IP VE from version 14.x to 16.x

Hello, our BIG-IP runs as a VM in Oracle Cloud on version 14.1.5.6, and we need to upgrade to 16.x. I kindly need your support and suggestions on how to upgrade, as I am a little new to upgrading in a VM. Please share the procedure step by step so it is easy for me, and suggest which version we should go to; we need a mature one. I appreciate your feedback.
Can SSM Agent run on EC2 with BEST license?

I am setting up F5 on AWS, using a BEST-licensed AMI from the Marketplace. I want to be able to manage the instance via Systems Manager. For EC2 instances to communicate via SSM, I must install the ssm-agent, which is not installed on the Marketplace AMI. However, I have discovered that the BEST AMI has FIPS protection: installing the ssm-agent triggers critical warnings, and my system becomes unavailable after a reboot. So far, the articles here have pointed to "downgrading" to a license that does not have FIPS as the only way to disable it entirely. However, WAF is a requirement for me, and it only appears to be available in the BEST license. Is there a license that has Web Application Firewall but no (or a less restrictive) FIPS, or a way to allow SSM on a FIPS-protected machine? It is the ssm commands installed in /usr/bin that trigger the alert.