Microsoft 365 IP Steering Python Script
Hello! Hola! I have created a small and rudimentary script that generates a data group with Microsoft 365 IPv4 and IPv6 addresses, to be used by an iRule or policy. There are other scripts that solve the same problem, but they were either:

- based on iRulesLX, which forces you to enable iRulesLX just for this and caused me issues when upgrading (the memory table got filled with nonsense), or
- based on the XML version of the list, which Microsoft has since replaced with a JSON file.

This script is a very simple bash script that calls an equally simple Python file, plus a couple of helper files. The biggest to-dos are:

- Add a more secure approach to password usage. Right now the password is stored in a parameters file locked down with permissions; there should be a better way.
- Add support for URLs.

You can find the contents here: https://github.com/teoiovine-novared/fetch-office365/tree/main

I appreciate advice, (constructive) criticism and questions all the same! Thank you for your time.
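The repository above has the real script; as a rough illustration of the idea, here is a minimal Python sketch that pulls the Microsoft 365 endpoints JSON and writes it out as an address-type external data group file. The endpoints URL and the "network <prefix>," file format are assumptions on my part — verify both against the repository and your BIG-IP version before relying on this.

# fetch_m365_datagroup.py -- illustrative sketch only, not the repository's script.
# Assumptions (verify before use): the Microsoft 365 endpoints web service at
# endpoints.office.com, and the external data group file format
# "network <prefix>," with one entry per line.
import json
import urllib.request
import uuid

ENDPOINTS_URL = (
    "https://endpoints.office.com/endpoints/worldwide?clientrequestid="
    + str(uuid.uuid4())
)

def fetch_m365_prefixes():
    """Return the IPv4/IPv6 prefixes published for Microsoft 365."""
    with urllib.request.urlopen(ENDPOINTS_URL) as resp:
        services = json.load(resp)
    prefixes = set()
    for service in services:
        prefixes.update(service.get("ips", []))
    return sorted(prefixes)

def write_datagroup(prefixes, path="office365_datagroup.txt"):
    """Write an address-type external data group file, one entry per line."""
    with open(path, "w") as f:
        for prefix in prefixes:
            f.write("network %s,\n" % prefix)

if __name__ == "__main__":
    write_datagroup(fetch_m365_prefixes())

The resulting file could then be imported as an external data group and referenced from an iRule or LTM policy.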
F5 XC vk8s open source nginx deployment on RE

Code is community submitted, community supported, and recognized as "Use At Your Own Risk".

Short Description
This is an example of an F5 XC virtual kubernetes (vk8s) workload on Regional Edges that rewrites URL requests and the response body.

Problem solved by this Code Snippet
The rewrite option under the XC routes is sometimes limited when it comes to dynamically replacing a specific string — for example, replacing the string "ne" with "da" no matter where in the URL the string is located:

location ~ .*ne.* {
    rewrite ^(.*)ne(.*) $1da$2;
}

Also, XC has no default option to replace a string in the payload the way an F5 LTM rewrite profile or an iRule stream option can:

sub_filter 'Example' 'NIKI';
sub_filter_types *;
sub_filter_once off;

Open source NGINX can also be used to return a custom error page based on the server error:

error_page 404 /custom_404.html;
location = /custom_404.html {
    return 404 'gangnam style!';
    internal;
}

Now, with proxy protocol support in XC, NGINX can see the real client IP even for non-HTTP traffic that carries no XFF HTTP headers:

log_format niki '$proxy_protocol_addr - $remote_addr - $remote_user [$time_local] - $proxy_add_x_forwarded_for'
                '"$request" $status $body_bytes_sent '
                '"$http_referer" "$http_user_agent"';

#limit_req_zone $binary_remote_addr zone=mylimit:10m rate=1r/s;

server {
    listen 8080 proxy_protocol;
    server_name localhost;

How to use this Code Snippet
Read the readme file in the GitHub link and modify the nginx default.conf file as per your needs.

Code Snippet Meta Information
Version: 1.25.4 Nginx
Coding Language: nginx config

Full Code Snippet
https://github.com/Nikoolayy1/xc_nginx/tree/main
F5 XC Session tracking with User Identification Policy

With F5 AWAF/ASM there is a feature called session tracking that allows tracking and blocking users who commit too many violations, based not only on IP address but also on things like the BIG-IP AWAF/ASM session cookie. What about F5 XC Distributed Cloud? Now we will answer that question.

Why is tracking on IP addresses sometimes not enough?
XC has a feature called Malicious Users that can block users if they generate too many service policy, WAF, bot or other violations. By default users are tracked by source IP address, but what happens if there are proxies or NAT devices in front of the XC Cloud? Then traffic for many users arrives from a single IP address, and when that IP address gets blocked, many users are blocked — not just the one that committed the violations. With that question answered, let's see what options we have.

Reference: AI/ML detection of Malicious Users using F5 Distributed Cloud WAAP

Trusted Client IP header
This option is useful when the real client addresses are carried in something like an XFF header added by the proxy in front of F5 XC. When this option is enabled, XC automatically uses that header instead of the IP packet to determine the client address and to enforce rate limiting, Malicious Users blocking, and so on. The XC logs will also show the address from the header as the source IP; if the header is absent, the address from the packet is used as a fallback.

References:
How to setup a Client IP as the Source IP on the HTTP Load Balancer headers? - F5 Distributed Cloud Services (zendesk.com)
Overview of Trusted Client IP Headers in F5 Distributed Cloud Platform

User Identification Policies
The second, more versatile feature is the XC user identification policy, which by default is set to "Client IP" — the client IP from the IP packet or, if a Trusted Client IP header is configured, the address from that header. When customized, this feature allows TLS fingerprints, HTTP headers such as the "Authorization" header, and other identifiers to be used to track users, enforce rate limiters on them, or — if Malicious Users is enabled — block them based on the configured identifier when they generate too many WAF violations, and much more. User identification falls back to the IP address in the packet if the user cannot be identified, and multiple identification rules can be configured and evaluated one after another, so the fallback to the packet IP only happens when no identification rule matches. If a cookie from the backend origin application is used for user identification and the XC WAF App Firewall is enabled, you can also use Cookie Protection to prevent the cookie from being sent from another IP address. The demo Juice Shop app at https://demo.owasp-juice.shop/ can be used for such testing.

References
Lab 3: Malicious Users (f5.com)
Malicious Users | F5 Distributed Cloud Technical Knowledge
Configuring user session tracking (f5.com)
How to configure Cookie Protection - F5 Distributed Cloud Services (zendesk.com)
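To make the evaluation order concrete, here is a small Python illustration — not XC's implementation, just the logic described above: try each identification rule in turn and fall back to the packet source IP when none of them matches, then track violations per identifier instead of per address.

# Illustration only: NOT how XC is implemented internally. It mirrors the
# evaluation order described above -- try each identification rule in turn,
# and fall back to the packet source IP when none of them matches.
from collections import Counter

def identify_user(request, rules):
    """Return the first identifier a rule can extract, else the source IP."""
    for rule in rules:
        identifier = rule(request)
        if identifier:
            return identifier
    return request["source_ip"]          # fallback: IP from the packet

# Example rules: an Authorization header, then a trusted client IP header.
rules = [
    lambda r: r["headers"].get("Authorization"),
    lambda r: r["headers"].get("X-Forwarded-For", "").split(",")[0].strip() or None,
]

violations = Counter()

def record_violation(request):
    """Count violations per identifier instead of per source IP."""
    violations[identify_user(request, rules)] += 1

record_violation({"source_ip": "203.0.113.7",
                  "headers": {"Authorization": "Bearer user-a-token"}})
record_violation({"source_ip": "203.0.113.7",
                  "headers": {}})
print(violations)   # two different "users" behind the same NAT address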
F5 XC vk8s workload with Open Source Nginx

I have shared the code in the link below under DevCentral code share: F5 XC vk8s open source nginx deployment on RE | DevCentral. Here I will describe the basic steps for creating a workload object — an F5 XC custom kubernetes object that creates kubernetes deployments, pods and ClusterIP-type services in the background — using the free unprivileged nginx image (nginxinc/docker-nginx-unprivileged: Unprivileged NGINX Dockerfiles, github.com).

1. Create a virtual site that groups your Regional Edges and Customer Edges. After that, create the vk8s virtual kubernetes and relate it to the virtual site. Note: keep in mind the limitations of kubernetes deployments on Regional Edges mentioned in Create Virtual K8s (vK8s) Object | F5 Distributed Cloud Tech Docs.

2. Create the workload object and select type "service", which can be related to a Regional Edge virtual site or a Customer Edge virtual site. Then select the container image, which will be loaded from a public repository like GitHub or from a private repo.

3. Configure an advertise policy that exposes the pod/container with a kubernetes ClusterIP service. If you are only deploying test containers, you do not need to advertise the container. To trigger commands at container start, you may need to use /bin/bash -c -- with an argument. Note: this is not needed for this workload deployment; it is just an example.

4. Overwrite the default config file for the unprivileged open source nginx with a file mount. Note: the volume name should not contain a dot, as it will cause issues.

5. For the image options, select a repository with no rate limit, otherwise you will see errors under the events for the pod. You can also configure the command and parameters to pass to the container at boot.

6. You can use emptyDir volumes on the virtual kubernetes on the Regional Edges for mounts like the log directory or the nginx cache zone, but the unprivileged nginx by default exports its logs to the XC GUI, so there is no need. Note: this is not needed for this workload deployment; it is just an example.

7. The logs and events can be seen under the pod dashboard, and the container/pod can even be accessed from there. Note: for some workloads you will need to direct the output to stderr to see the logs in the XC GUI, but not for nginx.

8. Reference the auto-created kubernetes ClusterIP service in an origin pool, using the workload name and the XC namespace (for example niki-nginx.default). Note: use the same virtual site where the workload was attached and the same port as in the advertise cluster config.

Deployments and ClusterIP services can also be created directly without a workload, but it is better to use the workload option. When you modify the nginx config you are actually modifying a configmap that the XC workload created in the background and mounted as a volume in the deployment, but you then need to trigger a deployment recreation, which is currently not supported by the XC GUI. From the GUI you can scale the workload to 0 pod instances and back to 1, but a better solution is to use kubectl: log into the virtual kubernetes like any other k8s environment using a certificate (just download the SSL/TLS cert) and run "kubectl rollout restart deployment/niki-nginx" — a small sketch of automating that step follows below. You can automate the entire process using the XC API and then use normal kubernetes automation to run the restart command; see F5 Distributed Cloud Services API for ves.io.schema.views.workload | F5 Distributed Cloud API Docs.
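As a rough sketch of that restart step — assuming you have downloaded the vk8s kubeconfig/credentials from the XC console and saved them locally (the file path and deployment name below are placeholders, not values from the article's environment) — something like this could be wrapped into your automation:

# Rough sketch: restart the vk8s deployment so the updated configmap is
# picked up. The kubeconfig path is a hypothetical placeholder -- substitute
# the file you downloaded from the XC console and your own workload name.
import subprocess

KUBECONFIG = "./ves_system_my-vk8s_kubeconfig.yaml"   # hypothetical filename
DEPLOYMENT = "deployment/niki-nginx"

def rollout_restart(deployment=DEPLOYMENT, kubeconfig=KUBECONFIG):
    """Run 'kubectl rollout restart' against the vk8s cluster."""
    subprocess.run(
        ["kubectl", "--kubeconfig", kubeconfig, "rollout", "restart", deployment],
        check=True,
    )

if __name__ == "__main__":
    rollout_restart()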
F5 XC has added proxy protocol support, so the nginx container can now work directly with the real client IP addresses without XFF HTTP headers — even for non-HTTP services like SMTP that nginx supports — and this way XC can act as a layer 7 proxy for email/SMTP traffic. You just need to add the "proxy_protocol" directive and log the variable "$proxy_protocol_addr", as shown in the log_format example in the code-share article above.

Related resources: for NGINX Plus deployments with advanced functions like SAML or OpenID Connect (OIDC), or the NGINX Plus dynamic modules like njs that allow JavaScript scripting (similar to F5 BIG-IP or BIG-IP Next TCL-based iRules), see:
Enable SAML SP on F5 XC Application Bolt-on Auth with NGINX Plus and F5 Distributed Cloud
Dynamic Modules | NGINX Documentation
njs scripting language (nginx.org)
Accepting the PROXY Protocol | NGINX Documentation
F5 XC reviewing API requests which the GUI sends and a backup of the config with Python/Ansible

Short Description
The F5 XC Distributed Cloud GUI sends API requests with a JSON body to the system in the background, and those requests can be easily reviewed.

Problem solved by this Code Snippet
If someone wonders how to perform the same tasks the XC GUI does, but through automation with the API and JSON, this article will help. At the end I also show how to retrieve XC JSON data with the API.

How to use this Code Snippet
Reviewing the API requests that are generated by the XC GUI.

Full Code Snippet
There are three ways to review the API requests that the GUI generates:

- On each XC element, for example the load balancer, you can click on the JSON and see the JSON code. The JSON code can even be edited directly from the GUI dashboard!
- The API documentation can be reviewed directly from the XC GUI.
- The final option is to use the browser developer tools and see which API requests are sent by the F5 XC GUI. This feature is now present on most new F5 products like F5OS (VELOS/rSeries) and BIG-IP Next.

The JSON objects created by the XC API are a form of backup configuration. Even if the objects were created from the GUI, API GET requests can be used to retrieve their JSON data, which can be saved to a backup file as a snapshot.

I have used Python with the requests library, and the URL and API key are added as user input arguments. The script can be used to get information like the XC load balancers or service policies — for example "/api/config/namespaces/default/service_policys". The script first calls an API endpoint to get, for example, all the load balancer or service policy names, and then uses those names to get the config for each individual service policy or load balancer in a for loop. There is a time.sleep(1) to add a one-second delay between API requests. The code can take the full URL, like https://{tenant_name}.console.ves.volterra.io/api/config/namespaces/{namespace}/service_policys, and the API token during script execution, or the arguments can be supplied at the start of execution by commenting out url = sys.argv[1] and api_token = sys.argv[2] and running the script like python3 service_policy.py {argument1} {argument2}. The API token is hidden by default using the getpass library for extra security.

See GitHub for the code: Nikoolayy1/xc_api_script: XC API script to retrieve basic json config (github.com)

In some cases using Terraform for XC will be best, as XC has strong Terraform support, as seen in the links below. Ansible can also be used, but XC does not have many developed Ansible modules, so in many cases the Ansible URI module will be needed — and the Ansible URI module (ansible.builtin.uri module - Interacts with webservices - Ansible Documentation) is just the Python requests or http.client module in the background, as Ansible is Python under the hood, so in that case it is better to use Python directly.

XC Terraform:
Terraform | F5 Distributed Cloud Tech Docs
F5 Distributed Cloud WAAP deployment with Terrafor... - DevCentral
Using Terraform and F5® Distributed Cloud Mesh to ... - DevCentral

Example Ansible code (even if I said Python is better in this case):
xc_api_script/xc_ansible at main · Nikoolayy1/xc_api_script (github.com)
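For illustration, here is a minimal Python sketch of the retrieval loop described above: list the object names first, then GET each object's JSON with a one-second pause and save everything as a snapshot. The real script is in the GitHub repository; the "APIToken" authorization scheme and the shape of the list response (an "items" array with a "name" field) are assumptions here, so verify both against the repository and the XC API docs before relying on this.

# Minimal sketch of the backup loop described above -- not the repository's script.
# Assumptions (verify against your tenant): the "APIToken" Authorization scheme,
# and a list response of the form {"items": [{"name": "..."}, ...]}.
import getpass
import json
import sys
import time

import requests

url = sys.argv[1]   # e.g. https://{tenant}.console.ves.volterra.io/api/config/namespaces/default/service_policys
api_token = getpass.getpass("APIToken: ")

headers = {"Authorization": "APIToken " + api_token}

# First request: list the object names.
listing = requests.get(url, headers=headers).json()
names = [item["name"] for item in listing.get("items", [])]

# Second pass: fetch each object's full JSON config and save it as a snapshot.
backup = {}
for name in names:
    backup[name] = requests.get(url + "/" + name, headers=headers).json()
    time.sleep(1)   # one second between requests, as described in the article

with open("xc_backup.json", "w") as f:
    json.dump(backup, f, indent=2)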
Create an internal HTTP Load-Balancer on Volterra with Terraform

Problem this snippet solves:
How to create an internal HTTP Load-Balancer with VoltMesh where the Origin is reachable through a Volterra node. Two steps are needed:
- Creation of the Origin (1-origin.tf file)
- Creation of the Load-Balancer (2-http-lb.tf file)

How to use this snippet:
Pre-Requirements: Have a Volterra API Certificate. Please see this page for the API Certificate generation: https://volterra.io/docs/how-to/user-mgmt/credentials

Extract the certificate and the key from the .p12:

openssl pkcs12 -info -in certificate.p12 -out private_key.key -nodes -nocerts
openssl pkcs12 -info -in certificate.p12 -out certificate.cert -nokeys

Create a variables.tf Terraform variables file:

variable "api_cert" {
  type    = string
  default = "/<full path to>/certificate.cert"
}

variable "api_key" {
  type    = string
  default = "/<full path to>/private_key.key"
}

variable "api_url" {
  type    = string
  default = "https://<tenant_name>.console.ves.volterra.io/api"
}

Create a main.tf Terraform file:

terraform {
  required_version = ">= 0.12.9, != 0.13.0"
  required_providers {
    volterra = {
      source  = "volterraedge/volterra"
      version = ">=0.0.6"
    }
  }
}

provider "volterra" {
  api_cert = var.api_cert
  api_key  = var.api_key
  url      = var.api_url
}

In the directory where your terraform files are, run:

terraform init

Then:

terraform apply

Code :

//==========================================================================
// Definition of the Origin, 1-origin.tf file
resource "volterra_origin_pool" "sample-http-origin-pool" {
  name = "sample-http-origin-pool"
  // Name of the namespace where the origin pool must be deployed
  namespace = "mynamespace"

  origin_servers {
    private_ip {
      ip = "10.17.20.13"
      // From which interface of the node onsite the IP of the service is reachable.
      // Values are inside_network / outside_network or both.
      outside_network = true
      // Site definition
      site_locator {
        site {
          name      = "name-of-the-site"
          namespace = "system"
          tenant    = "name-of-the-tenant"
        }
      }
    }
    labels = {}
  }

  no_tls                 = true
  port                   = "80"
  endpoint_selection     = "LOCALPREFERED"
  loadbalancer_algorithm = "LB_OVERRIDE"
}
// End of the file
//==========================================================================

//==========================================================================
// Definition of the Load-Balancer, 2-http-lb.tf file
resource "volterra_http_loadbalancer" "sample-http-lb" {
  depends_on = [volterra_origin_pool.sample-http-origin-pool]

  // Mandatory "Metadata"
  name = "sample-http-lb"
  // Name of the namespace where the load balancer must be deployed
  namespace = "mynamespace"
  // End of mandatory "Metadata"

  // Mandatory "Basic configuration"
  domains = ["mydomain.internal"]
  http {
    dns_volterra_managed = false
  }
  // End of mandatory "Basic configuration"

  // Optional "Default Origin server"
  default_route_pools {
    pool {
      name      = "sample-http-origin-pool"
      namespace = "mynamespace"
    }
    weight = 1
  }
  // End of optional "Default Origin server"

  // Mandatory "VIP configuration"
  advertise_on_public_default_vip = true
  // End of mandatory "VIP configuration"

  // Mandatory "Security configuration"
  no_service_policies = true
  no_challenge        = true
  disable_rate_limit  = true
  disable_waf         = true
  // End of mandatory "Security configuration"

  // Mandatory "Load Balancing Control"
  source_ip_stickiness = true
  // End of mandatory "Load Balancing Control"
}
// End of the file
//==========================================================================
Bypass Azure Login Page by adding a login hint in the SAML Request

Problem this snippet solves:
Enhance the login experience between F5 (SAML SP) and Azure (SAML IdP) by injecting the user's email address as a login hint on his behalf. This improves the user experience because it bypasses the Azure login page and avoids the user typing his login/email address twice.

Example of use
Your application needs to be accessed by both domain users and federated users. The application is protected by F5 APM with a login page that asks for the user's email address. Based on the email address value you determine the domain:
- if the user is a domain user, you authenticate him against the local directory (AD Auth, LDAP Auth, or ...)
- if the user is a federated user (such as xxx@gmail.com), you send him to the Azure IdP that manages all federated access

This snippet is particularly interesting for the federated-user scenario because:
- without this code, a federated user needs to type his login twice: first on the F5 login page and a second time on the Azure login page
- with this code, a federated user only types his login on the F5 login page

How to use this snippet:
1. Go to "Access > Federation > SAML Service Provider > External IDP Connectors" and edit the "External IdP Connectors" object that matches the Azure IdP app.
2. In the "Single Sign On Service Settings", append the string "?login_hint=" to the end of the "Single Sign On Service URL". The string "?login_hint=" is added here only so that the iRule can uniquely identify it later and replace it.
3. Finally, apply the iRule below on the virtual server that has the Access Policy enabled, has the SAML SP role attributed, and is bound to the Azure IdP application. The iRule simply catches the "Single Sign On Service URL" and replaces "?login_hint=" with "?login_hint=xxxx@gmail.com".

Code :

when CLIENT_ACCEPTED {
    ACCESS::restrict_irule_events disable
}
when HTTP_RESPONSE_RELEASE {
    if { [string tolower [HTTP::header value "Location"]] contains "/saml2/?login_hint=" } {
        set user_login [ACCESS::session data get "session.logon.last.mail"]
        #log local0. "Before adding the hint [HTTP::header value "Location"]"
        set locationWithoutHint "?login_hint="
        set locationWithHint "?login_hint=$user_login"
        HTTP::header replace Location [string map -nocase "${locationWithoutHint} ${locationWithHint}" [HTTP::header Location]]
        #log local0. "After adding the hint [HTTP::header value "Location"]"
    }
}
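For readers less familiar with TCL, this is the substitution the iRule performs on the Location header, expressed as a small Python illustration — the URL below is a made-up example with a placeholder tenant ID:

# Python illustration of the Location-header substitution the iRule performs.
# The URL is a made-up example; <tenant-id> is a placeholder.
def add_login_hint(location, user_login):
    """Replace the '?login_hint=' placeholder with the user's email address."""
    return location.replace("?login_hint=", "?login_hint=" + user_login)

location = "https://login.microsoftonline.com/<tenant-id>/saml2/?login_hint="
print(add_login_hint(location, "xxxx@gmail.com"))
# https://login.microsoftonline.com/<tenant-id>/saml2/?login_hint=xxxx@gmail.com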
F5 XC Distributed Cloud HTTP Header manipulations and matching of the client ip/user HTTP headers

1. F5 XC Distributed Cloud HTTP header manipulations
In F5 XC Distributed Cloud, some client information is saved to variables that can be inserted into HTTP headers, similar to how F5 BIG-IP saves data that can then be used in an iRule or a Local Traffic Policy. By default XC inserts an XFF header with the client IP address, but what if the end servers want an HTTP header with a different name to carry the real client IP? Under the HTTP load balancer, under "Other Options" and then "More Options", the "Header Options" can be found. There the predefined variables can be used for this job — for example, $[client_address]. A list of the predefined variables for F5 XC: https://docs.cloud.f5.com/docs/how-to/advanced-security/configure-http-header-processing

There is also a $[user] variable, and maybe in the future, if F5 XC performs the authentication of the users, this option will insert the user in a proxy chaining scenario; for now I think it just manipulates data in the XAU (X-Authenticated-User) HTTP header.

2. Matching of the real client IP HTTP headers
You can also match an XFF header, if it is inserted by a proxy device before the F5 XC nodes, for security bypass/blocking or for logging in F5 XC.

For user logging from the XFF header: under "Common Security Controls" create a "User Identification Policy". You can also match a regex that matches the IP address — this is useful when there are multiple IP addresses in the XFF header, because there could have been many proxy devices in the data path and we want to check whether just one specific address is present.

For security bypass or blocking based on the XFF header: under "Common Security Controls" create "Trusted Client Rules" or "Client Blocking Rules". If you already have a "User Identification Policy" you can simply use the "User Identifier", but it cannot use a regex in this case. To match a regex value in the header that is just a single IP address, even when the header contains many IP addresses, use a regex such as (1\.1\.1\.1) to match the address 1.1.1.1.

To use the client IP address as the source IP address towards the backend origin servers in the TCP packet after going through F5 XC (similar to removing the SNAT pool or Automap on F5 BIG-IP), XC provides a dedicated option. In the same way, the XAU (X-Authenticated-User) HTTP header can be used in a proxy chaining topology, when there is a proxy before the F5 XC that has added this header.

Edit: Keep in mind that in some cases the XC regex, for example (1\.1\.1\.1), should be written without the parentheses, as 1\.1\.1\.1, so test it — this could be something new, and I have seen it in service policy regex matches when making a new custom signature that was not in the WAAP WAF XC policy. I could make a separate article for this.
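As a quick illustration of why the escaped dots matter when the XFF header carries several addresses (the addresses below are made-up examples, and this is plain regex behavior, not XC-specific code):

# Quick illustration (made-up addresses): matching one specific client IP
# inside an XFF header that contains several addresses. The dots are escaped
# so that the pattern only matches the literal address 1.1.1.1.
import re

xff = "203.0.113.5, 1.1.1.1, 10.0.0.2"

pattern = re.compile(r"1\.1\.1\.1")
print(bool(pattern.search(xff)))        # True - 1.1.1.1 is present in the header

loose = re.compile(r"1.1.1.1")          # unescaped dots match any character
print(bool(loose.search("1x1y1z1")))    # True - which is usually not intended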
Convert curl command to BIG-IP Monitor Send String

Problem this snippet solves:
Convert curl commands into an HTTP or HTTPS monitor Send String.

How to use this snippet:
Save this code into a Python script and run it as a replacement for curl. Related Article: How to create custom HTTP monitors with Postman, curl, and Python

An example would be:

% python curl_to_send_string.py -X POST -H "Host: api.example.com" -H "User-Agent: Custom BIG-IP Monitor" -H "Accept-Encoding: identity" -H "Connection: Close" -H "Content-Type: application/json" -d '{ "hello": "world" } ' "http://10.1.10.135:8080/post?show_env=1"

SEND STRING:
POST /post?show_env=1 HTTP/1.1\r\nHost: api.example.com\r\nUser-Agent: Custom BIG-IP Monitor\r\nAccept-Encoding: identity\r\nConnection: Close\r\nContent-Type: application/json\r\n\r\n{\n \"hello\":\n \"world\"\n}\n

Code :

Python 2.x

import getopt
import sys
import urllib

optlist, args = getopt.getopt(sys.argv[1:], 'X:H:d:')
flat_optlist = dict(optlist)
method = flat_optlist.get('-X', 'GET')

(host, uri) = urllib.splithost(urllib.splittype(args[0])[1])
protocol = 'HTTP/1.1'

headers = ["%s %s %s" % (method, uri, protocol)]
headers.extend([h[1] for h in optlist if h[0] == '-H'])

if not filter(lambda x: 'host:' in x.lower(), headers):
    headers.insert(1, 'Host: %s' % (host))

send_string = "\\r\\n".join(headers)
send_string += "\\r\\n\\r\\n"

if '-d' in flat_optlist:
    send_string += flat_optlist['-d'].replace('\n', '\\n')

send_string = send_string.replace("\"", "\\\"")

print "SEND STRING:"
print send_string

Python 3.x

import getopt
import sys
from urllib.parse import urlparse

optlist, args = getopt.getopt(sys.argv[1:], 'X:H:d:')
flat_optlist = dict(optlist)
method = flat_optlist.get('-X', 'GET')

# Keep the host:port and the query string, matching the Python 2.x behavior
parts = urlparse(args[0])
host = parts.netloc
uri = parts.path or '/'
if parts.query:
    uri += '?' + parts.query
protocol = 'HTTP/1.1'

headers = ["%s %s %s" % (method, uri, protocol)]
headers.extend([h[1] for h in optlist if h[0] == '-H'])

# filter() is lazy in Python 3, so use any() for the "Host header present" check
if not any('host:' in x.lower() for x in headers):
    headers.insert(1, 'Host: %s' % (host))

send_string = "\\r\\n".join(headers)
send_string += "\\r\\n\\r\\n"

if '-d' in flat_optlist:
    send_string += flat_optlist['-d'].replace('\n', '\\n')

send_string = send_string.replace("\"", "\\\"")

print("SEND STRING:")
print(send_string)

Tested this on version: 11.5