Microsoft 365 IP Steering Python Script
Hello! Hola! I have created a small and rudimentary script that generates a data group with Microsoft 365 IPv4 and IPv6 addresses, to be used by an iRule or policy. There are other scripts that solve this same problem, but they were either based on iRulesLX, which forces you to enable iRulesLX just for this and caused me issues when upgrading (the memory table got filled with nonsense), or based on the XML version of the list, which Microsoft has since replaced with a JSON file.

This script is a super simple bash script that calls another super simple Python file, plus a couple of helper files. The biggest to-dos are:

Add a more secure approach to password usage. Right now it is stored in a parameters file locked away with permissions. There should be a better way.
Add support for URLs.

You can find the contents here: https://github.com/teoiovine-novared/fetch-office365/tree/main

I appreciate advice, (constructive) criticism and questions all the same! Thank you for your time.
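For context, the list the script consumes is the JSON feed Microsoft publishes through its endpoints web service. Below is a minimal sketch of the fetch-and-convert step; this is not the author's script, and the data group name and tmsh command are illustrative only.

# Hedged sketch, not the author's script: fetch the Microsoft 365 endpoint list
# from the public worldwide endpoints web service and emit the published IP
# ranges as a BIG-IP data group. The data group name "o365_addresses" and the
# tmsh command are illustrative placeholders.
import json
import uuid
import urllib.request

feed = "https://endpoints.office.com/endpoints/worldwide?clientrequestid=" + str(uuid.uuid4())

with urllib.request.urlopen(feed) as resp:
    services = json.load(resp)

# Collect every IPv4/IPv6 prefix published across all service areas.
prefixes = sorted({ip for svc in services for ip in svc.get("ips", [])})

records = " ".join("{} {{ }}".format(p) for p in prefixes)
print("create ltm data-group internal o365_addresses type ip records add {{ {} }}".format(records))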
F5 XC vk8s open source nginx deployment on RE

Code is community submitted, community supported, and recognized as 'Use At Your Own Risk'.

Short Description
This is an example of an F5 XC virtual Kubernetes (vk8s) workload on Regional Edges that rewrites URL requests and the response body.

Problem solved by this Code Snippet
The XC Distributed Cloud rewrite option under the XC routes is sometimes limited when it comes to dynamically replacing a specific string, for example replacing the string "ne" with "da" no matter where in the URL the string is located.

location ~ .*ne.* {
    rewrite ^(.*)ne(.*) $1da$2;
}

In addition, XC has no default option to replace a string in the payload, like the rewrite profile in F5 LTM or the iRule stream option.

sub_filter 'Example' 'NIKI';
sub_filter_types *;
sub_filter_once off;

Open source NGINX can also be used to return a custom error based on the server error:

error_page 404 /custom_404.html;
location = /custom_404.html {
    return 404 'gangnam style!';
    internal;
}

Now, with proxy protocol support in XC, NGINX can see the real client IP even for non-HTTP traffic that does not carry XFF HTTP headers.

log_format niki '$proxy_protocol_addr - $remote_addr - $remote_user [$time_local] - $proxy_add_x_forwarded_for'
                '"$request" $status $body_bytes_sent '
                '"$http_referer" "$http_user_agent"';

#limit_req_zone $binary_remote_addr zone=mylimit:10m rate=1r/s;

server {
    listen 8080 proxy_protocol;
    server_name localhost;

How to use this Code Snippet
Read the description readme file in the GitHub link and modify the nginx default.conf file as per your needs.

Code Snippet Meta Information
Version: 1.25.4 Nginx
Coding Language: nginx config

Full Code Snippet
https://github.com/Nikoolayy1/xc_nginx/tree/main
F5 XC reviewing API requests which the GUI sends and a backup of the config with Python/Ansible

Short Description
The F5 XC Distributed Cloud GUI sends API requests with a JSON body to the system in the background, and those requests can be easily reviewed.

Problem solved by this Code Snippet
If you wonder how to automate, through the API and JSON, the same tasks that the XC GUI performs, this article will help. At the end I have also shown how to retrieve XC JSON data with the API.

How to use this Code Snippet
Reviewing the API requests that are generated by the XC GUI.

Full Code Snippet
There are three ways to review the API requests that the GUI generates:

On each XC element, for example the load balancer, you can click on the JSON and see the JSON code. The JSON code can even be edited directly from the GUI Dashboard!
The API documentation can be reviewed directly from the XC GUI.
The final option is to use the browser developer tools and see which API requests are sent by the F5 XC. This feature is now present on most new F5 products like F5OS (Velos/rSeries) and F5 NEXT 😉

The XC JSON objects created through the API are a form of backup configuration. Even if the objects were created from the GUI, API GET requests can be used to retrieve their JSON data, and this can be saved to a backup file in the form of a snapshot. I have used Python with the requests library, and the URL and API key are added as user input arguments. The script can be used to get information like the XC LB or service policies, for example "/api/config/namespaces/default/service_policys". The script will first call an API endpoint to get, for example, all the load balancer or service policy names, and then it will use those names to get the config for each individual service policy or load balancer, using a for loop (a bare-bones sketch of this loop is shown after the links at the end of this section). There is time.sleep(1) to add 1 second of slowness between each API request. The code can have the full URL like https://{tenant_name}.console.ves.volterra.io/api/config/namespaces/{namespace}/service_policys and the API token can be added during script execution, or the arguments can be input at the start of script execution by commenting out url = sys.argv[1] and api_token = sys.argv[2] and executing the script like python3 service_policy.py {argument1} {argument2}. The API token is hidden by default using the getpass library for extra security.

See GitHub for the code: Nikoolayy1/xc_api_script: XC API script to retrieve basic json config (github.com)

In some cases using Terraform for XC will be best, as XC has strong Terraform support, as seen at the links below. Ansible can also be used, but XC does not have many developed Ansible modules, so in many cases the Ansible URI module will need to be used, and the Ansible URI module (ansible.builtin.uri module – Interacts with webservices — Ansible Documentation) in the background is just the Python requests or http.client module, as Ansible is Python in the background, so it is better to use Python directly in that case.

XC Terraform:
Terraform | F5 Distributed Cloud Tech Docs
F5 Distributed Cloud WAAP deployment with Terrafor... - DevCentral
Using Terraform and F5® Distributed Cloud Mesh to ... - DevCentral

Example Ansible code (even if I said Python is better in this case 😀): xc_api_script/xc_ansible at main · Nikoolayy1/xc_api_script (github.com)
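A bare-bones sketch of the list-then-fetch loop described above is given here for orientation; the complete script lives in the GitHub repository. The Authorization header format and the "items"/"name" fields of the list response are assumptions in this sketch, so check them against your tenant's API documentation and the full script.

# Hedged sketch of the list-then-fetch loop described above (the complete script
# is on GitHub). The "APIToken" header format and the "items"/"name" fields of
# the list response are assumptions here.
import getpass
import json
import sys
import time

import requests

base_url = sys.argv[1].rstrip("/")   # e.g. https://{tenant}.console.ves.volterra.io/api/config/namespaces/default/service_policys
api_token = getpass.getpass("API token: ")   # hidden input, as in the original script
headers = {"Authorization": "APIToken " + api_token}

listing = requests.get(base_url, headers=headers).json()

for item in listing.get("items", []):
    name = item.get("name")
    obj = requests.get(base_url + "/" + name, headers=headers).json()
    with open(name + ".json", "w") as f:
        json.dump(obj, f, indent=2)   # snapshot-style backup of each object
    time.sleep(1)   # 1 second between requests, as in the original script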
Create an internal HTTP Load-Balancer on Volterra with Terraform

Problem this snippet solves:
How to create an internal HTTP Load-Balancer with VoltMesh where the Origin is reachable through a Volterra node. Two steps are needed:
Creation of the Origin (1-origin.tf file)
Creation of the Load-Balancer (2-http-lb.tf file)

How to use this snippet:
Pre-Requirements:
Have a Volterra API Certificate. Please see this page for the API Certificate generation: https://volterra.io/docs/how-to/user-mgmt/credentials

Extract the certificate and the key from the .p12:
openssl pkcs12 -info -in certificate.p12 -out private_key.key -nodes -nocerts
openssl pkcs12 -info -in certificate.p12 -out certificate.cert -nokeys

Create a variables.tf Terraform variables file:

variable "api_cert" {
  type    = string
  default = "/<full path to>/certificate.cert"
}

variable "api_key" {
  type    = string
  default = "/<full path to>/private_key.key"
}

variable "api_url" {
  type    = string
  default = "https://<tenant_name>.console.ves.volterra.io/api"
}

Create a main.tf Terraform file:

terraform {
  required_version = ">= 0.12.9, != 0.13.0"
  required_providers {
    volterra = {
      source  = "volterraedge/volterra"
      version = ">=0.0.6"
    }
  }
}

provider "volterra" {
  api_cert = var.api_cert
  api_key  = var.api_key
  url      = var.api_url
}

In the directory where your terraform files are, run:
terraform init
Then:
terraform apply

Code :

//==========================================================================
// Definition of the Origin, 1-origin.tf
// Start of the TF file
resource "volterra_origin_pool" "sample-http-origin-pool" {
  name = "sample-http-origin-pool"
  // Name of the namespace where the origin pool must be deployed
  namespace = "mynamespace"
  origin_servers {
    private_ip {
      ip = "10.17.20.13"
      // From which interface of the node onsite the IP of the service is reachable. Values are inside_network / outside_network or both.
      outside_network = true
      // Site definition
      site_locator {
        site {
          name      = "name-of-the-site"
          namespace = "system"
          tenant    = "name-of-the-tenant"
        }
      }
    }
    labels = {
    }
  }
  no_tls                 = true
  port                   = "80"
  endpoint_selection     = "LOCALPREFERED"
  loadbalancer_algorithm = "LB_OVERRIDE"
}
// End of the file
//==========================================================================

//==========================================================================
// Definition of the Load-Balancer, 2-http-lb.tf
// Start of the TF file
resource "volterra_http_loadbalancer" "sample-http-lb" {
  depends_on = [volterra_origin_pool.sample-http-origin-pool]
  // Mandatory "Metadata"
  name = "sample-http-lb"
  // Name of the namespace where the load balancer must be deployed
  namespace = "mynamespace"
  // End of mandatory "Metadata"
  // Mandatory "Basic configuration"
  domains = ["mydomain.internal"]
  http {
    dns_volterra_managed = false
  }
  // End of mandatory "Basic configuration"
  // Optional "Default Origin server"
  default_route_pools {
    pool {
      name      = "sample-http-origin-pool"
      namespace = "mynamespace"
    }
    weight = 1
  }
  // End of optional "Default Origin server"
  // Mandatory "VIP configuration"
  advertise_on_public_default_vip = true
  // End of mandatory "VIP configuration"
  // Mandatory "Security configuration"
  no_service_policies = true
  no_challenge        = true
  disable_rate_limit  = true
  disable_waf         = true
  // End of mandatory "Security configuration"
  // Mandatory "Load Balancing Control"
  source_ip_stickiness = true
  // End of mandatory "Load Balancing Control"
}
// End of the file
//==========================================================================

Tested this on version: No Version Found
Bypass Azure Login Page by adding a login hint in the SAML Request

Problem this snippet solves:
Enhance the login experience between F5 (SAML SP) and Azure (SAML IDP) by injecting the "email address" as a login hint on behalf of the user. This improves the user experience because it bypasses the Azure login page and avoids making the user type the login/email address twice.

Example of use
Your application needs to be accessed by both "domain users" and "federated users". Your application is protected by the F5 APM with a "Login Page" that asks for the user's "email address". Based on the "email address" value you determine the domain:
If the user is a "domain user", you authenticate him against the local directory (AD Auth, LDAP Auth or ...).
If the user is a "federated user" (such as xxx@gmail.com), you send him to the Azure IDP that manages all federated access.

This snippet is particularly interesting for the "federated user" scenario because:
Without this code, a "federated user" needs to type his "login" twice: first on the "F5 Login Page" and a second time on the "Azure Login Page".
With this code, a "federated user" needs to type his "login" only on the F5 Login Page.

How to use this snippet:
1. Go to "Access > Federation > SAML Service Provider > External IDP Connectors" and edit the "External IdP Connectors" object that matches the Azure IDP app.
2. On the "Single Sign On Service Settings", add the string "?login_hint=" at the end of the "Single Sign On Service URL". The string "?login_hint=" is added here only so that the iRule can uniquely identify it later and replace it.
3. Finally, apply the iRule below on the VS that has the Access Policy enabled, to which the SAML SP role is attributed, and which is bound to the Azure IDP application. The iRule will simply catch the "Single Sign On Service URL" and replace it with "?login_hint=xxxx@gmail.com".

Code :

when CLIENT_ACCEPTED {
    ACCESS::restrict_irule_events disable
}
when HTTP_RESPONSE_RELEASE {
    if { [string tolower [HTTP::header value "Location"]] contains "/saml2/?login_hint="} {
        set user_login [ACCESS::session data get "session.logon.last.mail"]
        #log local0. "Before adding the hint [HTTP::header value "Location"]"
        set locationWithoutHint "?login_hint="
        set locationWithHint "?login_hint=$user_login"
        HTTP::header replace Location [string map -nocase "${locationWithoutHint} ${locationWithHint}" [HTTP::header Location]]
        #log local0. "After adding the hint [HTTP::header value "Location"]"
    }
}

Tested this on version: No Version Found
Convert curl command to BIG-IP Monitor Send String

Problem this snippet solves:
Convert curl commands into an HTTP or HTTPS monitor Send String.

How to use this snippet:
Save this code into a Python script and run it as a replacement for curl.
Related Article: How to create custom HTTP monitors with Postman, curl, and Python

An example would be:

% python curl_to_send_string.py -X POST -H "Host: api.example.com" -H "User-Agent: Custom BIG-IP Monitor" -H "Accept-Encoding: identity" -H "Connection: Close" -H "Content-Type: application/json" -d '{ "hello": "world" } ' "http://10.1.10.135:8080/post?show_env=1"

SEND STRING:
POST /post?show_env=1 HTTP/1.1\r\nHost: api.example.com\r\nUser-Agent: Custom BIG-IP Monitor\r\nAccept-Encoding: identity\r\nConnection: Close\r\nContent-Type: application/json\r\n\r\n{\n \"hello\":\n \"world\"\n}\n

Code :

Python 2.x

import getopt
import sys
import urllib

optlist, args = getopt.getopt(sys.argv[1:], 'X:H:d:')
flat_optlist = dict(optlist)
method = flat_optlist.get('-X','GET')
(host,uri) = urllib.splithost(urllib.splittype(args[0])[1])
protocol = 'HTTP/1.1'
headers = ["%s %s %s" %(method, uri, protocol)]
headers.extend([h[1] for h in optlist if h[0] == '-H'])
if not filter(lambda x: 'host:' in x.lower(),headers):
    headers.insert(1,'Host: %s' %(host))
send_string = "\\r\\n".join(headers)
send_string += "\\r\\n\\r\\n"
if '-d' in flat_optlist:
    send_string += flat_optlist['-d'].replace('\n','\\n')
send_string = send_string.replace("\"", "\\\"")
print "SEND STRING:"
print send_string

Python 3.x

import getopt
import sys
from urllib.parse import splittype, urlparse

optlist, args = getopt.getopt(sys.argv[1:], 'X:H:d:')
flat_optlist = dict(optlist)
method = flat_optlist.get('-X','GET')
parts = urlparse(splittype(args[0])[1])
(host,uri) = (parts.hostname,parts.path)
protocol = 'HTTP/1.1'
headers = ["%s %s %s" %(method, uri, protocol)]
headers.extend([h[1] for h in optlist if h[0] == '-H'])
# any() instead of filter(): in Python 3, filter() returns a lazy iterator that is always truthy
if not any('host:' in x.lower() for x in headers):
    headers.insert(1,'Host: %s' %(host))
send_string = "\\r\\n".join(headers)
send_string += "\\r\\n\\r\\n"
if '-d' in flat_optlist:
    send_string += flat_optlist['-d'].replace('\n','\\n')
send_string = send_string.replace("\"", "\\\"")
print("SEND STRING:")
print(send_string)

Tested this on version: 11.5
Google Analytics script injection

Problem this snippet solves:
Add the Google Analytics script to the HTML content of the HTTP response. This also works for other analytics providers such as Piwik.

How to use this snippet:

Installation

Files
The code below has to be imported as an ifile. By default, you must name this ifile google.js, but you can change it in the iRule if required.

Google Analytics code :

<!-- Google Analytics -->
<script>
window.ga=window.ga||function(){(ga.q=ga.q||[]).push(arguments)};ga.l=+new Date;
ga('create', '$static::tracking_id', 'auto');
ga('send', 'pageview');
</script>
<script async src='https://www.google-analytics.com/analytics.js'></script>
<!-- End Google Analytics -->

Piwik javascript code :

<!-- Piwik -->
<script type="text/javascript">
  var _paq = _paq || [];
  _paq.push(['trackPageView']);
  _paq.push(['enableLinkTracking']);
  (function() {
    var u="//$static::piwik_url/";
    _paq.push(['setTrackerUrl', u+'piwik.php']);
    _paq.push(['setSiteId', {$static::siteid}]);
    var d=document, g=d.createElement('script'), s=d.getElementsByTagName('script')[0];
    g.type='text/javascript'; g.async=true; g.defer=true; g.src=u+'piwik.js'; s.parentNode.insertBefore(g,s);
  })();
</script>
<!-- End Piwik Code -->

irule
You need to install the iRule on your Virtual Server.

Variables

set static::tracking_id "UA-XXXXX-Y" # replace the Google Tracking ID by your own
set static::siteid "UA-XXXXX-Y" # replace the Piwik Site ID by your own
set static::piwik_url "https://www.mypiwik.com/piwik/piwik" # replace the Piwik URL by your own

Features

Version 1.0
Insert Google Analytics JS code within the HTML response
Support for Piwik JS insertion
Manage multiple TrackingIDs by hostname (see the "Multiple hostname and TrackingID" section)

Backlog
Add logging

External links
Github : https://github.com/e-XpertSolutions/f5

BONUS : Multiple hostname and TrackingID

Prerequisite
You need to add a string-based data group named HOST_TRACKING_MAPPING.

ltm data-group internal HOST_TRACKING_MAPPING {
    records {
        blog.e-xpertsolutions.com {
            data UA-XXXXX-Z
        }
        www.e-xpertsolutions.com {
            data UA-XXXXX-Y
        }
    }
    type string
}

The google.js ifile needs to be replaced by the following example :

<!-- Google Analytics -->
<script>
window.ga=window.ga||function(){(ga.q=ga.q||[]).push(arguments)};ga.l=+new Date;
ga('create', '$tracking_id', 'auto');
ga('send', 'pageview');
</script>
<script async src='https://www.google-analytics.com/analytics.js'></script>
<!-- End Google Analytics -->

Irule

when RULE_INIT {
    set static::default_trackingid "UA-XXXXX-Y"
}
when HTTP_REQUEST {
    HTTP::header remove "Accept-Encoding"
    set host [HTTP::host]
}
when HTTP_RESPONSE {
    if { [HTTP::header Content-Type] contains "text/html" } {
        if { [HTTP::header exists "Content-Length"] } {
            set content_length [HTTP::header "Content-Length"]
        } else {
            set content_length 1000000
        }
        if { $content_length > 0 } {
            HTTP::collect $content_length
        }
    }
}
when HTTP_RESPONSE_DATA {
    set search "</head>"
    set tracking_id [class match -value -- $host equals HOST_TRACKING_MAPPING]
    if { $tracking_id eq "" } {
        set tracking_id $static::default_trackingid
    }
    HTTP::payload replace 0 $content_length [string map [list $search "[subst -nocommands -nobackslashes [ifile get google.js]]</head>"] [HTTP::payload]]
    HTTP::release
}

Code :

when RULE_INIT {
    set static::tracking_id "UA-XXXXX-Y"
    set static::siteid "XXXXX"
    set static::piwik_url "https://www.piwik.url/piwik/piwik"
}
when HTTP_REQUEST {
    HTTP::header remove "Accept-Encoding"
}
when HTTP_RESPONSE {
    if { [HTTP::header Content-Type] contains "text/html" } {
        if { [HTTP::header exists "Content-Length"] } {
            set content_length [HTTP::header "Content-Length"]
        } else {
            set content_length 1000000
        }
        if { $content_length > 0 } {
            HTTP::collect $content_length
        }
    }
}
when HTTP_RESPONSE_DATA {
    set search "</head>"
    HTTP::payload replace 0 $content_length [string map [list $search "[subst -nocommands -nobackslashes [ifile get google.js]]</head>"] [HTTP::payload]]
    HTTP::release
}

Tested this on version: 11.5
Dynamic DNS registration of APM SSL VPN Client to AWS route53

Problem this snippet solves:
This iRules LX code dynamically registers the IP address a client obtains from APM in an AWS Route 53 domain when the client initiates a VPN tunnel.

How to use this snippet:
Import the attached tgz (a zip will need to be re-packaged as a tgz) into a new iRules LX workspace, create an iRules LX plugin, and attach the editRecord iRule to your virtual server. You can also manually create a new workspace and add the iRules LX iRule and extension with the code below. You will also have to install the aws-sdk module using npm. Eric Flores has a good write-up on how to manage npm: https://devcentral.f5.com/articles/getting-started-with-irules-lx-part-4-npm-best-practices-20426.

iRule Code:

when CLIENT_ACCEPTED {
    ACCESS::restrict_irule_events disable
}
when HTTP_REQUEST {
    if { [ACCESS::policy result] eq "allow" && [HTTP::uri] starts_with "/myvpn?sess="} {
        after 5000 {
            set apmSessionId [ACCESS::session data get session.user.sessionid]
            set apmProfileName [ACCESS::session data get session.access.profile]
            set apmPart [ACCESS::session data get session.partition_id]
            set DNSName "<your zone with trailing dot>"
            set action "UPSERT"
            set name "<your hostname>"
            if {not ($name ends_with ".[string trimright ${DNSName} {.}]")}{
                log -noname local1.err "${apmProfileName}:${apmPart}:${apmSessionId}: proper hostname was not provided ($name): 01490000: A Record will not be updated for this session."
                event disable all
                return
            }
            set TTL 1200
            set ip [ACCESS::session data get "session.assigned.clientip"]
            if {[catch {IP::addr $ip mask 255.255.255.255}]}{
                log -noname local1.err "${apmProfileName}:${apmPart}:${apmSessionId}: proper IP addressed was not assigned ($ip): 01490000: A Record will not be updated for this session."
                event disable all
                return
            }
            set RPC_HANDLE [ILX::init route53]
            if {[catch {set rpc_response [ILX::call $RPC_HANDLE route53_nodejs $DNSName $action $name $TTL $ip]}]}{
                set rpc_response "Check LTM log for execution errors from sdmd. A Record may not have been updated for this session."
            }
            log -noname local1. "${apmProfileName}:${apmPart}:${apmSessionId}: Client connected: 01490000: Params sent = $DNSName $action $name $TTL $ip\r\nRPC response = $rpc_response"
        }
    }
}
when ACCESS_SESSION_CLOSED {
    set apmSessionId [ACCESS::session data get session.user.sessionid]
    set apmProfileName [ACCESS::session data get session.access.profile]
    set apmPart [ACCESS::session data get session.partition_id]
    set DNSName "<your zone with trailing dot>"
    set action "DELETE"
    set name "<your hostname>"
    if {not ($name ends_with ".[string trimright ${DNSName} {.}]")}{
        log -noname local1.err "${apmProfileName}:${apmPart}:${apmSessionId}: proper hostname was not provided ($name): 01490000: A Record will not be removed for this session."
        event disable all
        return
    }
    set TTL 1200
    set ip [ACCESS::session data get "session.assigned.clientip"]
    if {[catch {IP::addr $ip mask 255.255.255.255}]}{
        log -noname local1.err "${apmProfileName}:${apmPart}:${apmSessionId}: proper IP addressed was not assigned ($ip): 01490000: A Record will not be removed for this session."
        event disable all
        return
    }
    set RPC_HANDLE [ILX::init route53]
    if {[catch {set rpc_response [ILX::call $RPC_HANDLE route53_nodejs $DNSName $action $name $TTL $ip]}]}{
        set rpc_response "Check LTM log for execution errors from sdmd. A Record may not have been removed for this session."
    }
    log -noname local1. "${apmProfileName}:${apmPart}:${apmSessionId}: Client disconnected: 01490000: Params sent = $DNSName $action $name $TTL $ip\r\nRPC response = $rpc_response"
}

node.js code:

var AWS = require('aws-sdk');
var f5 = require('f5-nodejs');

/* AWS Config */
//AWS.config.update({accessKeyId: "<your access key>", secretAccessKey: "<your secret>"});

/* get the Route53 library */
var route53 = new AWS.Route53();

/* Create a new rpc server for listening to TCL iRule calls. */
var ilx = new f5.ILXServer();

/* Start listening for ILX::call and ILX::notify events. */
ilx.listen();

/* Add a method and expect DNSName, action, name, TTL, and ip parameters and reply with response */
ilx.addMethod('route53_nodejs', function(req, response) {
    var DNSName = req.params()[0];
    var params = { DNSName: DNSName };
    var action = req.params()[1];
    var name = req.params()[2];
    var TTL = req.params()[3];
    var ip = req.params()[4];
    route53.listHostedZonesByName(params, function(err,data) {
        if (err) {
            //console.log(err, err.stack);
            response.reply(err.toString());
        } else if (data.HostedZones[0].Name !== params.DNSName) {
            response.reply(params.DNSName + " is not a zone defined in Route53");
        } else {
            var zoneId = data.HostedZones[0].Id;
            var recParams = {
                "HostedZoneId": zoneId,
                "ChangeBatch": {
                    "Changes": [
                        {
                            "Action": action,
                            "ResourceRecordSet": {
                                "Name": name,
                                "Type": "A",
                                "TTL": TTL,
                                "ResourceRecords": [
                                    { "Value": ip }
                                ]
                            }
                        }
                    ]
                }
            };
            /* edit records using aws sdk */
            route53.changeResourceRecordSets(recParams, function(err,data) {
                if (err) {response.reply(err.toString());}
                else if (data.ChangeInfo.Status === "PENDING") {response.reply("Record is being updated");}
                else {response.reply(data);}
            });
        }
    });
});

This is also on my github, https://github.com/bepsoccer/iRulesLX

Code : 75409

Tested this on version: 12.1
Fun with x509: extract RSA Public from SSL cert for use with iRule CRYPTO commands

Problem this snippet solves:
iRule CRYPTO commands do not allow the use of SSL/TLS certificates for encryption and decryption operations. This snippet provides a simple way to extract the RSA public key from an x509 certificate by parsing its ASN.1 structure. The output key can then be given as [-key] input to the CRYPTO::[encrypt|decrypt] commands in combination with the rsa-pub and rsa-priv algorithms [-alg]. The example only encrypts a test string with the client's SSL certificate to demonstrate the functionality, but you have unlimited options here. For instance, you are not limited to using your client's certificate (e.g. [SSL::cert 0]): any PEM (base64) or DER representation of a public x509 cert will also work. When using the PEM format, just don't forget to perform a b64decode before feeding it to the getRSAPubFromX509 proc.

Credits to Kevin Stewart for the original idea and code for performing Extended X509 Certificate Parsing.

How to use this snippet:
1. Create a Virtual Server.
2. Select "http" under "HTTP Profile".
3. Create an SSL Profile (Client).
   3.1 Configure the "Certificate Key Chain" with a proper or self-signed SSL key pair.
   3.2 Select "require" under "Client Certificate".
4. Assign the SSL Profile to the Virtual Server.
5. Create an iRule out of the snippet below and assign it to the Virtual Server.
6. Generate traffic by using an appropriate client SSL key pair.
7. JSON-parse the response.
8. Test decryption with openssl: copy the "encryptedMsg" JSON value from the response payload (e.g. using JSON.parse()) into a file called enc.txt, then decode it from base64 into the encrypted binary string and use the client's private key to decrypt.

base64 -d enc.txt > encbin.txt
openssl rsautl -inkey key.pem -decrypt -in encbin.txt

Code :

when HTTP_REQUEST {
    set cert [SSL::cert 0]
    set rsaFromX509 [call getRSAPubFromX509 $cert]
    log local0. "PEM encoded RSA Public: $rsaFromX509"
    set testString "test string to encrypt"
    set encBytes [CRYPTO::encrypt -alg rsa-pub -key $rsaFromX509 $testString]
    set encBytesPrintable [b64encode $encBytes]
    log local0. "Encrypted b64encoded string: $encBytesPrintable"
    set jsonResponse "{ \"encryptedMsg\": \"$encBytesPrintable\" }"
    HTTP::respond 200 content $jsonResponse
}

proc getRSAPubFromX509 { bincert } {
    if { [catch {
        # rsaEncryption OID=1.2.840.113549.1.1.1 - HEX=\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01
        set offset [string first \x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01 $bincert]
        # Shift the offset before the ASN.1 sequence length
        set offset [expr { $offset - 4 }]
        binary scan "[string index $bincert $offset][string index $bincert [expr $offset + 1]]" S seqlen
        # Convert the ASN.1 length to unsigned integer
        set seqlen [expr {$seqlen & 0xffff}]
        # Set a pointer to the beginning of the sequence
        set startSeq [expr $offset - 2]
        # Set a pointer to the end of the sequence
        set endSeq [expr $startSeq + 3 + $seqlen]
        set rsaEncryption [string range $bincert $startSeq $endSeq]
    } error] } {
        set rsaEncryption 0
    }
    if { $rsaEncryption == 0 } {
        return 0
    }
    binary scan $rsaEncryption a* rsaEncryption
    set encodedKey [b64encode $rsaEncryption]
    set formattedKey "-----BEGIN PUBLIC KEY-----\x0A"
    for {set i 0} {$i < [string length $encodedKey]} {incr i 64} {
        set formattedKey "${formattedKey}\x0A[string range $encodedKey $i [expr {$i + 63}]]"
    }
    set formattedKey "${formattedKey}\x0A-----END PUBLIC KEY-----\x0A"
    return $formattedKey
}

Tested this on version: 12.1
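As an alternative to the openssl rsautl command above, the decryption test can also be done in Python. The sketch below assumes the third-party cryptography package is installed and that the ciphertext uses PKCS#1 v1.5 padding, which matches the openssl rsautl default used in this article.

# Hedged sketch: decrypt the base64 "encryptedMsg" value with the client's
# private key, as an alternative to the openssl rsautl command above.
# Assumes the third-party "cryptography" package and PKCS#1 v1.5 padding.
import base64
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import padding

with open("key.pem", "rb") as f:
    private_key = serialization.load_pem_private_key(f.read(), password=None)

with open("enc.txt", "rb") as f:
    ciphertext = base64.b64decode(f.read())

print(private_key.decrypt(ciphertext, padding.PKCS1v15()).decode())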
Microsoft Office 365 IP intelligence

Problem this snippet solves:
This snippet adds Microsoft Office 365 IP intelligence. It parses the O365IPAddresses.xml file that Microsoft supplies to help identify Microsoft URLs and IP address ranges. With this snippet you can check whether an IP address belongs to Microsoft and let the BIG-IP decide to allow or deny the traffic. For more info see: Office 365 URLs and IP address ranges.

DISCLAIMER: This code has never been tested outside a lab environment. Consider this snippet to be a proof-of-concept.

How to use this snippet:

Prepare BIG-IP
Create LX Workspace: office365_ipi
Add iRule: office365_ipi_irule
Add Extension: office365_ipi_extension
Add LX Plugin: office365_ipi_plugin -> From Workspace: office365_ipi

Install node.js modules
# cd /var/ilx/workspaces/Common/office365_ipi/extensions/office365_ipi_extension
# npm install xml2js https repeat lokijs ip-range-check --save

office365_ipi_irule

###
### Name   : office365_ipi_irule
### Author : Niels van Sluis, (niels@van-sluis.nl)
### Version: 0.1
### Date   : 2017-07-25
###
when RULE_INIT {
    # set table timeout to 1 hour
    set static::office365_ipi_timeout 3600
    set static::office365_ipi_lifetime 3600
}
when CLIENT_ACCEPTED {
    # Valid product names are:
    # o365, LYO, Planner, Teams, ProPlus, OneNote, Yammer,
    # EXO, Identity, Office365Video, WAC, SPO, RCA, Sway,
    # EX-Fed, OfficeMobile, CRLs, OfficeiPad, EOP
    #
    # Use 'any' to match an IP address in all products.
    set productName "o365"
    set ipAddress [IP::client_addr]
    set key $productName:$ipAddress
    set verdict [table lookup -notouch $key]
    if { $verdict eq "" } {
        log local0. "Need to retrieve verdict via iruleslx"
        set rpc_handle [ILX::init office365_ipi_plugin office365_ipi_extension]
        if {[catch {ILX::call $rpc_handle checkProductIP $productName $ipAddress} verdict]} {
            log local0.error "Client - [IP::client_addr], ILX failure: $verdict"
            return
        }
        log local0. "The verdict for $ipAddress: $verdict"
        # cache verdict
        table set $key $verdict $static::office365_ipi_timeout $static::office365_ipi_lifetime
    }
    # verdict is 0 (reject) or 1 (allow)
    if { !($verdict) } {
        log local0. "rejected IP address: $ipAddress"
        reject
    }
}

Code :

/**
*** Name   : office365_ipi_extension
*** Author : Niels van Sluis,
*** Version: 0.1
*** Date   : 2017-07-25
**/

'use strict';

// Import the f5-nodejs module and others.
var f5 = require('f5-nodejs');
var parseString = require('xml2js').parseString;
var https = require('https');
var repeat = require('repeat');
var loki = require('lokijs');
var ipRangeCheck = require('ip-range-check');

// Create (in-memory) LokiJS database.
var db = new loki('db.json');
var products = db.addCollection('products');

// Create a new rpc server for listening to TCL iRule calls.
var ilx = new f5.ILXServer();

// URL to Microsoft Office 365 XML file.
var url = "https://support.content.office.net/en-us/static/O365IPAddresses.xml";

// Function to get XML file and convert to JSON object.
function xmlToJson(url, callback) {
    var req = https.get(url, function(res) {
        var xml = '';
        res.on('data', function(chunk) {
            xml += chunk;
        });
        res.on('error', function(e) {
            callback(e, null);
        });
        res.on('timeout', function(e) {
            callback(e, null);
        });
        res.on('end', function() {
            if(res.statusCode == 200) {
                parseString(xml, function(err, result) {
                    callback(null, result);
                });
            }
        });
    });
}

// Function that uses the data in the XML file to create a database
// that can be used to perform IP address and URL lookups.
function getOffice365() {
    xmlToJson(url, function(err,data) {
        if(err) {
            console.log("Error: xmlToJson failed");
            return;
        }
        // if xml happens to be empty due to an error, do not continue.
        if(!data) {
            console.log("Error: No data in XML file");
            return;
        }
        // Get date updated: 7/13/2017
        var productsUpdated = data.products.$.updated;
        // Only update if version changed
        var versionCheck = products.findObject({'name':'any'});
        if(versionCheck && versionCheck.version == productsUpdated) {
            console.log("Info: product version didn't changed; No update required");
            return;
        }
        var allIpAdresses = [];
        var allUrls = [];
        data.products.product.forEach(function (product) {
            var ipAddresses = [];
            var urls = [];
            product.addresslist.forEach(function (addresslist) {
                if(addresslist.$.type == "IPv4" || addresslist.$.type == "IPv6") {
                    if ( typeof addresslist.address !== 'undefined' && addresslist.address ) {
                        addresslist.address.forEach(function (address) {
                            ipAddresses.push(address);
                            allIpAdresses.push(address);
                        });
                    }
                } else if(addresslist.$.type == "URL") {
                    if ( typeof addresslist.address !== 'undefined' && addresslist.address ) {
                        addresslist.address.forEach(function (address) {
                            urls.push(address);
                            allUrls.push(address);
                        });
                    }
                }
            });
            var p = products.findObject({'name':product.$.name.toLowerCase()});
            if(!p) {
                products.insert({
                    name: product.$.name.toLowerCase(),
                    ipAddresses: ipAddresses,
                    urls: urls,
                    version: productsUpdated
                });
            } else {
                p.ipAddresses = ipAddresses;
                p.urls = urls;
                p.version = productsUpdated;
            }
        });
        var p = products.findObject({'name':'any'});
        if(!p) {
            products.insert({
                name: 'any',
                ipAddresses: allIpAdresses,
                urls: allUrls,
                version: productsUpdated
            });
        } else {
            p.ipAddresses = Array.from(new Set(allIpAdresses));
            p.urls = Array.from(new Set(allUrls));
            p.version = productsUpdated;
        }
        console.log("Info: update finished; " + products.data.length + " product records in database.");
    });
}

// refresh Microsoft Office 365 XML every hour
repeat(getOffice365).every(1, 'hour').start.now();

// ILX::call to check if an IP address is part of Office365
ilx.addMethod('checkProductIP', function(objArgs, objResponse) {
    var productName = objArgs.params()[0];
    var ipAddress = objArgs.params()[1];

    // fail-open = true, fail-close = false
    var verdict = true;

    var req = products.findObject( { 'name':productName.toLowerCase()});
    if(req) {
        verdict = ipRangeCheck(ipAddress, req.ipAddresses);
    }

    // return the verdict to the Tcl iRule
    objResponse.reply(verdict);
});

// Start listening for ILX::call and ILX::notify events.
ilx.listen();

Tested this on version: 13.0
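As a side note, the verdict returned by checkProductIP is essentially a CIDR membership test against the published address lists. A hedged, standard-library-only Python illustration of the same check is below; it is not part of the snippet, and the example prefixes are placeholders rather than a current Microsoft list.

# Hedged illustration of the membership test performed by checkProductIP,
# using only the Python standard library. The prefix list is a placeholder;
# in the snippet above it is built from Microsoft's published address lists.
import ipaddress

def ip_in_product_ranges(ip, prefixes):
    addr = ipaddress.ip_address(ip)
    for prefix in prefixes:
        if addr in ipaddress.ip_network(prefix, strict=False):
            return True
    return False

# Example placeholder prefixes; fail-open/fail-close handling is left out.
print(ip_in_product_ranges("40.96.0.10", ["40.96.0.0/13", "2603:1006::/40"]))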