How to get an F5 BIG-IP VE Developer Lab License
(applies to BIG-IP TMOS Edition)

To assist operational teams in improving their development for the BIG-IP platform, F5 offers a low-cost developer lab license. This license can be purchased from your authorized F5 vendor. If you do not have an F5 vendor and you are in either Canada or the US, you can purchase a lab license online:

CDW BIG-IP Virtual Edition Lab License
CDW Canada BIG-IP Virtual Edition Lab License

Once completed, the order is sent to F5 for fulfillment and your license will be delivered shortly after via e-mail. F5 is investigating ways to improve this process.

To download the BIG-IP Virtual Edition, log into my.f5.com (separate login from DevCentral) and navigate to the Downloads card under the Support Resources section of the page. Select BIG-IP from the product group family and then the current version of BIG-IP. You will be presented with a list of options; at the bottom, select the Virtual-Edition option with the following description:

For VMware Fusion or Workstation or ESX/i: Image fileset for VMware ESX/i Server
For Microsoft Hyper-V: Image fileset for Microsoft Hyper-V
For KVM RHEL/CentOS: Image fileset for KVM Red Hat Enterprise Linux/CentOS

Note: There are also 1 Slot versions of the above images, for when a 2nd boot partition is not needed for in-place upgrades. These images include _1SLOT- in the image name instead of ALL.

The guides below will help get you started with F5 BIG-IP Virtual Edition development on VMware Fusion, AWS, Azure, VMware, or Microsoft Hyper-V. These guides follow standard practices for installing in production environments; performance recommendations can be relaxed for the lower-use, non-critical needs of development or lab environments. Similar to driving a tank, use your best judgement.
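Whichever image you download, it is worth verifying it against the .md5 file F5 publishes alongside it before booting. Below is a self-contained illustration of the check; the demo file stands in for a real image, and the filenames are placeholders (real VE images look like BIGIP-&lt;version&gt;.ALL-vmware.ova):

```shell
# Stand-in for a downloaded VE image; substitute the real .ova/.qcow2 image
# and the .md5 file provided on the F5 downloads page.
printf 'pretend VE image bytes' > BIGIP-demo.ALL-vmware.ova

# F5 publishes a matching .md5 next to each image; here we generate one
# for the demo file so the snippet runs on its own.
md5sum BIGIP-demo.ALL-vmware.ova > BIGIP-demo.ALL-vmware.ova.md5

# The actual verification step: exits 0 and prints "<filename>: OK"
# when the download is intact.
md5sum -c BIGIP-demo.ALL-vmware.ova.md5
```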
Deploying F5 BIG-IP Virtual Edition on VMware Fusion
Deploying F5 BIG-IP in Microsoft Azure for Developers
Deploying F5 BIG-IP in AWS for Developers
Deploying F5 BIG-IP in Windows Server Hyper-V for Developers
Deploying F5 BIG-IP in VMware vCloud Director and ESX for Developers

Note: F5 Support maintains authoritative Azure, AWS, Hyper-V, and ESX/vCloud installation documentation. VMware Fusion is not an official F5-supported hypervisor, so DevCentral publishes the Fusion guide with the help of our Field Systems Engineering teams.

What Is BIG-IP?
tl;dr - BIG-IP is a collection of hardware platforms and software solutions providing services focused on security, reliability, and performance.

F5's BIG-IP is a family of products covering software and hardware designed around application availability, access control, and security solutions. That's right, the BIG-IP name is interchangeable between F5's software and hardware application delivery controller and security products. This is different from BIG-IQ, a suite of management and orchestration tools, and F5 Silverline, F5's SaaS platform. When people refer to BIG-IP, they can mean a single software module in BIG-IP's software family or a hardware chassis sitting in your datacenter. This can cause a lot of confusion when people say they have a question about "BIG-IP", but we'll break it down here to reduce that confusion.

BIG-IP Software

BIG-IP software products are licensed modules that run on top of F5's Traffic Management Operating System® (TMOS). This custom, event-driven operating system is designed specifically to inspect network and application traffic and make real-time decisions based on the configurations you provide. The BIG-IP software can run on hardware or in virtualized environments. Virtualized systems provide BIG-IP software functionality where hardware implementations are unavailable, including public clouds and various managed infrastructures where rack space is a critical commodity.

BIG-IP Primary Software Modules

BIG-IP Local Traffic Manager (LTM) - Central to F5's full traffic proxy functionality, LTM provides the platform for creating virtual servers and performance, service, protocol, authentication, and security profiles to define and shape your application traffic. Most other modules in the BIG-IP family use LTM as a foundation for enhanced services.
BIG-IP DNS - Formerly Global Traffic Manager, BIG-IP DNS provides security and load balancing features similar to what LTM offers, but at a global/multi-site scale. BIG-IP DNS offers services to distribute and secure the DNS traffic advertising your application namespaces.

BIG-IP Access Policy Manager (APM) - Provides federation, SSO, application access policies, and secure web tunneling. Allow granular access to your various applications and virtualized desktop environments, or just go full VPN tunnel.

Secure Web Gateway Services (SWG) - Paired with APM, SWG enables access policy control for internet usage. You can allow, block, verify, and log traffic with APM's access policies, allowing flexibility around your acceptable internet and public web application use. You know... contractors and interns shouldn't use Facebook, but you don't want to be the one explaining why the CFO can't access their cat pics.

BIG-IP Application Security Manager (ASM) - This is F5's web application firewall (WAF) solution. Traditional firewalls and layer 3 protection don't understand the complexities of many web applications. ASM allows you to tailor acceptable and expected application behavior on a per-application basis. Zero day, DoS, and click fraud attacks all rely on traditional security devices' inability to protect unique application needs; ASM fills the gap between traditional firewalls and tailored, granular application protection.

BIG-IP Advanced Firewall Manager (AFM) - AFM is designed to reduce the hardware and extra hops required when ADCs are paired with traditional firewalls. Operating at L3/L4, AFM helps protect traffic destined for your data center. Paired with ASM, you can implement protection services at L3-L7 for a full ADC and security solution in one box or virtual environment.

BIG-IP Hardware

BIG-IP hardware offers several types of purpose-built custom solutions, all designed in-house by our fantastic engineers; no white boxes here.
BIG-IP hardware is offered via series releases, each offering improvements in performance and features determined by customer requirements. These may include increased port capacity, traffic throughput, CPU performance, FPGA feature functionality for hardware-based scalability, and virtualization capabilities. There are two primary variations of BIG-IP hardware: single-chassis designs and VIPRION modular designs. Each offers unique advantages for internal and collocated infrastructures. Updates in processor architecture, FPGA, and interface performance gains are common, so we recommend referring to F5's hardware page for more information.

Get Started with BIG-IP and BIG-IQ Virtual Edition (VE) Trial
Welcome to the BIG-IP and BIG-IQ trials page! This will be your jumping-off point for setting up a trial version of BIG-IP VE or BIG-IQ VE in your environment. As you can see below, everything you’ll need is included and organized by operating environment — namely by public/private cloud or virtualization platform.

To get started with your trial, use the software and documentation found in the links below. Upon requesting a trial, you should have received an email containing your license keys. Please bear in mind that it can take up to 30 minutes to receive your licenses. Don't have a trial license? Get one here. Or if you're ready to buy, contact us. Looking for other resources like tools or the compatibility matrix?

BIG-IP VE and BIG-IQ VE

When you sign up for the BIG-IP and BIG-IQ VE trial, you receive a set of license keys. Each key corresponds to a component listed below:

BIG-IQ Centralized Management (CM) — Manages the lifecycle of BIG-IP instances including analytics, licenses, configurations, and auto-scaling policies
BIG-IQ Data Collection Device (DCD) — Aggregates logs and analytics of traffic and BIG-IP instances to be used by BIG-IQ
BIG-IP Local Traffic Manager (LTM), Access (APM), Advanced WAF (ASM), Network Firewall (AFM), DNS — Keep your apps up and running with BIG-IP application delivery controllers. BIG-IP Local Traffic Manager (LTM) and BIG-IP DNS handle your application traffic and secure your infrastructure. You’ll get built-in security, traffic management, and performance application services, whether your applications live in a private data center or in the cloud.
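Once a trial VE is licensed, the Application Services 3 (AS3) extension is a quick way to stand up a first application over REST. Below is a sketch of a minimal declaration and the call that would deploy it; the tenant, names, addresses, and credentials are all placeholders, the curl step is commented out since it needs a reachable BIG-IP, and the AS3 schema reference should be treated as authoritative:

```shell
# Minimal AS3 declaration: one tenant, one HTTP virtual server, one pool.
# All names and IPs below are placeholders for illustration only.
cat > as3-demo.json <<'EOF'
{
  "class": "ADC",
  "schemaVersion": "3.0.0",
  "id": "ve-trial-demo",
  "demo_tenant": {
    "class": "Tenant",
    "demo_app": {
      "class": "Application",
      "template": "http",
      "serviceMain": {
        "class": "Service_HTTP",
        "virtualAddresses": ["10.0.1.10"],
        "pool": "web_pool"
      },
      "web_pool": {
        "class": "Pool",
        "monitors": ["http"],
        "members": [{
          "servicePort": 80,
          "serverAddresses": ["10.0.2.10", "10.0.2.11"]
        }]
      }
    }
  }
}
EOF

# Deploy by POSTing to the AS3 endpoint (uncomment against a real device):
# curl -sk -u admin:<password> -H "Content-Type: application/json" \
#   -d @as3-demo.json https://<bigip-mgmt-ip>/mgmt/shared/appsvcs/declare
```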
Select the hypervisor or environment where you want to run VE:

AWS
- CFT for single NIC deployment
- CFT for three NIC deployment
- BIG-IP VE images in the AWS Marketplace
- BIG-IQ VE images in the AWS Marketplace
- BIG-IP AWS documentation
- BIG-IP video: Single NIC deploy in AWS
- BIG-IQ AWS documentation
- Setting up and Configuring a BIG-IQ Centralized Management Solution
- BIG-IQ Centralized Management Trial Quick Start

Azure
- Azure Resource Manager (ARM) template for single NIC deployment
- Azure ARM template for three NIC deployment
- BIG-IP VE images in the Azure Marketplace
- BIG-IQ VE images in the Azure Marketplace
- BIG-IQ Centralized Management Trial Quick Start
- BIG-IP VE Azure documentation
- Video: BIG-IP VE Single NIC deploy in Azure
- BIG-IQ VE Azure documentation
- Setting up and Configuring a BIG-IQ Centralized Management Solution

VMware/KVM/Openstack
- Download BIG-IP VE image
- Download BIG-IQ VE image
- BIG-IP VE Setup
- BIG-IQ VE Setup
- Setting up and Configuring a BIG-IQ Centralized Management Solution

Google Cloud
- Google Deployment Manager template for single NIC deployment
- Google Deployment Manager template for three NIC deployment
- BIG-IP VE images in Google Cloud
- Google Cloud Platform documentation
- Video: Single NIC deploy in Google

Other Resources
- AskF5
- Github community (f5devcentral, f5networks)

Tools to automate your deployment
- BIG-IQ Onboarding Tool
- F5 Declarative Onboarding
- F5 Application Services 3 Extension

Other Tools:
- F5 SDK (Python)
- F5 Application Services Templates (FAST)
- F5 Cloud Failover
- F5 Telemetry Streaming

Find out which hypervisor versions are supported with each release of VE:
- BIG-IP Compatibility Matrix
- BIG-IQ Compatibility Matrix

Do you have any comments or questions? Ask here
BIG-IP Geolocation Updates – Part 7

Introduction

Management of geolocation services within the BIG-IP requires updates to the geolocation database so that queried IP addresses are correctly characterized for service delivery and security enforcement. Traditionally managed devices, which are individually logged into and manually configured, can benefit from a bit of automation without having to subscribe to an entire CI/CD pipeline and a change in operational behavior. Additionally, a fully fledged CI/CD pipeline that embraces a full declarative model also needs a strategy for managing and performing the updates. This could be done via BIG-IQ; however, many organizations prefer BIG-IQ to monitor rather than manage their devices, so a different strategy is required. This article series hopes to demonstrate some techniques and code that can work in either a classically managed fleet of devices or a fully automated environment. If you have embraced BIG-IQ fully, this might not be relevant but is hopefully worth a cursory review depending on how you leverage BIG-IQ.

Assumptions and prerequisites

There are a few technology assumptions imposed on the reader that should be mentioned:

- The solution is presented in Python, specifically 3.10.2, although some lower versions could be supported. The "walrus operator" (:=) is used in a few places, which requires version 3.8 or greater; support for earlier versions would require some porting.
- Visual Studio Code was used to create and test all the code. A modest level of expertise would be valuable, but likely not required, of the reader.
- An understanding of BIG-IP is necessary and assumed.
- A cursory knowledge of the F5 Automation Toolchain is necessary, as some of the API calls to the BIG-IP leverage it; however, this is NOT a declarative operation.
- GitHub is used to store the source for this article, and a basic understanding of retrieving code from a GitHub repository would be valuable.

References to the above technologies are provided here:

Python 3.10.2
Visual Studio Code
F5 BIG-IP
F5 Automation and Orchestration
GitHub repository for this article

Lastly, an effort was made to make this code high-quality and resilient. I ran the code base through pylint until it was clean and handled most if not all exceptional cases. However, no formal QA function or load testing was performed other than my own. The code is presented as-is with no guarantees expressed or implied. That being said, it is hoped that this is a robust and usable example, either as a script or slightly modified into a library and imported into the reader's project.

Credits and Acknowledgements

Mark_Menger, for his continued review and support in all things automation-based.
Mark Hermsdorfer, who reviewed some of my initial revisions and showed me the proper way to get http chunking to work. He also has an implementation on github that is referenced in the code base that you should look at.

Article Series

DevCentral places a limit on the size of an article and, having learned from my previous submission, I will try to organize this series a bit more cleanly.
This is an overview of the items covered in each section:

Part 1 - Design and dependencies
- Basic flow of a geolocation update
- The imports list
- The API library dictionary
- The status_code_to_msg dictionary
- Custom Exceptions
- Method enumeration

Part 2 – Send_Request()
- Function - send_request

Part 3 - Functions and Implementation
- Function – get_auth_token
- Function – backup_geo_db
- Function – get_geoip_version

Part 4 - Functions and Implementation Continued
- Function – fix_md5_file

Part 5 - Functions and Implementation Continued
- Function – upload_geolocation_update

Part 6 - Functions and Implementation Conclusion
- Function – install_geolocation_update

Part 7 (This article) - Pulling it together
- Function – compare_versions
- Function – validate_file
- Function – print_usage
- Command Line script

Pulling it together

With the completion of the main functional routines, we are now ready to pull everything together. There are a few additional routines that will simplify the use of the library/script, so we will start there first.

compare_versions()

First up is a simple function that allows us to compare two version strings from our geolocation lookup tool to determine whether an update was successful and the database is in use.

```python
def compare_versions(start, end):
    """
    Helper function to compare two geolocation db versions and output message

    Parameters
    ----------
    start : str
        Beginning version string of geolocation db
    end : str
        Ending version string of geolocation db

    Returns
    -------
    0 on success
    1 on failure
    """
```

The routine takes start and end, both strings, representing the two versions to compare. It returns 0 on success, meaning that the end string represents a later date than the start string; otherwise, a 1 is returned for the other cases.
```python
    print(f"Starting GeoIP Version: {start}\nEnding GeoIP Version: {end}")
    if int(start) < int(end):
        print("GeoIP DB updated!")
        return 0
    print("ERROR GeoIP DB NOT updated!")
    return 1
```

Looking at the body of the function, it first prints the starting and ending versions to the console and then checks whether start is less than end. Notice that both strings are cast to int. If the comparison is true, it prints that the DB was updated and returns 0. Otherwise, it prints that the DB was not updated and returns 1.

validate_file()

```python
def validate_file(path, file):
    """
    Verifies that the file exists and if in the same directory, keeps the
    basename. If it's in a relative or different directory, returns the
    full path resolving links and so on.

    Parameters
    ----------
    path : str
        Argument 0 from sys.argv.. the passed current working directory and exe name
    file : str
        Name of the file to check

    Returns
    -------
    Corrected file with full path

    Raises
    ------
    FileNotFoundError if file doesn't exist
    """
```

The routine accepts a path and a file name as arguments. The path should be the path passed from sys.argv in most cases, although depending on how this is being integrated it may be a working directory. The file name is the name of the file to check. The routine returns the corrected file with its full path; if the file doesn't exist, it raises a FileNotFoundError.
```python
    assert path is not None
    assert file is not None

    # unlikely to raise, but there could be an errno.xx for oddly linked CWDs
    cwd = os.path.dirname(os.path.realpath(path))

    # Verify the zip exists, if it's in the same directory, clean up the path
    if not os.path.exists(file):
        raise FileNotFoundError(f"Unable to find file {file}")

    # If cwd and file are in the same location, just use the basename for the file
    if cwd == os.path.dirname(os.path.realpath(file)):
        retval = os.path.basename(file)
    # otherwise use the full path (and resolve links) to the file
    else:
        retval = os.path.realpath(file)

    return retval
```

Moving on to the body of the function, it first asserts that the path and file arguments are not None. Next, we get the current working directory by running the path through realpath() to deal with any oddly linked directories and keeping only the directory name. Next, we check whether the file exists; notice it doesn't matter where it is. If it cannot be found, we raise a FileNotFoundError and the exception leaves the routine. Next, we perform the same operation on the file as we did on the current working directory, saved in cwd, and compare them. If they are the same, the file resides in the current working directory, and we return just the base name of the file (the filename with no path). Otherwise, we return the file's real path, which handles relative paths and avoids odd issues when trying to access the file. We then return the value.

print_usage()

```python
def print_usage():
    """
    Prints out the correct way to call the script
    """
```

This routine doesn't take any arguments and is only meant to centralize printing the usage for the script.
```python
    print("Usage: geolocation-update.py <hostname/ip> <credentials> <zip> <md5>")
    print("\t<hostname/ip> is the resolvable address of the F5 device")
    print("\t<credentials> are the username and password formatted as username:password")
    print("\t<zip> is the name, and path if not in the same directory, to the geolocation zip package")
    print("\t<md5> is the name, and path if not in the same directory, to the geolocation zip md5 file")
    print("\nNOTE: You can omit the password and instead put it in an env variable named BIGIP_PASS")
```

There is not much to explain here as it just prints usage to the console. Obviously, depending on how you integrate these routines, you may need to change this appropriately.

Command Line Script

Now we need a way to wrap all this together. Thus far, this code has been presented in a somewhat library-like fashion, although you would need to do some work to make it a proper module. To illustrate how to use it all, we can set up this code to be executed standalone.

```python
###############################################################################
# main() entry point if run from cmdline as script
###############################################################################
if __name__ == "__main__":

    # Disable/suppress warnings about unverified SSL:
    import urllib3
    requests.packages.urllib3.disable_warnings()
    urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
```

For a standalone execution, we check whether __name__ is equivalent to the string "__main__". This is a pythonic "trick" that lets the interpreter figure out whether the module being run is the main program. If it is, it is basically being run as a script. Otherwise, __name__ is set to the module's name and we know it is being imported into another piece of code. We import urllib3, which we need in a moment, and then disable some annoying warnings about potentially connecting to unsafe web sources.
```python
    try:
        if len(sys.argv) < 5:
            raise ValueError

        # Extract cmd line arguments and massage them accordingly
        g_path = sys.argv[0]
        g_bigip = f"https://{sys.argv[1]}"
        g_creds = sys.argv[2]
        g_zip_file = validate_file(g_path, sys.argv[3])
        g_md5_file = validate_file(g_path, sys.argv[4])

        # Handle username/password from creds or environment variable
        if ('BIGIP_PASS' in os.environ) and (os.environ['BIGIP_PASS'] is not None) and (not ":" in g_creds):
            g_username = g_creds
            g_password = os.environ['BIGIP_PASS']
        else:
            creds = g_creds.split(':', 1)
            g_username = creds[0]
            g_password = creds[1]

    except ValueError:
        print("Wrong number of arguments.")
        print_usage()
        sys.exit(-1)
    except FileNotFoundError as e:
        print(f"{e}. Exiting..")
        print_usage()
        sys.exit(-1)
```

In the first part of the script, we want to verify the command line arguments and massage a few things. We do a quick sanity check on the number of arguments passed; if there are fewer than 5, we know we are missing some data needed to run correctly. Next, we extract the command line arguments into some global variables. The variable g_bigip is formatted slightly to save us from adding the protocol later on; some better checking could be performed to ensure it is not already formatted that way. The username, and potentially the password, is put into g_creds, which we will clean up in a moment. The last two, g_zip_file and g_md5_file, hold the respective file names after being processed by validate_file(). Next, we check whether the password for the passed username is in an environment variable. If there is no ":" in the string, meaning the caller did not pass <username>:<password>, and the environment variable is set, then we can set g_username and g_password directly. Otherwise, we extract g_username and g_password from g_creds and move forward. We handle the exceptional cases of too few arguments and of validate_file raising FileNotFoundError, and exit in both cases.
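Because the credential handling and the version comparison have no BIG-IP dependency, they can be exercised standalone, which is handy for unit testing. The condensed sketch below re-declares that logic so the snippet runs on its own; it mirrors the article's code, but the function names here are local to this sketch:

```python
# Condensed, self-contained restatement of two pieces of the script so their
# behavior can be checked without a BIG-IP on hand.

def resolve_creds(creds, environ):
    """Prefer BIGIP_PASS from the environment when creds holds only a
    username; otherwise split on the first colon only, so passwords that
    themselves contain ':' survive intact."""
    if environ.get('BIGIP_PASS') and ':' not in creds:
        return creds, environ['BIGIP_PASS']
    username, password = creds.split(':', 1)
    return username, password

def compare_versions(start, end):
    """Geolocation db versions are date-stamped integer strings, so a
    numeric comparison decides whether the update took effect."""
    return 0 if int(start) < int(end) else 1

print(resolve_creds("admin:p@ss:word", {}))              # ('admin', 'p@ss:word')
print(resolve_creds("admin", {"BIGIP_PASS": "s3cret"}))  # ('admin', 's3cret')
print(compare_versions("20220214", "20220405"))          # 0 -> updated
print(compare_versions("20220405", "20220405"))          # 1 -> not updated
```

With that handling in place, a typical invocation keeps the password out of the process list, e.g. BIGIP_PASS=s3cret python geolocation-update.py 192.0.2.10 admin update.zip update.md5 (all values hypothetical).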
```python
    # Get the access token
    print("Getting access token")
    if (g_token := get_auth_token(g_bigip, g_username, g_password)) is None:
        print("Problem getting access token, exiting")
        sys.exit(-1)

    # Attempt to backup existing db
    print("Backing up existing db")
    backup_geo_db(g_bigip, g_token)

    # Get starting date/version of geolocation db for comparison
    startVersion = get_geoip_version(g_bigip, g_token)

    # Upload geolocation update zip file
    print("Uploading geolocation updates")
    if False is upload_geolocation_update(g_bigip, g_token, g_zip_file, g_md5_file):
        print("Unable to upload zip and/or md5 file. Exiting.")
        sys.exit(-1)

    # Install geolocation update
    print("Installing geolocation updates")
    if False is install_geolocation_update(g_bigip, g_token, g_zip_file):
        print("Unable to install the geolocation updates. Exiting.")
        sys.exit(-1)

    # Get end date/version of geolocation db for comparison
    endVersion = get_geoip_version(g_bigip, g_token)

    sys.exit(compare_versions(startVersion, endVersion))
```

Finally, we construct what could be considered the "main loop" which, because of all the code we have written thus far, is quite pithy. First, we attempt to get an access token, which we will need for authorization going forward. Next, we attempt to back up the db on the BIG-IP. Notice we don't verify that step and simply go forward if it were to fail; a more conservative approach would handle this differently. Next, we get the starting version so we can compare it after we process the update. Then we upload the geolocation update files; if this fails, we catch it and exit with an error. Next, we install the geolocation update and again, if it fails, exit the script with an error. Lastly, we get the version string again and then compare versions as we exit the routine. And that, finally, concludes the project.

Wrap up

This concludes part 7 of the series, and the conclusion of the series overall.
Hopefully, this provides a suitable framework for performing geolocation updates that you can either use as-is or incorporate into your toolset or CI/CD pipeline. It should pass a lint check and was vigorously tested by me, but I would encourage a more rigorous and formal review for production purposes. Hopefully this has provided some insight and ideas for solving geolocation database maintenance in your environment.

You can access the entire series here:
Part 1
Part 2
Part 3
Part 4
Part 5
Part 6
Part 7

HTTPS Monitor: receive string with quotation mark
Hello,

I need to create an HTTPS monitor for a status page of the application. The answer looks like this: portalStatus: "AVAILABLE" (I get this answer from a browser). When I set the receive string to portalStatus:"AVAILABLE", the monitor doesn't match the string. Do I have to use some brackets or backslashes? Could someone help me with this?

Thanks

Transparent Load Balancing in Azure
When deploying a BIG-IP load balancer in Azure, you often need to apply both source and destination address translation to work with the Azure load balancers. The following shows how to reduce the amount of address translation, making it possible for a backend application to see the original client IP address and easier to correlate logs to public IP addresses.

Deployment Scenario

The following example makes use of several Azure and BIG-IP features to reduce the amount of address translation required:

- Use Azure "Floating IP" to preserve the destination IP address to the BIG-IP
- Set SNAT (source network address translation) to None on the BIG-IP to preserve the source IP address
- Make use of Azure "HA Ports" to send the return traffic from the backend application to the correct BIG-IP device

How the pieces fit together

Preserving Destination IP Address

When you use the Azure Load Balancer there is an option to configure a "Floating IP". Enabling this option causes ALB to no longer rewrite the destination IP address to the private address of the BIG-IP.

Preserving the Client IP Address

ALB will forward the connection with the original client IP address. The BIG-IP can be configured to also forward the client IP address by setting SNAT to None. This causes the BIG-IP to still rewrite the destination IP to the pool member (otherwise how would the packet get there?) but preserve the source IP.

Getting the return traffic via ALB

Earlier we did not rewrite the client IP address, which creates a new problem: how do we respond to the original client from the correct source IP address (previously rewritten by the BIG-IP)? In Azure you can configure a route table (or UDR) to point to a private address on an internal Azure Load Balancer (no public IP address). The ALB can then be configured to forward the connection back to the BIG-IP.
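For readers scripting the Azure side, the internal LB rule (HA Ports with Floating IP enabled) and the UDR can be sketched with the Azure CLI roughly as follows. The resource group, LB, probe, and the 10.0.2.10 frontend IP are all placeholders, and flag names should be checked against the az reference for your CLI version:

```shell
# Internal LB rule: protocol All with frontend/backend port 0 enables
# "HA Ports", and --floating-ip true preserves the destination IP so the
# BIG-IP sees the original frontend address.
az network lb rule create \
  --resource-group demo-rg \
  --lb-name internal-lb \
  --name ha-ports-rule \
  --protocol All \
  --frontend-port 0 \
  --backend-port 0 \
  --frontend-ip-name internal-frontend \
  --backend-pool-name bigip-pool \
  --probe-name bigip-active-probe \
  --floating-ip true

# UDR: send the backend subnet's default route to the internal LB frontend,
# which hands the return traffic to the active BIG-IP.
az network route-table route create \
  --resource-group demo-rg \
  --route-table-name backend-rt \
  --name default-via-bigip \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.2.10
```

The probe referenced in the rule is what keeps traffic on a single device: only the active BIG-IP answers it, so the LB never sprays connections across both units.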
This is the Azure equivalent of pointing the default gateway of a backend server to the BIG-IP (a common "two-arm" deployment of BIG-IP on-prem).

Staying Active

A potential issue is that the Azure load balancer will send to all the backends. To keep traffic symmetric (flowing through only a single device), we configure Azure health monitors to watch a health probe on the BIG-IP that only responds on the "active" device.

Unusual Traffic Flow

Taking a look at the final picture of traffic flow, we can see that we have created a meandering path. First stop, the external ALB; next stop, the BIG-IP; on to the backend app; back to an internal ALB; return to the BIG-IP; and all the way back to the client. This "works" because the traffic always knows where to go next.

Should I be doing this?

This solution works well in situations where the backend application MUST see the original client IP address. Otherwise, it's a bit complicated and reduces you to an Active/Standby architecture instead of a more scalable Active/Active deployment. Alternate approaches would be to use an X-Forwarded-For header and/or Proxy Protocol. You can also build this architecture without an Azure load balancer by using the Cloud Failover Extension to move Azure public IPs and route tables via API calls; using the Azure load balancer simply makes this type of topology easier to do. I hope this cleared things up for you.

How to tell nginx to use a forward proxy to reach a specific destination
Hello. I accidentally closed my previous post, so I am recreating this discussion because of the following problem I'm encountering. Here is the situation:

- I have multiple servers which are in a secure network zone
- I have another server where nginx is installed and is used as a reverse proxy
- The nginx server has access to a remote destination (a GitLab server) through a forward proxy (Squid)

So the flow is the following: servers in secure zone --> nginx server as reverse proxy --> Squid server as forward proxy --> an internal GitLab in another network zone.

Is it possible to tell nginx to use the Squid forward proxy to reach the GitLab server, please? For the moment, I have this configuration:

```nginx
server {
    listen 443 ssl;
    server_name <ALIAS DNS OF NGINX SERVER>;

    ssl_certificate     /etc/nginx/certs/mycert.crt;
    ssl_certificate_key /etc/nginx/certs/mykey.key;
    ssl_session_cache   shared:SSL:1m;
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/mylog.access.log;
    error_log  /var/log/nginx/mylog.error.log debug;

    location / {
        proxy_pass https://the-gitlab-host:443;
    }
}
```

But it does not work. When I try to perform a git command from a server in the secure zone, it fails, and in the nginx logs I see a timeout, which is normal, because nginx does not use the Squid forward proxy to reach the GitLab server.

Thank you in advance for your help! Best regards.

Configure the F5 BIG-IP as an Explicit Forward Web Proxy Using LTM
In a previous article, I provided a guide on using F5's Access Policy Manager (APM) and Secure Web Gateway (SWG) to provide forward web proxy services. While that guide was for organizations looking to provide secure internet access for internal users, URL filtering, and protection against both inbound and outbound malware, this guide uses only F5's Local Traffic Manager to allow internal clients external internet access. This week I was working with F5's very talented professional services team and we were presented with a requirement to allow workstation agents internet access to known secure sites in order to upload logs and analytics. While this capability can be used to meet a number of other use cases, this was a real-world use case I wanted to share. So with that, let's get to it!

Creating a DNS Resolver

Navigate to Network > DNS Resolvers > click Create
- Name: DemoDNSResolver
- Leave all other settings at their defaults and click Finished

Click the newly created DNS resolver object, click Forward Zones, then click Add. In this use case, we will forward all requests to this DNS resolver.
- Name: .
- Address: 8.8.8.8 (Note: Please use the correct DNS server for your use case.)
- Service Port: 53
Click Add and Finished

Creating a Network Tunnel

Navigate to Network > Tunnels > Tunnel List > click Create
- Name: DemoTunnel
- Profile: tcp-forward
Leave all other settings default and click Finished

Create an HTTP Profile

Navigate to Local Traffic > Profiles > Services > HTTP > click Create
- Name: DemoExplicitHTTP
- Proxy Mode: Explicit
- Parent Profile: http-explicit
Scroll until you reach the Explicit Proxy settings.
- DNS Resolver: DemoDNSResolver
- Tunnel Name: DemoTunnel
Leave all other settings default and click Finish

Create an Explicit Proxy Virtual Server

Navigate to Local Traffic > Virtual Servers > click Create
- Name: explicit_proxy_vs
- Type: Standard
- Destination Address/Mask: 10.1.20.254 (Note: This must be an IP address the internal clients can reach.)
Service Port: 8080
Protocol: TCP (Note: this use case was for TCP traffic directed at known hosts on the internet. If you require other protocols, or all of them, select the correct option for your use case from the drop-down menu.)
Protocol Profile (Client): f5-tcp-progressive
Protocol Profile (Server): f5-tcp-wan
HTTP Profile: DemoExplicitHTTP
VLAN and Tunnel Traffic: Enabled on: Internal
Source Address Translation: Auto Map
Leave all other settings at their defaults and click Finished.

Create a Fast L4 Profile
Navigate to Local Traffic > Profiles > Protocol > Fast L4 > click Create.
Name: demo_fastl4
Parent Profile: fastL4
Enable Loose Initiation and Loose Close as shown in the screenshot below.
Click Finished.

Create a Wildcard Virtual Server
In order to catch and forward all traffic to the BIG-IP's default gateway, we will create a virtual server to accept traffic from our explicit proxy virtual server created in the previous steps.
Navigate to Local Traffic > Virtual Servers > Virtual Server List > click Create.
Name: wildcard_VS
Type: Forwarding (IP)
Source Address: 0.0.0.0/0
Destination Address: 0.0.0.0/0
Protocol: *All Protocols
Service Port: 0 *All Ports
Protocol Profile: demo_fastl4
VLAN and Tunnel Traffic: Enabled on DemoTunnel
Source Address Translation: Auto Map
Leave all other settings at their defaults and click Finished.

Testing and Validation
Navigate to a workstation on your internal network. Launch Internet Explorer or the browser of your preference, and modify the proxy settings to reflect the explicit_proxy_vs created in previous steps. Attempt to access several sites and validate that you are able to reach them. Whether successful or unsuccessful, navigate to Local Traffic > Virtual Servers > Virtual Server List > click the Statistics tab and validate that traffic is hitting both of the virtual servers created above.
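Rather than modifying browser proxy settings, you can point any HTTP client at the explicit proxy during validation. A minimal Python sketch, assuming the proxy address used in this walkthrough (10.1.20.254:8080):

```python
import urllib.request

# Hypothetical address from this walkthrough: the explicit proxy
# virtual server listens on 10.1.20.254:8080.
PROXY = "http://10.1.20.254:8080"

# Route both HTTP and HTTPS through the explicit proxy, just as the
# browser proxy settings in the validation step do.
handler = urllib.request.ProxyHandler({"http": PROXY, "https": PROXY})
opener = urllib.request.build_opener(handler)

# opener.open("https://example.com") would now send the request
# through the BIG-IP explicit proxy virtual server.
```

The equivalent check from a shell is `curl -v -x http://10.1.20.254:8080 https://example.com`, which also shows the CONNECT exchange with the proxy.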
If it is not, for troubleshooting purposes only, configure the virtual servers to accept traffic on All VLANs and Tunnels, and use tools such as curl and tcpdump. You have now successfully configured your F5 BIG-IP to act as an explicit forward web proxy using LTM only. As stated above, this configuration is not meant to fulfill all forward proxy use cases. If URL filtering and malware protection are required, APM and SWG integration should be considered. Until next time!
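For repeatable builds, the GUI steps in this walkthrough can be approximated from the BIG-IP command line. The following tmsh sketch uses the object names from this article; the syntax varies between TMOS versions, so treat it as a starting point to verify against your system rather than authoritative commands:

```
# Approximate tmsh equivalents of the GUI steps above (verify per TMOS version)
create net dns-resolver DemoDNSResolver forward-zones add { . { nameservers add { 8.8.8.8:53 } } }
create net tunnels tunnel DemoTunnel profile tcp-forward
create ltm profile http DemoExplicitHTTP defaults-from http-explicit proxy-type explicit explicit-proxy { dns-resolver DemoDNSResolver tunnel-name DemoTunnel }
create ltm profile fastl4 demo_fastl4 defaults-from fastL4 loose-initialization enabled loose-close enabled
create ltm virtual explicit_proxy_vs destination 10.1.20.254:8080 ip-protocol tcp profiles add { f5-tcp-progressive { context clientside } f5-tcp-wan { context serverside } DemoExplicitHTTP } source-address-translation { type automap } vlans-enabled vlans add { internal }
create ltm virtual wildcard_VS destination 0.0.0.0:any mask any ip-forward profiles add { demo_fastl4 } source-address-translation { type automap } vlans-enabled vlans add { DemoTunnel }
save sys config
```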
SSL Client Certification Alert 46 Unknown CA

We are seeing 'Alert 46 Unknown CA' as part of the initial TLS handshake between client and server. In a Wireshark capture, the first Client Hello is visible, followed by the 'Server Hello, Certificate, Server Key Exchange, Certificate Request, Server Hello Done'. As part of this exchange, TLS version 1.2 is agreed, along with the cipher. The next packet in the flow is an ACK from the source, followed by Alert (Fatal), Description: Certificate Unknown. Nowhere in the capture can I see a certificate provided by the client. This behaviour occurs regardless of the client authentication/client certificate setting (ignore/request/require). I have run openssl s_client -connect x.x.x.x:443 as a test (from the BIG-IP) and I see the server-side certs and 'No client certificate CA names sent', which is expected as no client cert was sent. The end client has not reinstalled the client certificate yet (3-day lead time). Are there any additional troubleshooting steps I can undertake to confirm whether the client is rejecting the server certificate and therefore not returning its client certificate? Kind regards
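One further isolation step you can try from the BIG-IP (or any host where the client credentials are available) is to drive the mutual-TLS handshake manually with openssl s_client, this time supplying a known-good client certificate and key. The file names below are placeholders:

```
openssl s_client -connect x.x.x.x:443 \
    -cert client.crt -key client.key \
    -CAfile server-ca-bundle.crt -state
```

The -state flag prints each handshake message, so you can see whether the client certificate is actually sent after the Certificate Request and exactly where the fatal alert is raised. If this handshake succeeds but the real client still fails, the client's certificate store (or its trust of the server's CA chain) is the likely culprit.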
Automate Let's Encrypt Certificates on BIG-IP

To quote the evil emperor Zurg: "We meet again, for the last time!" It's hard to believe it's been six years since my first rodeo with Let's Encrypt and BIG-IP, but (uncompromised) timestamps don't lie. And maybe this won't be my last look at Let's Encrypt, but it will likely be the last time I do so as a standalone effort, which I'll come back to at the end of this article. The first project was a compilation of shell scripts, python scripts, and config files, and well, this is no different. But it's all updated to meet the acme protocol version requirements for Let's Encrypt. Here's a quick table to connect all the dots:

Description            What's Out                 What's In
acme client            letsencrypt.sh             dehydrated
python library         f5-common-python           bigrest
BIG-IP functionality   creating the SSL profile   utilizing an iRule for the HTTP challenge

The f5-common-python library has not been maintained or enhanced for at least a year now, and I have an affinity for the good work Leo did with bigrest and I enjoy using it. I opted not to carry the SSL profile configuration forward because that functionality is more app-specific than the certificates themselves. And finally, whereas my initial project used the DNS challenge with the name.com API, in this proof of concept I chose to use an iRule on the BIG-IP to serve the challenge for Let's Encrypt to perform validation against. Whereas my solution is new, the way Let's Encrypt works has not changed, so I've carried forward the process from my previous article that I've now archived. I'll defer to their how it works page for details, but basically the steps are:

Define a list of domains you want to secure.
Your client reaches out to the Let's Encrypt servers to initiate a challenge for those domains.
The servers will issue an HTTP or DNS challenge based on your request.
You place a file on your web server, or a TXT record in the DNS zone file, with that challenge information.
The servers will validate your challenge information and notify you.
You clean up your challenge files or TXT records.
The servers will issue the certificate and certificate chain to you.
You now have the key, cert, and chain, and can deploy them to your web servers, or in our case, to the BIG-IP.

Before kicking off a validation and generation event, the client registers your account based on your settings in the config file. The files in this project are as follows:

/etc/dehydrated/config           # Dehydrated configuration file
/etc/dehydrated/domains.txt      # Domains to sign and generate certs for
/etc/dehydrated/dehydrated       # acme client
/etc/dehydrated/challenge.irule  # iRule configured and deployed to BIG-IP by the hook script
/etc/dehydrated/hook_script.py   # Python script called by dehydrated for special steps in the cert generation process

# Environment Variables
export F5_HOST=x.x.x.x
export F5_USER=admin
export F5_PASS=admin

You add your domains to the domains.txt file (more work is likely if you're signing a lot of domains; I tested the one I have access to). The dehydrated client, of course, is required, and then the hook script that dehydrated interacts with to deploy challenges and certificates. I aptly named that hook_script.py. For my hook, I'm deploying a challenge iRule to be applied only during the challenge; it is modified each time, specific to the challenge supplied by the Let's Encrypt service, and is cleaned up after the challenge is tested. And finally, there are a few environment variables I set so the information is not in text files. You could also move these into a credential vault. So to recap: you first register your client, then you can kick off a challenge to generate and deploy certificates.
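dehydrated drives the hook through a simple convention: the hook is executed once per lifecycle event, with the event name as the first argument (deploy_challenge, clean_challenge, deploy_cert, and a few others) followed by event-specific arguments. A minimal Python skeleton of that dispatch, with the actual bigrest/BIG-IP calls elided, is sketched below; the real hook_script.py in the repo is the authoritative version:

```python
import sys

# dehydrated invokes the hook as: hook OPERATION ARG...
# The BIG-IP work (templating/attaching the challenge iRule and
# uploading certs via bigrest) would live inside these handlers.

def deploy_challenge(domain, token_filename, token_value):
    print(f"(hook) Deploying Challenge for {domain}")

def clean_challenge(domain, token_filename, token_value):
    print(f"(hook) Cleaning Challenge for {domain}")

def deploy_cert(domain, keyfile, certfile, fullchainfile, chainfile, timestamp):
    print(f"(hook) Deploying Certs for {domain}")

HANDLERS = {
    "deploy_challenge": deploy_challenge,
    "clean_challenge": clean_challenge,
    "deploy_cert": deploy_cert,
}

def main(argv):
    if not argv:
        return
    operation, args = argv[0], argv[1:]
    handler = HANDLERS.get(operation)
    if handler:  # silently ignore events this hook doesn't act on
        handler(*args)

if __name__ == "__main__":
    main(sys.argv[1:])
```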
On the client side, it looks like this:

./dehydrated --register --accept-terms
./dehydrated -c

Now, for testing, make sure you use the Let's Encrypt staging service instead of production. And since I want to force action on every request while testing, I run the second command a little differently:

./dehydrated -c --force --force-validation

Depicted graphically, here are the moving parts for the HTTP challenge issued by Let's Encrypt at the request of the dehydrated client, deployed to the F5 BIG-IP, and validated by the Let's Encrypt servers. The Let's Encrypt servers then generate and return certs to the dehydrated client, which then, via the hook script, deploys the certs and keys to the F5 BIG-IP to complete the process. And here's the output of the dehydrated client and hook script in action from the CLI:

# ./dehydrated -c --force --force-validation
# INFO: Using main config file /etc/dehydrated/config
Processing example.com
 + Checking expire date of existing cert...
 + Valid till Jun 20 02:03:26 2022 GMT (Longer than 30 days). Ignoring because renew was forced!
 + Signing domains...
 + Generating private key...
 + Generating signing request...
 + Requesting new certificate order from CA...
 + Received 1 authorizations URLs from the CA
 + Handling authorization for example.com
 + A valid authorization has been found but will be ignored
 + 1 pending challenge(s)
 + Deploying challenge tokens...
 + (hook) Deploying Challenge
 + (hook) Challenge rule added to virtual.
 + Responding to challenge for example.com authorization...
 + Challenge is valid!
 + Cleaning challenge tokens...
 + (hook) Cleaning Challenge
 + (hook) Challenge rule removed from virtual.
 + Requesting certificate...
 + Checking certificate...
 + Done!
 + Creating fullchain.pem...
 + (hook) Deploying Certs
 + (hook) Existing Cert/Key updated in transaction.
 + Done!

This results in a deployed certificate/key pair on the F5 BIG-IP, and it is modified in a transaction for future updates.
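For reference, the per-challenge iRule the hook deploys only needs to do one thing: answer Let's Encrypt's HTTP request for the token under /.well-known/acme-challenge/. Its rough shape is below, where <TOKEN> and <KEYAUTH> are placeholders the hook fills in for each challenge; the repo's challenge.irule is the authoritative version:

```
when HTTP_REQUEST {
    if { [HTTP::path] equals "/.well-known/acme-challenge/<TOKEN>" } {
        # The key authorization is "<token>.<account thumbprint>"
        HTTP::respond 200 content "<KEYAUTH>" "Content-Type" "text/plain"
    }
}
```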
This proof of concept is on GitHub in the f5devcentral org if you'd like to take a look. Before closing, however, I'd like to mention a couple of things. This is an update to an existing solution from years ago. It works, but it probably isn't the best way to automate today if you're just getting started, particularly if you have already begun pursuing a more modern approach to automation. A better path would be something like Ansible. On that note, there are several solutions you can take a look at, posted below in the resources.

Resources
https://github.com/EquateTechnologies/dehydrated-bigip-ansible
https://github.com/f5devcentral/ansible-bigip-letsencrypt-http01
https://github.com/s-archer/acme-ansible-f5
https://github.com/s-archer/terraform-modular/tree/master/lets_encrypt_module (Terraform instead of Ansible)
https://community.f5.com/t5/technical-forum/let-s-encrypt-with-cloudflare-dns-and-f5-rest-api/m-p/292943 (Similar solution to mine, only slightly more robust, with OCSP stapling, the DNS challenge instead of HTTP, and bash instead of Python)