SSL Forward Proxy, iRules and Client Hello
Hi all, I am seeing odd behaviour using SSL forward proxy (SSLO): my intention is to use the client hello (SNI) to influence SSL profile selection. I have 2 SSL profiles set up, let's call them A and B. For trusted connections (i.e. certificate issuers in the SSL CA bundle) I am unable to extract the SNI from the initial client hello using the CLIENTSSL_CLIENTHELLO event and [SSL::extensions -type 0]. These are sent to profile A based on SNI. I have pcaps showing the client hello arriving at the F5. I assume this may have something to do with the 'verified handshake' functionality. The test client browser keeps retrying the connection and I see inconsistent results (some connections are reset, some succeed). In the iRule logs it's apparent the SNI does eventually become available in the CLIENTSSL_CLIENTHELLO event. For untrusted/self-signed certificates this doesn't appear to happen; these are sent to profile B (identical to A for testing purposes). So my assumption is the F5 is doing some kind of SNI processing (comparing to CNs in the trust store?) and then connecting to the server for the 'verified handshake' before releasing the SNI into the CLIENTSSL_CLIENTHELLO event? I have seen an iRule that effectively disables SSL and then parses the raw client hello for SNI. I expect this may work, as it would intercept the raw client hello so the F5 cannot interfere or do any server-side preamble, but I'd rather do this within the realms of defined events if possible... :-) Any suggestions or comments welcome! Thanks

Recommendation for Adv. Lab
Hi Everyone, I'm relatively new to F5 BIG-IP and want to improve my hands-on skills. I have a chance to build a good lab, but I'm struggling to find real-world use cases and troubleshooting scenarios. Currently, I can only run basic tests with DVWA, but I want to simulate a complex environment. Could you recommend any resources (videos, docs, lab guides, or anything else that can help) specifically for LTM, AWAF, DNS and APM: use-case scenarios, troubleshooting exercises, architectures, etc.? Any guidance to help me bridge the gap between basic setup and professional practice would be greatly appreciated. Thanks in advance!

GTM Pool Members Gone After Maintenance? It's Probably This One Setting
You finish a maintenance window, everything looks good on LTM, and then someone notices Wide IPs are resolving to fewer destinations than before. You check the GTM pools and the members are just... gone. The virtual servers are fine on LTM. GTM just doesn't know about them anymore — and more importantly, it doesn't remember if they were ever pool members. This happens more often than it should, and it almost always comes back to the same thing: virtual-server-discovery enabled doing exactly what it was designed to do, at exactly the wrong moment.

What's Actually Going On

When virtual-server-discovery is set to enabled on a GTM server object, GTM keeps its view of LTM virtual servers in sync via iQuery. It automatically adds new virtual servers, updates existing ones, and — this is the part that causes problems — deletes virtual servers that LTM stops reporting on. That delete behavior is the issue. Any time iQuery reports zero virtual servers, even temporarily, GTM treats it as a mass deletion event. The virtual servers get pulled from the server object, and with them, their pool memberships. When LTM eventually reports on those virtual servers again, GTM re-discovers them as brand new objects with no memory of which pools they belonged to. Two scenarios trigger this consistently.

Scenario 1: LTM Software Upgrade

This is the one that catches most people. During an upgrade, LTM reboots and goes through a phase where iQuery can connect but the full configuration hasn't finished loading yet. From GTM's perspective, LTM is reachable but reporting no virtual servers. GTM interprets that as a deletion event, clears out the discovered virtual servers, and empties the pools. When LTM finishes loading and the virtual servers come back, GTM re-discovers them — but the pool memberships are gone. You're left manually rebuilding what was there before the maintenance window started. The telltale sign is pool members coming back in blue/CHECKING state.
That only happens to newly discovered objects. GTM treated a returning virtual server as a brand new one — because as far as it's concerned, it is. The GTM log won't show a deletion event, only the re-add. That gap in the logs is a known blind spot with virtual-server-discovery enabled, and it's exactly why the problem is hard to diagnose after the fact. What you'll typically see in /var/log/gtm after the LTM comes back:

alert gtmd[xxxxx]: 011a1005:1: SNMP_TRAP: Pool your_pool state change green --> red (No enabled pool members available)
alert gtmd[xxxxx]: 011a3004:1: SNMP_TRAP: Wide IP your.wideip.example.com state change green --> red (No enabled pools available)

And then shortly after, the virtual servers re-appear in CHECKING state as GTM re-discovers them — but with no pool bindings.

Scenario 2: LTM HA Failover

This one surprises people because the LTM pair is still running — it's just switching active units. After a failover, the new active device may not have its iQuery connections fully re-established yet. GTM sees the iQuery state as inconsistent, virtual server status updates stop coming through, and members disappear from the discovered list. What makes this harder to diagnose is that tmsh show gtm iquery may show "connected" — but connected doesn't mean the config sync is working correctly. In a GTM sync group, only the device assigned local ID 0 (the GTM with the lowest IP address) is responsible for writing auto-discovery results to the configuration. If that specific device loses its iQuery connection during the failover window, discovery events are missed entirely — even if every other GTM in the group can still reach the LTM. So you can have a situation where five out of six GTMs look perfectly healthy, iQuery shows connected everywhere, and yet pool members are still disappearing — because the one device that matters for discovery is the one with the broken connection.
You can check which device in your sync group holds local ID 0 with:

tmsh list sys db gtm.peerinfolocalid

If that device's iQuery connection to the LTM is the one that dropped during the failover window, that's your answer — even if everything else looks fine.

The Fix: enabled-no-delete

Both scenarios share the same root cause: GTM's auto-delete behavior treating a temporary iQuery disruption as a permanent deletion event. The fix is the same for both:

gtm server /Common/site1-ltm {
    addresses {
        10.1.1.1 {
            device-name site1-ltm
        }
    }
    datacenter /Common/dc1
    monitor /Common/bigip
    virtual-server-discovery enabled-no-delete
}

With enabled-no-delete, GTM still auto-discovers new virtual servers and keeps existing ones updated. The only thing that changes is that it will never delete a virtual server just because LTM temporarily stopped reporting it. Your pool memberships survive both scenarios above.

Mode               | Adds new VS | Updates VS | Deletes VS | Pool memberships survive iQuery disruption?
disabled           | No          | No         | No         | Yes — nothing changes
enabled            | Yes         | Yes        | Yes        | No — any disruption can empty pools
enabled-no-delete  | Yes         | Yes        | No         | Yes — preserved

The Trade-Off

enabled-no-delete won't clean up after you when you intentionally decommission a virtual server on LTM. The stale GTM object stays in the discovered list until you remove it manually. In environments with a lot of VS churn, this can accumulate over time. The question is which failure mode you'd rather manage: pool members silently disappearing during a maintenance window, or occasionally needing to clean up stale objects after a planned decommission. For most production environments, the latter is far easier to deal with — and far less likely to wake someone up at 2am.
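The three modes in the table above can be sketched as a tiny model. This is a hypothetical simplification for illustration only — the function name apply_discovery, the set-based state, and treating the modes as plain strings are my assumptions, not GTM internals:

```python
# Toy model of GTM's virtual-server-discovery modes reacting to one iQuery
# report. An empty report is the "mid-upgrade" case described above.
def apply_discovery(known_vs, reported_vs, mode):
    """Return GTM's view of discovered virtual servers after one report."""
    if mode == "disabled":
        return set(known_vs)                     # discovery never changes anything
    if mode == "enabled":
        return set(reported_vs)                  # full sync: adds AND deletes
    if mode == "enabled-no-delete":
        return set(known_vs) | set(reported_vs)  # adds/updates, never deletes
    raise ValueError(f"unknown mode: {mode}")

known = {"vs-app1", "vs-app2"}
# Mid-upgrade, LTM is reachable but reports zero virtual servers:
print(apply_discovery(known, [], "enabled"))            # empty set: pools emptied
print(apply_discovery(known, [], "enabled-no-delete"))  # both members preserved
```

The model makes the trade-off obvious: with enabled-no-delete the only way a virtual server ever leaves the known set is a manual delete, which is exactly why stale objects accumulate after a planned decommission.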
How to Make the Change

Via tmsh:

tmsh modify gtm server /Common/site1-ltm \
    virtual-server-discovery enabled-no-delete
tmsh save sys config

Via GUI:

- Go to DNS → GSLB → Servers
- Select the server object
- Set Virtual Server Discovery to Enabled (No Delete)
- Click Update

This takes effect immediately and does not affect existing discovered virtual servers or current pool memberships.

Cleaning Up Stale Objects

When you intentionally decommission a virtual server on LTM, remove the leftover GTM object manually:

# List virtual servers under a GTM server object
tmsh list gtm server /Common/site1-ltm virtual-server

# Remove a specific stale entry
tmsh modify gtm server /Common/site1-ltm \
    virtual-servers delete { /Common/old-vs-name }
tmsh save sys config

Make this part of your standard VS decommission runbook and stale objects will never pile up.

Quick Diagnostic When Members Go Missing

Before assuming it's a discovery issue, check iQuery health across all GTM devices first:

tmsh show gtm iquery

Look for:

- State: should be connected for all entries
- Reconnects: a high count suggests instability even if the connection looks up
- Configuration Time: None means the config has never successfully synced from that LTM

Then confirm which GTM holds local ID 0 and verify its connectivity specifically:

tmsh list sys db gtm.peerinfolocalid

If the local ID 0 device is the one with the broken iQuery connection, that's your answer — regardless of what the other devices are showing.

Wrapping Up

Whether it's an LTM upgrade or an HA failover, the pattern is the same: iQuery goes quiet for a moment, GTM interprets silence as deletion, and your pool memberships are gone. It's working as designed — just not in a way that's useful to you. enabled-no-delete is a one-line change that stops this from happening. The cleanup overhead it introduces is predictable and manageable. The alternative — rebuilding pool memberships after an unplanned event — is not.
Have you run into either of these scenarios in your environment? Drop a comment below, especially if you've seen the local ID 0 shift cause issues during a rolling GTM upgrade.

SSL Orchestrator Advanced Use Cases: Client Certificate Constrained Delegation (C3D) Support
Introduction

F5 BIG-IP is synonymous with "flexibility". You likely have few other devices in your architecture that provide the breadth of capabilities that come native with the BIG-IP platform. And for each and every BIG-IP product module, the opportunities to expand functionality are almost limitless. In this article series we examine the flexibility options of the F5 SSL Orchestrator in a set of "advanced" use cases. If you haven't noticed, the world has been steadily moving toward encrypted communications. Everything from web, email, voice, video, chat, and IoT is now wrapped in TLS, and that's a good thing. The problem is, malware - that thing that creates havoc in your organization, that exfiltrates personnel records to the Dark Web - isn't stopped by encryption. TLS 1.3 and multi-factor authentication don't eradicate malware. The only reasonable way to defend against it is to catch it in the act, and an entire industry of security products is designed for just this task. But ironically, encryption makes this hard. You can't protect against what you can't see. F5 SSL Orchestrator simplifies traffic decryption and malware inspection, and dynamically orchestrates traffic to your security stack. But it does much more than that. SSL Orchestrator is built on top of F5's BIG-IP platform, and as stated earlier, abounds with flexibility.

SSL Orchestrator Use Case: Client Certificate Constrained Delegation (C3D)

Using certificates to authenticate is one of the oldest and most reliable forms of authentication. While not every application supports modern federated access or multi-factor schemes, you'll be hard-pressed to find something that doesn't minimally support authentication over TLS with certificates. And coupled with hardware tokens like smart cards, certificates can enable one of the most secure multi-factor methods available. But certificate-based authentication has always presented a unique challenge to security architectures.
Certificate "mutual TLS" authentication requires an end-to-end TLS handshake. When a server indicates a requirement for the client to submit its certificate, the client must send both its certificate and a digitally-signed hash value. This hash value is signed (i.e. encrypted) with the client's private key. Should a device between the client and server attempt to decrypt and re-encrypt, it would be unable to satisfy the server's authentication request by virtue of not having access to the client's private key (to create the signed hash). This makes encrypted malware inspection complicated, often requiring a total bypass of inspection for sites that require mutual TLS authentication.

Fortunately, F5 has an elegant solution to this challenge in the form of Client Certificate Constrained Delegation, affectionately referred to as "C3D". The concept is reasonably straightforward. In very much the same way that SSL forward proxy re-issues a remote server certificate to local clients, C3D can re-issue a remote client certificate to local servers. A local server can continue to enforce secure mutual TLS authentication, while allowing the BIG-IP to explicitly decrypt and re-encrypt in the middle. This presents an immediate advantage in basic load balancing, where access to the unencrypted data allows the BIG-IP greater control over persistence. In the absence of this, persistence would typically be limited to IP address affinity. But of course, access to the unencrypted data also allows the content to be scanned for malware.

C3D actually takes this concept of certificate re-signing to a higher level though. The "constrained delegation" portion of the name implies a characteristic much like Kerberos constrained delegation, where (arbitrary) attributes can be inserted into the re-signed token, like the PAC attributes in a Kerberos ticket, to inform the server about the client.
Servers for their part can then simply filter on client certificates issued by the BIG-IP (to prevent direct access), and consume any additional attributes in the certificate to understand how better to handle the client. With C3D you can maintain strong mutual TLS authentication all the way through to your servers, while allowing the BIG-IP to more effectively manage availability. And combined with SSL Orchestrator, C3D can enable decryption of content for malware inspection. This article describes how to configure SSL Orchestrator to enable C3D for inbound decrypted inspection. Arguably, most of what follows is the C3D configuration itself, as the integration with SSL Orchestrator is pretty simple. Note that Client Certificate Constrained Delegation (C3D) is included with Local Traffic Manager (LTM) 13.0 and beyond, but for integration with SSL Orchestrator you should be running 14.1 or later. To learn more about C3D, please see the following resources:

- K14065425: Configuring Client Certificate Constrained Delegation (C3D): https://support.f5.com/csp/article/K14065425
- Manual Chapter: SSL Traffic Management: https://techdocs.f5.com/en-us/bigip-15-1-0/big-ip-system-ssl-administration/ssl-traffic-management.html#GUID-B4D2529E-D1B0-4FE2-8C7F-C3774ADE1ED2
- SSL::c3d iRule reference (not required to use C3D, but adds powerful functionality): https://clouddocs.f5.com/api/irules/SSL__c3d.html

The integration of C3D with SSL Orchestrator involves effectively replacing the client and server SSL profiles that the SSL Orchestrator topology creates with C3D SSL profiles. This is done programmatically with an iRule, so no "non-strict" customization is required at the topology. Also note that an inbound (reverse proxy) SSL Orchestrator topology will take the form of a "gateway mode" deployment (a routed path to multiple applications), or an "application mode" deployment (a single application instance hosted at the BIG-IP).
See section 2.5 of the SSL Orchestrator deployment guide for a deeper examination of gateway and application modes: https://clouddocs.f5.com/sslo-deployment-guide/ The C3D integration is only applicable to application mode deployments.

Configuration

C3D itself involves the creation of client and server SSL profiles.

Create a new Client SSL profile:

Configuration
- Certificate Key Chain: public-facing server certificate and private key. This will be the certificate and key presented to the client on the inbound request. It will likely be the same certificate and key defined in the SSL Orchestrator inbound topology.

Client Authentication
- Client Certificate: require
- Trusted Certificate Authorities: bundle that can validate the client certificate. This is a certificate bundle used to verify the client's certificate, and will contain all of the client certificate issuer CAs.
- Advertised Certificate Authorities: optional CA hints bundle. Not expressly required, but this certificate bundle is forwarded to the client during the TLS handshake to "hint" at the correct certificate, based on issuer.

Client Certificate Constrained Delegation
- Client Certificate Constrained Delegation: Enabled
- Client Fallback Certificate (new in 15.1): option to select a default (local) client certificate if the client does not present one. The primary use case here might be to select a "template" certificate, and use an iRule function to insert arbitrary attributes.
- OCSP: optional client certificate revocation control. This option defines an OCSP revocation provider for the client certificate.
- Unknown OCSP Response Control (new in 15.1): if an OCSP revocation provider is selected, this option defines what to do if the response to the OCSP query is "unknown".

Create a new Server SSL profile:

Configuration
- Certificate: default.crt
The certificate and key here are used as "templates" for the re-signed client certificate.
- Key: default.key

Client Certificate Constrained Delegation
- Client Certificate Constrained Delegation: Enabled
- CA Certificate: local forging CA cert. This is the CA certificate used to re-sign the client certificate. This CA must be trusted by the local servers.
- CA Key: local forging CA key
- CA Passphrase: optional CA passphrase
- Certificate Extensions: extensions from the real client cert to be included in the forged cert. This is the list of certificate extensions to be copied from the original certificate to the re-issued certificate.
- Custom Extension: additional extensions (as OID values) to copy from the real cert to the forged cert.

Additional considerations: Under normal conditions, the F5 and backend server attempt to resume existing SSL sessions, whereby the server doesn't send a Certificate Request message. The effect is that all connections to the backend server use the same forged client cert. There are two ways to get around this:
- Set a zero-length cache value in the server SSL profile, or
- Set server authentication frequency to 'always' in the server SSL profile

CA certificate considerations: A valid signing CA certificate should possess the following attributes. While it can work in some limited scenarios, a self-signed server certificate is generally not an adequate option for the signing CA.
- keyUsage: certificate extension containing "keyCertSign" and "digitalSignature" attributes
- basicConstraints: certificate extension containing "CA = true" (for Yes), marked as "Critical"

With the client and server SSL profiles built, the C3D configuration is basically done. To integrate with an inbound SSL Orchestrator topology, create a simple iRule and add it to the topology's Interception Rule configuration. Modify the SSL profile paths below to reflect the profiles you created earlier.
### Modify the SSL profile paths below to match real C3D SSL profiles
when CLIENT_ACCEPTED priority 250 {
    ## set clientssl
    set cmd1 "SSL::profile /Common/c3d-clientssl" ; eval $cmd1
}
when SERVER_CONNECTED priority 250 {
    ## set serverssl
    SSL::profile "/Common/c3d-serverssl"
}

In the SSL Orchestrator UI, either from the topology workflow, or directly from the corresponding Interception Rule configuration, add the above iRule and deploy. The above iRule programmatically overrides the SSL profiles applied to the Interception Rule (virtual server), effectively enabling C3D support. At this point, the virtual server will request a client certificate, perform revocation checks if defined, and then mint a local copy of the client certificate to pass to the backend server. Optionally, you can insert additional certificate attributes via the server SSL profile configuration, or more dynamically through additional iRule logic:

### Added in 15.1 - allows you to send a forged cert to the server
### irrespective of the client side authentication (ex. APM SSO),
### and insert arbitrary values
when SERVERSSL_SERVERCERT {
    ### The following options allow you to override/replace a submitted
    ### client cert. For example, a minted client certificate can be sent
    ### to the server irrespective of the client side authentication method.
    ### This certificate "template" could be defined locally in the iRule
    ### (Base64-encoded), pulled from an iFile, or some other certificate source.
    # set cert1 [b64decode "LS0tLS1a67f7e226f..."]
    # set cert1 [ifile get template-cert]

    ### In order to use a template cert, it must first be converted to DER format
    # SSL::c3d cert [X509::pem2der $cert1]

    ### Insert arbitrary attributes (OID:value)
    SSL::c3d extension 1.3.6.1.4.1.3375.3.1 "TEST"
}

If you've configured the above, a server behind SSL Orchestrator that requires mutual TLS authentication can receive minted client certificates from external users, and SSL Orchestrator can explicitly decrypt and pass traffic to the set of malware inspection tools. You can look at the certificate sent to the server by taking a tcpdump packet capture between the BIG-IP and server, then opening it in Wireshark:

tcpdump -lnni [VLAN] -Xs0 -w capture.pcap [additional filters]

Finally, you might be asking what to do with certificate attributes injected by C3D, and really it depends on what the server can support. The below is a basic example in an Apache config file to block a client certificate that doesn't contain your defined attribute:

<Directory />
    SSLRequire "HTTP/%h" in PeerExtList("1.3.6.1.4.1.3375.3.1")
    RewriteEngine on
    RewriteCond %{SSL:SSL_CLIENT_VERIFY} !=SUCCESS
    RewriteRule .? - [F]
    ErrorDocument 403 "Delegation to SPN HTTP/%h failed. Please pass a valid client certificate"
</Directory>

And there you have it. In just a few steps you've configured your SSL Orchestrator to integrate with Client Certificate Constrained Delegation to support mutual TLS authentication, and along the way you have hopefully recognized the immense flexibility at your command.

Updates

As of F5 BIG-IP 16.1.3, there are some new C3D capabilities:

C3D has been updated to encode and return the commonName (CN) found in the client certificate subject field in printableString format if possible; otherwise the value will be encoded as UTF8.
C3D has been updated to support inserting a subject commonName (CN) via the 'SSL::c3d subject commonName' command:

when CLIENTSSL_HANDSHAKE {
    if {[SSL::cert count] > 0} {
        SSL::c3d subject commonName [X509::subject [SSL::cert 0] commonName]
    }
}

C3D has been updated to support inserting a Subject Alternative Name (SAN) and Authority Info Access (AIA) via 'SSL::c3d extension' commands:

when CLIENTSSL_HANDSHAKE {
    ## Insert Subject Alternative Name (SAN) value
    SSL::c3d extension SAN "DNS:*.test-client.com, IP:1.1.1.1"

    ### Insert Authority Info Access (AIA) value
    SSL::c3d extension AIA "ocsp,https://ocsp.entrust.net.com; caIssuer, https://aia.entrust.net/l1m-chain256.cer"
}

C3D has been updated to add the Authority Key Identifier (AKI) extension to the client certificate if the CA certificate has a Subject Key Identifier (SKI) extension.

Another interesting use case is copying the real client certificate Subject Key Identifier (SKI) to the minted client certificate. By default, the minted client certificate will not contain an SKI value, but it's easy to configure C3D to copy the origin cert's SKI by modifying the C3D server SSL profile. In the "Custom extension" field of the C3D section, add 2.5.29.14 as an available extension.

As of F5 BIG-IP 17.1.0 (SSL Orchestrator 11.0), C3D has been integrated natively. Now, for a deployed Inbound topology, the C3D SSL profiles are listed in the Protocol Settings section of the Interception Rules tab. You can replace the client and server SSL profiles created by SSL Orchestrator with C3D SSL profiles in the Interception Rules tab to support C3D. The C3D support is now extended to both Gateway and Application modes.

BIG-IP Report
Problem this snippet solves:

Overview

This is a script which will generate a report of the BIG-IP LTM configuration on all your load balancers, making it easy to find information and get a comprehensive overview of virtual servers and the pools connected to them. This information is used to relay information to the NOC and developers to give them insight into where things are located, and to be able to plan patching and deploys. I also use it myself as a quick way to get information or gather data used as a foundation for RFCs, e.g. get a list of all external virtual servers without compression profiles. The script has been running on 13 pairs of load balancers, indexing over 1200 virtual servers, for several years now, and the report is widely used across the company and by many companies and governments across the world. It's easy to set up and use and only requires auditor (read-only) permissions on your devices.

Demo/Preview

Interactive demo: http://loadbalancing.se/bigipreportdemo/

Screen shots

The main report:
The device overview:
Certificate details:

How to use this snippet:

Installation instructions

BigipReport REST

This is the only branch we're updating since the middle of 2020 and it supports 12.x and upwards.
Downloads: https://loadbalancing.se/downloads/bigipreport-v5.7.16.zip
Documentation, installation instructions and troubleshooting: https://loadbalancing.se/bigipreport-rest/
Docker support: https://loadbalancing.se/2021/01/05/running-bigipreport-on-docker/
Kubernetes support: https://loadbalancing.se/2021/04/16/bigipreport-on-kubernetes/

BIG-IP Report (Legacy)

Older version of the report that only runs on Windows and depends on a PowerShell plugin originally written by Joe Pruitt (F5).
BIG-IP Report (only download this if you have v10 devices): https://loadbalancing.se/downloads/bigipreport-5.4.0-beta.zip
iControl Snapin: https://loadbalancing.se/downloads/f5-icontrol.zip
Documentation and Installation Instructions: https://loadbalancing.se/bigip-report/
Upgrade instructions

Protect the report using APM and Active Directory

Written by DevCentral member Shann_P: https://loadbalancing.se/2018/04/08/protecting-bigip-report-behind-an-apm-by-shannon-poole/

Got issues/problems/feedback?

Still have issues? Drop a comment below. We usually reply quite fast. Any bugs found, issues detected or ideas contributed make the report better for everyone, so it's always appreciated.

Join us on Discord: https://discord.gg/7JJvPMYahA

Code: BigIP Report
Tested this on version: 12, 13, 14, 15, 16

Enforcing a Single Connection Max to Pool Members
I like finding jewels and nuggets of clarity in problems presented to the community at large, whether it's here on DevCentral or in third-party communities like Reddit, where member macallen posed the following problem in r/sysadmin a couple months back (paraphrased here, check the link for full context).

Problem Statement

I have a pool of five servers, and I need a maximum of one connection per server strictly enforced. When I set the connection limit to 1 at the node level, I'm still seeing a second connection offered when the 6th active request comes in. Any ideas on how I can accomplish this?

Diagnosing the Problem

First, I'll mock this up in my lab, only on a smaller scale of two servers rather than five, setting the connection limit on each server to one. Using curl from two virtual machines, I run curl 192.168.102.50/ several times and notice that I am seeing a max of two connections per server being enforced, not one as anticipated. The problem here is not that TMM is failing to honor the connection limits. The problem, at least on my test system, is that there are two TMMs present. Each TMM is limiting the servers to a maximum of one connection, so in this case, two connections are allowed instead of the required one. And just like the statistical representation of a family consisting of 2.3 kids, well, there's no such thing as .3 of a kid, and there's no such thing as .5 of a connection, so setting that doesn't make much sense and isn't allowed anyway. The good news is that for almost all use cases at scale, the BIG-IP does the math, taking the maximum configured connections and dividing by the number of TMMs. Note that this can lead to unexpected issues if for some reason the disaggregator (DAG) has an uneven connection distribution, and it is generally recommended NOT to have a connection maximum less than the active number of TMM instances. See K8457 for additional details. But now that the problem is known, what do I do about it?
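The arithmetic behind the diagnosis can be sketched as a toy model. To be clear, this is my own simplification of the behavior described above, not F5's actual algorithm — the floor-of-one-per-TMM rule and both function names are assumptions:

```python
# Toy model: a configured connection limit is split across TMMs, and no TMM
# can enforce a fractional limit, so each TMM ends up with at least 1.
def per_tmm_limit(configured_max, tmm_count):
    return max(1, configured_max // tmm_count)

def worst_case_total(configured_max, tmm_count):
    # With an even DAG distribution, every TMM can fill its own share.
    return per_tmm_limit(configured_max, tmm_count) * tmm_count

print(worst_case_total(1, 2))   # the doubled limit seen in the lab
print(worst_case_total(10, 2))  # at scale the division works out cleanly
```

The model shows why the recommendation above exists: any configured maximum below the TMM count inflates to one connection per TMM, which is exactly the two-connections-instead-of-one behavior observed in the lab.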
Solutions

Option #1 - Duct Tape & Chewing Gum!

In the Reddit thread, the original poster solved his own problem by, in his words: "I created a duct tape solution. I wrote a service that opens a port. When the user connects, it closes the port, when they disconnect it opens it back up. Then I created a contract in F5 for that port so it disables the node when the port is down. Cheap and dirty, but works." Glad to hear that works, but not a process I'd recommend. If someone else takes over ownership of that application and has no idea why that service exists and thus removes it... outage city!

Option #2 - Configure BIG-IP VE for a Single Core

I call this the machete mode, where I just whack some compute cycles away to solve the problem. That's an easy one! Shut down the image, strip it down to a single core, fire it back up, and presto! And if this was the only application in service, that would be fantastic. But that's not likely, and so punishing the rest of the application delivery needs to meet this need is not a great solution.

Option #3 - Pin the Virtual Server to a Single Core with an iRule

This option requires no system changes at all, just a simple iRule using a global variable, as global variables are not CMP compatible and thus will demote any virtual server to a single TMM, effectively pinning it and solving the problem. The iRule could look something like this:

when RULE_INIT {
    set ::global_pin_tmm 1
}

This iRule is clean and compact, with no impact to traffic since its only engagement is at initialization. It also has a useful name, indicating it's a global variable and its purpose is to pin the virtual server to a single TMM. Effective, but it feels a little icky to use an iRule with global variables in any version after 11.4, and one of my biggest messages when I speak at user groups is that "iRules are great! But don't use them!" I always suggest the use of a configuration option when available, and only when iRules are necessary should they be utilized.
Option #4 - Pin the Virtual Server to a Single Core with a TMSH Command

That brings me to the final option I'll explore, and that is to use a TMSH command to pin the virtual server. It's an option on the virtual server (not available in the GUI) to disable CMP:

tmsh modify ltm virtual <virtual name> cmp-enabled no

Super simple, crystal clear in the configuration, no Tcl-machine necessary. That sounds like a winner to me and is evident now in a new screen capture.

Conclusion

With BIG-IP, there are often many ways to approach a problem. Sometimes there are no clear advantages amongst solutions, but this problem has a clear winner, and that is the final option presented here: using the tmsh command to disable CMP.

Connection Rate Limit with log output
Hello, I have a question about the "Connection Rate Limit" setting. I understand that with this setting, the virtual server stops accepting new connections after the threshold is exceeded. However, I'd rather not block new connections, because I may block connections from normal users as well as malicious ones. (I want to output an error message to the log only.) Q: Do you have any suggestions? (I think it can be achieved by using an iRule.) Best regards,

syslog over tcp and define management IP as source
Hello, I used the following method to add a syslog server IP with a TCP port. Can anyone help me define the source IP (the management IP) for sending logs to the syslog server? https://support.f5.com/csp/article/K13080

Configuring the BIG-IP system to log to the remote syslog server using the TCP protocol

Impact of procedure: Performing the following procedure should not have a negative impact on your system.

1. Log in to tmsh by typing the following command:

tmsh

2. To log to the remote syslog server using the TCP protocol, use the following command syntax:

modify /sys syslog include "destination remote_server {tcp(\"<ip address>\" port (514));};filter f_alllogs {level (debug...emerg);};log {source(local);filter(f_alllogs);destination(remote_server);};"

For example, to log to the remote syslog server 172.28.68.42, type the following command:

modify /sys syslog include "destination remote_server {tcp(\"172.28.68.42\" port (514));};filter f_alllogs {level (debug...emerg);};log {source(local);filter(f_alllogs);destination(remote_server);};"

GRE Tunnel Issue
Has anyone run into an issue with GRE tunnels on a BIG-IP? I have a few set up running into a TGW in AWS, and something seems to break them. A config change? A module change? I haven't been able to pin down an exact trigger. Sometimes I can fail over and have the tunnels on the other HA member work fine, and failing back results in the tunnels going down again. (The tunnels are unique to each BIG-IP.) They start responding with ICMP protocol 47 unreachable. Once this happens, a reboot doesn't seem to fix it. If I tear down the BIG-IP and rebuild it, I can keep them working for X amount of time before the cycle repeats. Self-IPs are open to the protocol; I also tried allow-all for a bit. No NATs involved with the underlay IPs.

which virtual server will be hit?
Hi, we created the following virtual forwarding server for internet traffic on LTM:

virtual server: internet-vs
source IP: 192.12.0.1 (downstream firewall external interface IP)
destination: 0.0.0.0/0

For the return traffic of this VS, do we need to create another virtual server? If we create a new virtual forwarding server like the one below, will the return traffic of VS "internet-vs" hit this VS "Test-VS"?

virtual server: Test-VS
source: 0.0.0.0/0
destination: 192.12.0.1

Can someone please advise? Thanks in advance!