External Data Group Import Failing.
Hey All, Looking for some assistance on importing a DataGroup file. I found the doc that indicated the EOL needed to be in the Unix format instead of Windows. I've "converted" the file in Notepad++, but I'm still getting an error. Is there a way to validate the EOL state of a file? Thanks. -Stephen
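A couple of standard Linux commands will report the line-ending state directly. A minimal sketch, assuming the file is named datagroup.txt (the name is just a placeholder) and that you can check it on the BIG-IP itself or any Linux box; dos2unix may not be installed everywhere:

file datagroup.txt            # "ASCII text" = Unix (LF only); "with CRLF line terminators" = Windows
cat -A datagroup.txt | head   # Windows line endings show up as a trailing ^M$ on each line
dos2unix datagroup.txt        # converts CRLF to LF in place, if the utility is present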
Failed to execute iptable cmd: ," CMD="iptables -A SSH_ALLOW_RULES error
Hi Mates, After upgrading rSeries F5OS to 1.5.4, I observed the error below and I am unable to SSH to my F5OS machine (version 1.5.4) from the network 10.54.7.0/24. All other networks are working fine and we are able to SSH to the same F5OS machine. Is it possible the device was unable to add this entry to iptables? Do we have to manually re-configure this rule?

ys-host-config[11678]: priority="Err" version=1.0 msgid=0x7001000000000062 msg="Failed to execute iptable cmd: ," CMD="iptables -A SSH_ALLOW_RULES -s 10.54.7.0/24 -p tcp -m state --state NEW --dport 22 -j ACCEPT -w &>/dev/null" ERR="EXITINFO: 4".
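If I remember correctly, iptables exit status 4 indicates a resource problem (often the xtables lock being held), so the rule may simply never have been added. Below is a sketch of how you might confirm that and, if needed, replay the exact command from the log. It assumes root shell access on the appliance; F5OS normally manages these chains itself, so treat any manual change as a temporary diagnostic workaround rather than the proper fix:

# returns silently if the rule is already present, otherwise appends it
iptables -C SSH_ALLOW_RULES -s 10.54.7.0/24 -p tcp -m state --state NEW --dport 22 -j ACCEPT \
  || iptables -A SSH_ALLOW_RULES -s 10.54.7.0/24 -p tcp -m state --state NEW --dport 22 -j ACCEPT
# list the chain to verify
iptables -L SSH_ALLOW_RULES -n --line-numbers | grep 10.54.7.0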
Kerberos Authentication Failing for Exchange 2016 Behind F5 Cloud WAF
Hi Team, We're running Microsoft Exchange Server 2016 CU24 on Windows Server 2019 and have enabled Kerberos (Negotiate) authentication because NTLM is deprecated in F5 Cloud WAF.

Environment summary:
- Exchange DAG setup: 4 servers in the primary site, 2 in the DR site
- Active Directory: Windows Server 2019
- F5 component: Cloud WAF (BIG-IP F5 Cloud Edition) handling inbound HTTPS traffic
- Namespaces: mail.domain.lk, webmail.domain.lk, autodiscover.domain.lk
- Authentication configuration: Negotiate (Kerberos) with NTLM, Basic, and OAuth as fallback
- SPNs: correctly registered under the ASA (Alternate Service Account) computer account
- Certificate: SAN includes mail, webmail, and autodiscover

Current status:
- Internal domain-joined Outlook 2019 clients work without issue.
- Outlook 2016, Office 2021, and Microsoft 365 desktop apps continue to prompt for passwords.
- Internal OWA and external OWA through F5 Cloud WAF both work correctly.

Observations:
- Autodiscover XML shows <AuthPackage>Negotiate</AuthPackage> for all URLs.
- Kerberos authentication works internally, so the SPNs and ASA setup are confirmed healthy.
- Password prompts appear only when traffic passes through F5 Cloud WAF, which terminates TLS before reaching Exchange.

Suspected cause:
- F5 Cloud WAF may not support Kerberos Constrained Delegation (KCD) in the current configuration.
- TLS termination on the F5 breaks the Kerberos authentication chain.
- NTLM/Basic fallback might not be fully passed through from the WAF to the backend.

We would appreciate clarification on:
- Does F5 Cloud WAF support Kerberos Constrained Delegation (KCD) for backend Exchange 2016 authentication?
- If not, can Kerberos pass-through or secure fallback methods (NTLM/Basic) be enabled?
- What is the recommended configuration for supporting Outlook 2016 and Microsoft 365 clients when Exchange advertises Kerberos (Negotiate)?
- Is there an F5 reference configuration or iRule template for this scenario (Exchange 2016 + Kerberos)?

Thank you for your guidance.
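One way to narrow down where Negotiate breaks is to compare which authentication methods Exchange offers directly versus through the Cloud WAF. A sketch from a Linux client: the hostname comes from the namespaces above, the /EWS/Exchange.asmx path is the standard Exchange Web Services endpoint (an assumption about this deployment), and the second command assumes a curl build with SPNEGO/GSS-API support plus a valid Kerberos ticket (kinit) for the full Negotiate test:

# which auth schemes are advertised through the WAF (compare against hitting Exchange directly)
curl -skI https://mail.domain.lk/EWS/Exchange.asmx | grep -i www-authenticate
# attempt a full SPNEGO handshake end to end
curl -v --negotiate -u : https://mail.domain.lk/EWS/Exchange.asmx

If Negotiate disappears from the WWW-Authenticate list only when going through the WAF, that points at the WAF stripping or not proxying the header rather than at the Exchange/ASA side.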
SFP Port LEDs Blinking Yellow
Hi, I upgraded F5OS to version 1.8 and the tenant software to 17.5.1.3. The upgrade went smoothly and both the active and standby devices successfully handled traffic after the upgrade. However, I have noticed that the SFP port LEDs on both the primary and secondary devices are blinking yellow. Both devices appear to be operating normally, but I would like to confirm whether this is expected behavior. Could the yellow blinking indicate a speed mismatch, or should the LEDs be green under normal conditions?
iCall - All New Event-Based Automation System
The community has long requested the ability to effect change to the BIG-IP configuration from some external factor, be it an iRules trigger, a process or system failure event, or even monitor results. Well, rest easy folks: among the many features arriving with BIG-IP version 11.4 is iCall, a completely new event-based, granular internal automation system. iCall gives you comprehensive control over the BIG-IP configuration, leveraging the TMSH control plane and seamlessly integrating the data plane as well.

Components

The iCall system has three components: events, handlers, and scripts. At a high level, an event is "the message," some named object that has context (key/value pairs), scope (pool, virtual, etc.), origin (daemon, iRules), and a timestamp. Events occur when specific, configurable, pre-defined conditions are met. A handler initiates a script and is the decision mechanism for event data. There are three types of handlers:

- Triggered - reacts to a specific event
- Periodic - reacts to a timer
- Perpetual - runs under the control of a daemon

Finally, there are scripts. Scripts perform the action as a result of the event and handler. The scripts are TMSH Tcl scripts organized under the /sys icall section of the system.

Flow

A basic flow for an iCall configuration starts with an event, followed by a handler kicking off a script. A more complex example might start with a periodic handler that kicks off a script that generates an event that another handler picks up, kicking off another script. These flows are shown in the image below.

A Brief Example

We'll release a few tech tips on the development aspect of iCall in the coming weeks, but in the interim here's a prime use case. Often when an event happens, an operator will want to grab a tcpdump of the interesting traffic occurring during that event, but the reaction time isn't quick enough. Enter iCall! First, configure an alert in /config/user_alert.conf for a pool member down:

alert local-http-10-2-80-1-80-DOWN "Pool /Common/my_pool member /Common/10.2.80.1:80 monitor status down" {
    exec command="tmsh generate sys icall event tcpdump context { { name ip value 10.2.80.1 } { name port value 80 } { name vlan value internal } { name count value 20 } }"
}

You'll need one of these stanzas for each pool member you want to monitor in this way. Next, create the iCall script:

modify script tcpdump {
    app-service none
    definition {
        set date [clock format [clock seconds] -format "%Y%m%d%H%M%S"]
        foreach var { ip port count vlan } {
            set $var $EVENT::context($var)
        }
        exec tcpdump -ni $vlan -s0 -w /var/tmp/${ip}_${port}-${date}.pcap -c $count host $ip and port $port
    }
    description none
    events none
}

Finally, create the iCall handler to trigger the script:

sys icall handler triggered tcpdump {
    script tcpdump
    subscriptions {
        tcpdump {
            event-name tcpdump
        }
    }
}

Ready. Set. Go! That's one example of a triggered handler. We have many more examples of perpetual and periodic handlers in the codeshare, with several use cases for your immediate use and testing. Get ready to jump aboard the iCall automation/orchestration train!
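As a small companion to the triggered example above, here is roughly what a periodic handler looks like. This is a sketch only, in the same config-stanza style; the script name, pool name, and five-minute interval are all made up for illustration. It simply logs a pool's availability every 300 seconds:

sys icall script pool_status_log {
    definition {
        # read the pool's status object and log its availability via syslog (local0 goes to /var/log/ltm)
        foreach obj [tmsh::get_status ltm pool my_pool] {
            set state [tmsh::get_field_value $obj status.availability-state]
            exec logger -p local0.info "iCall periodic check: my_pool is $state"
        }
    }
}
sys icall handler periodic pool_status_log {
    interval 300
    script pool_status_log
}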
XC Distributed Cloud and how to keep the Source IP from changing with customer edges (CE)!
The best option will always be for the application to stop tracking users by something as primitive as an IP address. Sometimes the issue is in a load balancer or ADC sitting behind the XC RE: if its persistence is based on the client source IP address, change it - on BIG-IP, for example, to Cookie or Universal persistence, or to SSL session-based persistence if the load balancer is doing no decryption and works only at the TCP/UDP layer. Because an XC Regional Edge (RE) has many IP addresses it can use to connect to the origin servers, adding a CE for legacy apps is a good option to keep the source IP from changing for the same client's HTTP requests during a session/transaction. Before going through this article, I recommend reading the links below:

F5 Distributed Cloud – CE High Availability Options: A Comparative Exploration | DevCentral
F5 Distributed Cloud - Customer Edge | F5 Distributed Cloud Technical Knowledge
Create Two Node HA Infrastructure for Load Balancing Using Virtual Sites with Customer Edges | F5 Distributed Cloud Technical Knowledge

RE to CE cluster of 3 nodes

The new SNAT prefix option under the origin pool ensures that no matter which CE connects to the origin pool, the origin sees the same IP address. Be careful: if the prefix contains more than a single /32 IP, the client may again get a different IP address each time. A single IP may in turn cause "inet port exhaustion" (that is what it is called on F5 BIG-IP) if there are too many connections to the origin server, so be careful - the SNAT prefix option was added primarily for that use case. There was an older option called "LB source IP persistence", but better not to use it, as it was not as optimized and clean as this one.

RE to 2 CE nodes in a virtual site

The same SNAT pool option is not allowed for a virtual site made of two standalone CEs. For this we can use the ring hash algorithm. Why does this work? Well, as Kayvan explained to me, the hashing of the origin takes the CE name into account, so the same origin under two different CEs will get the same ring hash, and the same source IP address will be sent to the same CE to access the origin server. This will not work for a single 3-node CE cluster, as all three nodes have the same name. I have seen 503 errors when ring hash is enabled under the HTTP LB, so enable it only under the XC route object and the origin pool attached to it!

CE hosted HTTP LB with Advertise policy

In XC with a CE you can do HA with a 3-node CE cluster using Layer 2 HA based on VRRP and ARP, or Layer 3 based on BGP, which works with a 3-node CE cluster or with 2 CEs in a virtual site and gives you control options like weight, AS prepend, or local preference at the router level. For Layer 2 I will just mention that you need to allow 224.0.0.18 (the VRRP multicast address) if you are migrating from BIG-IP HA, and that XC selects one CE to hold the active IP (you can see which one in the XC logs); at the moment that selection, for some reason, cannot be controlled. If a CE cannot reach the origin servers in the origin pool, it should stop advertising the HTTP LB IP address through BGP. For these options, Deploying F5 Distributed Cloud (XC) Services in Cisco ACI - Layer Three Attached Deployment is a great example, as it shows ECMP BGP; with the BGP attributes you can easily select one CE to be active and processing connections, so that just one IP address is seen by the origin server. When a CE receives traffic, by default it prefers to send it to the origin itself, because "Local Preferred" is enabled under the origin pool by default.
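Whichever of these options you use, it is worth confirming at the origin that the source IP really does stay stable. A quick sketch on a Linux origin server; it assumes the service listens on 443 and, for the second command, an nginx-style access log path, so adjust both for your environment:

# the peer-address column shows which source IPs the CE is currently using
ss -tn state established '( sport = :443 )'
# count the distinct client IPs seen in the last 200 access-log lines
tail -n 200 /var/log/nginx/access.log | awk '{print $1}' | sort | uniq -c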
In public clouds like AWS/Azure, a cloud-native LB is added in front of the 3-node CE cluster, and the solution there is simple: just modify that LB to have persistence. Public clouds do not support ARP, so forget about Layer 2 and play with the native LB that load balances between the CEs 😉

CE on Public Cloud (AWS/Azure/GCP)

When deploying on a public cloud, the CE can be deployed in two ways. One is through the XC GUI, adding the AWS credentials, but honestly this way does not give you much freedom, as you cannot deploy 2 CEs, make a virtual site out of them, and add a cloud LB in front of them - it will always be a 3-CE cluster with a preconfigured cloud LB that uses all 3 nodes. Using the newer "clickops" method is much better: https://docs.cloud.f5.com/docs-v2/multi-cloud-network-connect/how-to/site-management/deploy-site-aws-clickops - or use Terraform in manual mode with aws as the provider (not XC/volterra, as that is the same as the XC GUI deployment): https://docs.cloud.f5.com/docs-v2/multi-cloud-network-connect/how-to/site-management/deploy-aws-site-terraform . This way you can make the cloud LB use just one CE, give it some client persistence, or, if traffic comes from RE to CE, implement the two-CE-node virtual site, as shown in the sketch below. As I mentioned, there is no Layer 2 ARP support in the public cloud with a 3-node cluster, but there are NAT policies (https://docs.cloud.f5.com/docs-v2/multi-cloud-network-connect/how-tos/networking/nat-policies), although I have not tried them myself to comment on them. Hope you enjoyed this article!
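For the AWS case, source-IP stickiness on the existing NLB target group is a one-liner with the AWS CLI. A sketch only: the target group ARN and the "ce-nodes" name are placeholders, and it assumes the CE nodes are registered in that target group:

aws elbv2 modify-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:eu-west-1:111111111111:targetgroup/ce-nodes/0123456789abcdef \
  --attributes Key=stickiness.enabled,Value=true Key=stickiness.type,Value=source_ip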
Search for pools with no members
Hi everyone, hope you are all doing well! We currently have a cleanup script that runs monthly that looks for pools with no members in DNS, removes them, deletes the pools, then deletes the VIP. My colleague was able to get the removal of the nodes from the pool, but I am seeing that the script is not picking up the "empty" pools with no members. I would like to figure out how to find pools with no members (a count of 0) so the script can pick them up and delete them. I found the following commands, but they still show pools with "downed" members.

Started with this:

tmsh show ltm pool field-fmt | grep -E "ltm pool|available-members 0" --after-context 1 | grep -v 'total-members'

Refined it to no-members:

tmsh show ltm pool field-fmt | grep -E "ltm pool|available-members 0" --after-context 1 | grep -v 'no-members'

But it is still showing me pools with members. I am sorta new to this and running out of ideas! If anyone could help, that would be great! Thanks
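One approach is to key off the pool configuration rather than the runtime status, since the members stanza only appears in a pool's config when at least one member is actually defined. A rough sketch, assuming it runs from the bash shell on the BIG-IP and that everything lives in a single partition (add recursive/partition handling if not):

#!/bin/bash
# print pools that have no members configured at all (not just pools whose members are down)
for pool in $(tmsh list ltm pool one-line | awk '{print $3}'); do
    if ! tmsh list ltm pool "$pool" one-line | grep -qF ' members {'; then
        echo "empty pool: $pool"
    fi
done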
/mgmt/toc - not possible to launch rest api rest browser
Hi, could you please help with how to launch the REST API browser? I am attaching the internals below. Thanks in advance. After providing my admin credentials, the following response is returned:

{
    "code": 400,
    "message": "URI path /mgmt/logmein.html not registered. Please verify URI is supported and wait for /available suffix to be responsive.",
    "referer": "https://1.2.3.4/mgmt/toc",
    "restOperationId": 13525870,
    "kind": ":resterrorresponse"
}

Platform ID: Z101
Platform Name: BIG-IP
Tenant Software Version: BIG-IP v17.1.3 (Build 0.20.11) Bundle, r5600
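That error message ("wait for /available suffix to be responsive") usually shows up while the REST framework behind /mgmt is not fully up. A few checks worth trying from the BIG-IP bash shell - the password is a placeholder, and the restart line is commented out because it briefly interrupts iControl REST:

curl -sku admin:'<password>' https://localhost/mgmt/shared/echo   # REST framework health; should return a small JSON document
bigstart status restjavad restnoded                               # both daemons should report "run"
# bigstart restart restjavad restnoded                            # usual next step if either daemon is stuck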