How to configure BGP peering for an F5 HA pair?
Hi, I've set up BGP peering between an F5 and a router and have run into a problem: we can't use a floating IP as the BGP neighbor address (https://support.f5.com/csp/article/K62454350), so we have to peer from the non-floating self IPs instead. The problem is that the router then can't decide which path is correct when it sends response traffic back to the F5 — the active unit or the standby unit — because BGP gives it no view of the F5's HA status. I tried adding AS-path prepending on the unit that is currently standby, and that works, but as soon as the standby takes over it fails again. Is there a recommended way to deploy BGP with an F5 HA pair? Thank you.
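Not an authoritative answer, but the pattern that usually gets around this (sketched below with placeholder AS numbers and addresses, and assuming the Advanced Routing modules are enabled) is to peer from each unit's own self IP and let the routes, rather than the peering session, follow the HA state: turn on route advertisement for the virtual addresses, so only the unit where the traffic group is active installs the kernel routes that BGP then redistributes. The router always prefers the active unit because only that unit is advertising the service prefixes.

```
# On each unit (all names/addresses are placeholders; the exact
# route-advertisement options vary by TMOS version):
tmsh modify net route-domain 0 routing-protocol add { BGP }
tmsh modify ltm virtual-address 203.0.113.10 route-advertisement enabled

# In the ZebOS shell (imish) on each unit, peer from that unit's
# own self IP and redistribute the kernel routes created by
# virtual-address route advertisement:
imish
router bgp 65001
 neighbor 198.51.100.1 remote-as 65000
 redistribute kernel
```

With this in place the standby unit keeps its BGP session up but simply has nothing to advertise, so no prepend juggling is needed across failovers.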
Failing over a virtual F5 configuration to another location using the Zerto restore process

We are preparing a disaster-recovery process that uses Zerto to copy a server holding our virtual F5 configuration to another server at another facility. What needs to be done in terms of moving license keys and changing the MAC address so that the F5 configuration is recognized?
301a Study Guide and Lab

Hello, I have an old link to the 301a and 301b exam prep materials on clouddocs.f5.com, but it seems to be missing now. Any idea where it was moved? This is the link I had: https://clouddocs.f5.com/training/community/f5cert/html/class7/modules/module1.html

Thanks, Joanne
IP blacklist check with iRules

I have a list of IP addresses in a data group called "BlackListIP", and it is defined as type "String" instead of "Address", like this:

"name": "1.1.1.1, 2.2.2.2, 3.3.3.3, 4.4.4.4, 5.5.5.5, 6.6.6.6"

And I have an iRule that looks up the client IP address and should block it if it matches the list above:

```tcl
when CLIENT_ACCEPTED {
    if { [class match [IP::client_addr] contains BlackListIP] } {
        reject
    }
}
```

Say my client_addr is 1.1.1.1. Logically this should work, but after testing, this iRule doesn't behave as expected. Is there anything I missed here? Please shed some light. Thanks.
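One likely culprit (stated as an assumption, since only part of the data-group definition is shown above): a String data group with all six IPs in one comma-separated record has a single member — the whole long string — so `class match [IP::client_addr] contains BlackListIP` tests whether "1.1.1.1" contains that entire string, which it never does; a String type also never gets address semantics. A minimal sketch of the usual approach, assuming the data group is rebuilt as type Address with one record per IP:

```tcl
# Assumes an *Address*-type data group named BlackListIP with one
# record per address (1.1.1.1, 2.2.2.2, ...), not a single
# comma-separated string record.
when CLIENT_ACCEPTED {
    # For address data groups, "equals" performs an address match
    # (including subnet records, if any are defined).
    if { [class match [IP::client_addr] equals BlackListIP] } {
        reject
    }
}
```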
Kubernetes integration stopped working

Hello, I'm working on a Kubernetes integration. I had a working solution, but it stopped working and I'm out of ideas — maybe someone can help me figure out how to debug it.

In restjavad-audit.0.log I now see only entries like this:

```
[I][130][29 Mar 2018 10:55:10 UTC][ForwarderPassThroughWorker] {"user":"local/admin","method":"POST","uri":"http://localhost:8100/mgmt/shared/authn/login","status":200,"from":"192.168.100.94"}
```

I used to see a lot of other entries here, including audit entries for the creation of pools and nodes.

When I create an Ingress, I can see one of the nodes trying to communicate with the F5:

```
13:01:11.718375 IP 192.168.100.94.56322 > 192.168.2.109.https: Flags [R.], seq 1686613753, ack 3495393884, win 851, options [nop,nop,TS val 0 ecr 5336149], length 0
13:01:11.718408 IP 192.168.2.109.https > 192.168.100.94.56322: Flags [.], ack 0, win 365, options [nop,nop,TS val 5361881 ecr 5739246], length 0
13:01:11.718836 IP 192.168.100.94.56322 > 192.168.2.109.https: Flags [R], seq 1686613753, win 0, length 0
13:01:11.718904 IP 192.168.100.94.56330 > 192.168.2.109.https: Flags [S], seq 1179852131, win 26720, options [mss 1336,sackOK,TS val 5764938 ecr 0,nop,wscale 7], length 0
13:01:11.718929 IP 192.168.2.109.https > 192.168.100.94.56330: Flags [S.], seq 307718409, ack 1179852132, win 14480, options [mss 1460,sackOK,TS val 5361882 ecr 5764938,nop,wscale 7], length 0
```

I'm using controller version 1.4.2 and the pod looks just fine:

```
[root@kuberm ~] kubectl describe pods k8s-bigip-ctlr-deployment-f4b469d69-z9f5m -n kube-system
Name:           k8s-bigip-ctlr-deployment-f4b469d69-z9f5m
Namespace:      kube-system
Node:           kubern2/192.168.100.94
Start Time:     Thu, 29 Mar 2018 12:55:08 +0200
Labels:         app=k8s-bigip-ctlr
                pod-template-hash=906025825
Annotations:
Status:         Running
IP:             10.32.0.5
Controlled By:  ReplicaSet/k8s-bigip-ctlr-deployment-f4b469d69
Containers:
  k8s-bigip-ctlr:
    Container ID:  docker://f8d33b328d4a3703fb6ea4b5e0bf23342fd1f714022ac172fdd7bae4ccdab220
    Image:         f5networks/k8s-bigip-ctlr:1.4.2
    Image ID:      docker-pullable://docker.io/f5networks/k8s-bigip-ctlr@sha256:bd0d7cb4ae54a92d5d3eec9c2e705665a8452e69423eb5ff091e23e669ed072c
    Port:
    Host Port:
    Command:       /app/bin/k8s-bigip-ctlr
    Args:
      --bigip-username=$(BIGIP_USERNAME)
      --bigip-password=$(BIGIP_PASSWORD)
      --bigip-url=192.168.2.109
      --bigip-partition=kubernetes
      --use-secrets=true
      --resolve-ingress-names=LOOKUP
    State:          Running
      Started:      Thu, 29 Mar 2018 12:55:10 +0200
    Ready:          True
    Restart Count:  0
    Environment:
      BIGIP_USERNAME:  Optional: false
      BIGIP_PASSWORD:  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from bigip-ctlr-serviceaccount-token-qtlqc (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  bigip-ctlr-serviceaccount-token-qtlqc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  bigip-ctlr-serviceaccount-token-qtlqc
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age  From               Message
  ----    ------                 ---- ----               -------
  Normal  Scheduled              8m   default-scheduler  Successfully assigned k8s-bigip-ctlr-deployment-f4b469d69-z9f5m to kubern2
  Normal  SuccessfulMountVolume  8m   kubelet, kubern2   MountVolume.SetUp succeeded for volume "bigip-ctlr-serviceaccount-token-qtlqc"
  Normal  Pulled                 8m   kubelet, kubern2   Container image "f5networks/k8s-bigip-ctlr:1.4.2" already present on machine
  Normal  Created                8m   kubelet, kubern2   Created container
  Normal  Started                8m   kubelet, kubern2   Started container
```

My Ingress looks like this:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: kube-system
  annotations:
    virtual-server.f5.com/ip: "192.168.220.242"
    virtual-server.f5.com/partition: "kubernetes"
    kubernetes.io/ingress.class: "f5"
spec:
  backend:
    serviceName: nginx
    servicePort: 80
```

I've also tried it against another box running 13.1 with no success; my box is on 12.1.3.1. Do you have any idea how to debug this further?

Regards, Piotr
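One observation on the capture above, offered only as a debugging hypothesis: the node appears to reset the HTTPS connection shortly after it is set up and immediately retry, which can point to a TLS trust problem toward the BIG-IP rather than a controller logic bug. If your controller version supports it, and assuming the BIG-IP is still using its default self-signed device certificate, certificate verification can be disabled temporarily with the controller's `--insecure` flag to confirm or rule this out:

```yaml
# Deployment args sketch — for debugging only; --insecure disables
# certificate verification of the BIG-IP's management endpoint.
args:
  - --bigip-username=$(BIGIP_USERNAME)
  - --bigip-password=$(BIGIP_PASSWORD)
  - --bigip-url=192.168.2.109
  - --bigip-partition=kubernetes
  - --insecure=true
```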
Resetting pool statistics using the Python SDK on LTM

Using Python, I can grab the stat counters with something like:

```python
packetsIn = poolstats.entries.get('serverside.pktsIn')['value']
curConns = poolstats.entries.get('serverside.curConns')['value']
```

How can I clear these counters so they reset to 0?
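In case it helps anyone arriving here: the stats object has no documented per-counter write, but the tmsh `reset-stats` command can be driven over iControl REST via the `/mgmt/tm/util/bash` endpoint. A sketch using plain `requests` rather than the f5-common-python SDK — the hostname, credentials, and pool name are placeholders:

```python
import json


def reset_pool_stats_payload(pool_name):
    """Build the /mgmt/tm/util/bash body that runs tmsh reset-stats."""
    return {
        "command": "run",
        "utilCmdArgs": "-c 'tmsh reset-stats ltm pool {}'".format(pool_name),
    }


def reset_pool_stats(host, user, password, pool_name):
    # Imported lazily so the payload builder stays dependency-free.
    import requests

    resp = requests.post(
        "https://{}/mgmt/tm/util/bash".format(host),
        auth=(user, password),
        verify=False,  # typical for the default self-signed cert
        data=json.dumps(reset_pool_stats_payload(pool_name)),
        headers={"Content-Type": "application/json"},
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    print(reset_pool_stats_payload("my_pool"))
```

After the call, re-reading `poolstats` should show the counters back at 0.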
Rapid failover/failback problem in AWS

We experienced a network-based failover on our F5 pair in AWS; both units are running 12.1.3.5. The logs from the secondary show it detected a connectivity problem with the primary and took over:

```
Jul 19 16:26:49 f5bigip-2 notice sod[6127]: 010c007e:5: Not receiving status updates from peer device /Common/f5bigip-1.mydomain.com (10.1.2.39) (Disconnected).
```

When this occurred, the primary released its traffic group and AWS moved the floating IP to the secondary. Almost immediately, the primary was detected as healthy again:

```
Jul 19 16:26:49 f5bigip-2 notice sod[6127]: 010c007f:5: Receiving status updates from peer device /Common/f5bigip-1.mydomain.com (10.1.2.39) (Online)
```

This triggered a failback to the primary; however, the floating IP stayed on the secondary. My theory is that the rapid failover/failback caused a problem for the `ec2:AssignPrivateIpAddresses` API call to EC2 that is responsible for shifting the floating IP between AWS instances. I have opened cases with both F5 and AWS for troubleshooting, but I'm curious whether anyone has run into this before.
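For context on the mechanism involved: in AWS the floating address moves because the newly active unit calls `AssignPrivateIpAddresses` with reassignment allowed, re-homing the secondary private IP onto its own ENI. A boto3 sketch of that call — the ENI ID and address are placeholders, and this illustrates the API, not the F5 failover extension's actual code:

```python
def build_reassign_request(eni_id, floating_ips):
    """Parameters for EC2 AssignPrivateIpAddresses that re-home a
    secondary private IP onto this instance's ENI, even if it is
    currently assigned to the peer (AllowReassignment=True)."""
    return {
        "NetworkInterfaceId": eni_id,
        "PrivateIpAddresses": list(floating_ips),
        "AllowReassignment": True,
    }


def reassign_floating_ips(eni_id, floating_ips):
    # Imported lazily so the request builder has no dependencies.
    import boto3

    ec2 = boto3.client("ec2")
    return ec2.assign_private_ip_addresses(
        **build_reassign_request(eni_id, floating_ips)
    )


if __name__ == "__main__":
    print(build_reassign_request("eni-0123456789abcdef0", ["10.1.2.100"]))
```

If the two units issue this call nearly back-to-back during a rapid failover/failback, whichever call lands last wins, which would match the symptom of the IP staying on the secondary.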
/stats returning JSON nested objects instead of array?

I noticed that requesting a list of virtuals returns a JSON array, while the stats for the same collection return nested objects. Is this intentional?

```
https://hostname/mgmt/tm/ltm/virtual        << returns a JSON array
https://hostname/mgmt/tm/ltm/virtual/stats  << returns JSON nested objects
```

Details (tweaked to protect the innocent):

```
https://hostname/mgmt/tm/ltm/virtual?$select=destination,name
```

```json
{
  "kind": "tm:ltm:virtual:virtualcollectionstate",
  "selfLink": "https://localhost/mgmt/tm/ltm/virtual?$select=destination%2Cname&ver=12.1.2",
  "items": [
    {
      "name": "vs_vlan1",
      "destination": "/Common/10.15.15.0:0"
    },
    {
      "name": "vs_vlan2",
      "destination": "/Common/any:0"
    }
  ]
}
```

The above output stores the virtuals (and their details) in an array. However, /stats changes that to nested objects:

```
https://hostname/mgmt/tm/ltm/virtual/stats?$select=destination,tmName
```

```json
{
  "kind": "tm:ltm:virtual:virtualcollectionstats",
  "selfLink": "https://localhost/mgmt/tm/ltm/virtual/stats?$select=destination%2Cname%2CtmName&ver=12.1.2",
  "entries": {
    "https://localhost/mgmt/tm/ltm/virtual/~Common~vs_vlan1/stats": {
      "nestedStats": {
        "entries": {
          "destination": { "description": "10.15.15.0:any" },
          "tmName": { "description": "/Common/vs_vlan1" }
        }
      }
    },
    "https://localhost/mgmt/tm/ltm/virtual/~Common~vs_vlan2/stats": {
      "nestedStats": {
        "entries": {
          "destination": { "description": "any:any" },
          "tmName": { "description": "/Common/vs_vlan2" }
        }
      }
    }
  }
}
```
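Whatever the rationale, the keyed `entries` shape is easy to normalize client-side when an array is what you want. A stdlib-only sketch that flattens a nested stats document like the one above into a list of flat dicts (the sample data mirrors the response shown; the fields present are whatever `$select` returned):

```python
def flatten_stats(stats_doc):
    """Turn the /stats 'entries' mapping into a list of flat dicts,
    one per object, keeping each entry's URL key as 'link'."""
    rows = []
    for link, wrapper in stats_doc.get("entries", {}).items():
        entries = wrapper["nestedStats"]["entries"]
        row = {"link": link}
        for field, value in entries.items():
            # Scalar stats carry either 'description' or 'value'.
            row[field] = value.get("description", value.get("value"))
        rows.append(row)
    return rows


sample = {
    "kind": "tm:ltm:virtual:virtualcollectionstats",
    "entries": {
        "https://localhost/mgmt/tm/ltm/virtual/~Common~vs_vlan1/stats": {
            "nestedStats": {
                "entries": {
                    "destination": {"description": "10.15.15.0:any"},
                    "tmName": {"description": "/Common/vs_vlan1"},
                }
            }
        }
    },
}

print(flatten_stats(sample))
```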
HSL logging from APM

Hey, I have configured this iRule to send a syslog message to a remote server with the username and IP that a user gets once they start Network Access. I see the log written to the LTM log file, but I see no syslog traffic leaving the F5:

```tcl
when CLIENT_ACCEPTED {
    ACCESS::restrict_irule_events disable
    set hsl [HSL::open -proto UDP -pool PA-IL-SyslogUID]
}
when HTTP_REQUEST {
    if { [HTTP::uri] starts_with "/isession?sess=" } {
        after 5000 {
            log local0. "VPN started for [ACCESS::session data get session.logon.last.username] from IP [IP::client_addr] assigned client IP [ACCESS::session data get session.assigned.clientip]"
        }
        HSL::send $hsl "Network Access username:[ACCESS::session data get session.logon.last.username] client-ip:[IP::client_addr] vpn-ip:[ACCESS::session data get session.assigned.clientip]"
    }
}
```
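When HSL traffic never appears on the wire, it can help to first rule out the network path to the collector independently of the iRule. A small stdlib-only sketch (run from any host that can reach the collector; the address and port are placeholders) that builds and sends a minimal RFC 3164-style test message you can watch for with tcpdump on the collector side:

```python
import socket


def build_syslog_message(pri, tag, text):
    """Minimal RFC 3164-style payload: <PRI>TAG: TEXT."""
    return "<{}>{}: {}".format(pri, tag, text).encode("ascii")


def send_test_message(host, port, message):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, (host, port))


if __name__ == "__main__":
    msg = build_syslog_message(134, "hsl-test", "Network Access test message")
    # send_test_message("192.0.2.10", 514, msg)  # placeholder collector
    print(msg)
```

If a hand-crafted datagram arrives but HSL messages do not, the problem is on the BIG-IP side (pool health, egress VLAN, or where in the iRule the send happens) rather than in the collector.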