devops (23972 Topics)

Inquiry About the "ast-api-discovery" Repository
Hello everyone, I've been exploring the AST tool (application-study-tool) and noticed there's a related repository at ast-api-discovery that caught my attention. Unfortunately, when I try accessing it, I receive a 404 error. I was really looking forward to diving into that tool as well. Could anyone let me know if the repository has been moved or if there are any updates on its availability? Any guidance or alternative links would be greatly appreciated. Thanks in advance for your help!

APM for banner and cert
I need to create an APM policy that will do the following:

- present an advisory banner
- request a certificate
- extract the UPN and send it over to Active Directory
- send the user to the backends

Looking for articles that explain the different steps in the process so I can understand it better.
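As background for the "extract the UPN" step: the UPN travels in the client certificate's Subject Alternative Name as an otherName entry (OID 1.3.6.1.4.1.311.20.2.3). In APM this is normally pulled from a session variable rather than parsed by hand, but the parsing idea can be illustrated in a few lines of Python. The function name and sample text below are hypothetical, purely for illustration:

```python
import re

def extract_upn(san_text):
    """Pull a UPN (otherName entry) out of a Subject Alternative Name
    string as printed by `openssl x509 -text`. Hypothetical helper."""
    # Recent OpenSSL versions print the entry as "othername: UPN::user@domain"
    match = re.search(r"UPN::?\s*([^\s,]+)", san_text)
    return match.group(1) if match else None

sample = "X509v3 Subject Alternative Name: othername: UPN::jdoe@corp.example.com, DNS:host.example.com"
print(extract_upn(sample))  # prints jdoe@corp.example.com
```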
LTM Rule, iRule, or Brute Force Configuration to Limit URL Access

Hi,

One of the applications integrated with BIG-IP has a specific requirement, as detailed below: URLs under the subdomain https://fduat.fed.com need to limit access to only 10 times per day for each IP. Kindly check the feasibility and provide feedback. The URLs are as follows:

/kyc-details/details
/kyc-details/personal-detail
/kyc-details/review-details
/payment-details
/vkyc
/vkyc/success
/summary
/payment-details/payment

Please confirm whether this requirement can be achieved using a Brute Force configuration, LTM rule, or iRule.
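For reference, a per-IP daily cap like this is usually approached with the iRule session table rather than a brute-force profile (which counts failed logins, not page views). The following is only an illustrative configuration sketch, not a tested rule; the key name, limit, and response are placeholders to adapt:

```
when HTTP_REQUEST {
    set limited_paths {
        /kyc-details/details
        /kyc-details/personal-detail
        /kyc-details/review-details
        /payment-details
        /vkyc
        /vkyc/success
        /summary
        /payment-details/payment
    }
    if { [lsearch -exact $limited_paths [HTTP::path]] >= 0 } {
        set key "daylimit:[IP::client_addr]"
        # Increment the per-client counter; the entry expires 24h after creation
        set hits [table incr $key]
        table lifetime $key 86400
        if { $hits > 10 } {
            HTTP::respond 429 content "Daily request limit reached"
            return
        }
    }
}
```

Note this counts per source IP, so clients behind a shared NAT would share one quota.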
Integrating Hashicorp Vault with Cert Manager and F5 NGINX Ingress Controller

Overview

Managing TLS certificates manually can be tedious and error-prone. This quick-start guide simplifies the process by integrating HashiCorp Vault as a certificate authority in Kubernetes, using Jetstack's cert-manager for automation and F5 NGINX Ingress Controller for secure traffic management. With this setup, certificates are issued and renewed automatically, reducing manual effort and improving security. By the end, your Kubernetes environment will be equipped with a streamlined, hands-free TLS certificate management system.

Prerequisites

- Kubernetes Cluster (e.g., Minikube, AKS, EKS, GKE)
- NGINX Ingress Controller (Installation Guide)
- kubectl and Helm CLI

Key resource names for easy reference

Component                  | Resource Name
Vault helm release         | vault
Vault pod                  | vault-0
Vault service endpoint     | http://vault.default:8200
Domain                     | example.com
PKI role                   | example-dot-com
Kubernetes service account | issuer
Kubernetes secret          | issuer-token
Cert manager issuer        | vault-issuer
Certificate                | example-com
TLS secret                 | example-com-tls
NGINX ingress              | example-ingress

🤓 Step-by-step guide

Deploy Vault in Kubernetes

# Install vault helm chart
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
helm install vault hashicorp/vault --set "injector.enabled=false"

# Initialize and unseal vault
kubectl exec vault-0 -- vault operator init -key-shares=1 -key-threshold=1 -format=json > init-keys.json
VAULT_UNSEAL_KEY=$(cat init-keys.json | jq -r ".unseal_keys_b64[]")
kubectl exec vault-0 -- vault operator unseal $VAULT_UNSEAL_KEY

# Verify vault deployment; the result will be pod vault-0 running
kubectl get pods

# Capture the root token to log in; you should receive a success message
VAULT_ROOT_TOKEN=$(cat init-keys.json | jq -r ".root_token")
kubectl exec vault-0 -- vault login $VAULT_ROOT_TOKEN

Configure the PKI secrets engine

# Start an interactive shell in pod vault-0
kubectl exec --stdin=true --tty=true vault-0 -- /bin/sh

# Enable PKI at the default path
vault secrets enable pki

# Configure max cert validity time
vault secrets tune -max-lease-ttl=8760h pki

# Generate root cert
vault write pki/root/generate/internal \
    common_name=example.com \
    ttl=8760h

# Configure PKI endpoints
vault write pki/config/urls \
    issuing_certificates="http://vault.default:8200/v1/pki/ca" \
    crl_distribution_points="http://vault.default:8200/v1/pki/crl"

# Configure a role name
vault write pki/roles/example-dot-com \
    allowed_domains=example.com \
    allow_subdomains=true \
    max_ttl=72h

# Create a policy called "pki" for the PKI secrets engine
vault policy write pki - <<EOF
path "pki*" {
  capabilities = ["read", "list"]
}
path "pki/sign/example-dot-com" {
  capabilities = ["create", "update"]
}
path "pki/issue/example-dot-com" {
  capabilities = ["create"]
}
EOF

# Exit the vault-0 pod
exit

Configure Kubernetes authentication

# Start the interactive shell in pod vault-0
kubectl exec --stdin=true --tty=true vault-0 -- /bin/sh

# Enable Kubernetes authentication
vault auth enable kubernetes

# Configure the Kubernetes authentication method to use the location of the Kubernetes API
vault write auth/kubernetes/config \
    kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443"

# Create a Kubernetes authentication role
vault write auth/kubernetes/role/issuer \
    bound_service_account_names=issuer \
    bound_service_account_namespaces=default \
    policies=pki \
    ttl=20m

# Exit pod vault-0
exit

Deploy cert-manager

# Install Jetstack's cert-manager version 1.12.3 CRDs
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.12.3/cert-manager.crds.yaml

# Create cert-manager namespace
kubectl create namespace cert-manager

# Install with helm
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager \
    --namespace cert-manager \
    --version v1.12.3 \
    jetstack/cert-manager

# Get cert-manager pods
kubectl get pods --namespace cert-manager

Configure an issuer and generate a certificate

# Create service account in the default namespace
kubectl create serviceaccount issuer

# Create a secret definition
cat >> issuer-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: issuer-token-lmzpj
  annotations:
    kubernetes.io/service-account.name: issuer
type: kubernetes.io/service-account-token
EOF

# Create issuer secret
kubectl apply -f issuer-secret.yaml

# Get all secrets in the default namespace
kubectl get secrets

# Create variable to capture the value of issuer secret
ISSUER_SECRET_REF=$(kubectl get secrets --output=json | jq -r '.items[].metadata | select(.name|startswith("issuer-token-")).name')

# Define issuer file
cat > vault-issuer.yaml <<EOF
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: vault-issuer
  namespace: default
spec:
  vault:
    server: http://vault.default:8200
    path: pki/sign/example-dot-com
    auth:
      kubernetes:
        mountPath: /v1/auth/kubernetes
        role: issuer
        secretRef:
          name: $ISSUER_SECRET_REF
          key: token
EOF

# Create the issuer
kubectl apply -f vault-issuer.yaml

# Define a certificate named example-com
cat > example-com-cert.yaml <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-com
  namespace: default
spec:
  secretName: example-com-tls
  issuerRef:
    name: vault-issuer
  commonName: www.example.com
  dnsNames:
  - www.example.com
EOF

# Create the example-com certificate
kubectl apply -f example-com-cert.yaml

# View the example-com certificate
kubectl describe certificate.cert-manager example-com

Configure NGINX Ingress to use the certificate

# Define nginx ingress resource
cat > example-ingress.yaml <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - www.example.com
    secretName: example-com-tls
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
EOF

# Apply ingress configuration
kubectl apply -f example-ingress.yaml

Verify automatic certificate renewal

# Check whether the certificate has been created successfully and what its expiry date is
kubectl get certificate example-com -n default
kubectl get secret example-com-tls -o jsonpath="{.data['tls\.crt']}" | base64 -d | openssl x509 -noout -dates

# After the certificate renewal period has passed, check whether a new certificate has been issued
kubectl get certificate example-com -n default
kubectl get secret example-com-tls -o jsonpath="{.data['tls\.crt']}" | base64 -d | openssl x509 -noout -dates

# Ensure that the expiration date has changed, indicating that a new certificate has been issued and applied
kubectl describe certificate example-com

# Check that NGINX Ingress is serving your TLS certificate
curl -v https://www.example.com --resolve www.example.com:443:<YOUR_INGRESS_SERVICE_EXTERNAL_IP>

☕ Summary

With HashiCorp Vault and cert-manager, certificate management is fully automated, eliminating the need for manual intervention. NGINX Ingress Controller handles TLS termination efficiently, reducing overhead on backend services while ensuring secure communication. Beyond basic ingress capabilities, NGINX Ingress Controller offers advanced traffic splitting and content-based routing, making it a solid choice for production environments. Offloading TLS termination here not only enhances security but also optimizes resource utilization and performance. This integration provides a scalable, maintainable solution for managing certificates and ingress traffic in Kubernetes.
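A small optional convenience for the renewal check: the dates printed by `openssl x509 -noout -dates` can be parsed with Python's standard library to compute the remaining validity instead of comparing timestamps by eye. This helper is hypothetical and not part of the guide:

```python
import ssl
import time

def days_remaining(openssl_dates_output):
    """Parse `openssl x509 -noout -dates` output (lines like
    'notAfter=Jun  9 12:00:00 2030 GMT') and return days until expiry."""
    for line in openssl_dates_output.splitlines():
        if line.startswith("notAfter="):
            # ssl.cert_time_to_seconds understands the GMT date format
            # that OpenSSL prints for certificate validity fields
            expires = ssl.cert_time_to_seconds(line.split("=", 1)[1])
            return (expires - time.time()) / 86400
    raise ValueError("no notAfter line in input")

sample = "notBefore=Jan 01 00:00:00 2024 GMT\nnotAfter=Jan 01 00:00:00 2030 GMT"
print(round(days_remaining(sample), 1))
```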
F5-SDK Get GTM Pool Server, Virtual Server, Unused Objects, and JSON Conversion for Data Processing

I am honored to have used the f5-sdk Python library, and I think it is great! Unfortunately, it is no longer supported or updated. Recently, while helping users process data, I wrote some Python scripts to handle unused objects and output them in dictionary format. There will not be many updates later; if you are interested, you are welcome to update the code. Documentation: f5-sdk.readthedocs.io. Some of the code is shown below; you can refer to the attachment. It's a great honor to study and discuss with you all! 😇

from f5.bigip import ManagementRoot
import json
from tqdm import tqdm

"""
Author: Nathan'Asky
Date: 2025-03-14
Version: 1.0
Description: Read DNS Device ConfigInfo
"""

class GTMVisualProcessing:
    def LoadAccount(self, HostName, User, Pass):
        try:
            ConnectMgmt = ManagementRoot(HostName, User, Pass)
            return ConnectMgmt
        except Exception:
            return "Device link failed. Check network and device availability!"

    def GetWideipDic(self, ConnectMgmt, WideipsJsonPath):
        try:
            WideipData = []
            for WideipType in tqdm(['a_s', 'aaaas', 'cnames', 'mxs'], desc="Processing Wideip", unit="Member"):
                for wideip in getattr(ConnectMgmt.tm.gtm.wideips, WideipType).get_collection():
                    if '_meta_data' in wideip.raw:
                        del wideip.raw['_meta_data']
                    WideipData.append(wideip.raw)
            with open(WideipsJsonPath, 'a', encoding='utf-8') as WideipsJsonFile:
                json.dump(WideipData, WideipsJsonFile, ensure_ascii=False, indent=4)
            del WideipData
        except Exception:
            return "Data acquisition is abnormal; check the network connection!"

    def GetPoolMemberDic(self, ConnectMgmt, PoolMemberJsonPath):
        try:
            PoolMemberData = []
            PoolTypeDic = {'a_s': 'a', 'aaaas': 'aaaa', 'cnames': 'cname', 'mxs': 'mx'}
            for PoolType in tqdm(PoolTypeDic.keys(), desc="Processing PoolMembers Types", unit="type"):
                for pool in getattr(ConnectMgmt.tm.gtm.pools, PoolType).get_collection():
                    for PoolMember in getattr(getattr(ConnectMgmt.tm.gtm.pools, PoolType), PoolTypeDic.get(PoolType)).load(name=pool.raw['name']).members_s.get_collection():
                        if '_meta_data' in PoolMember.raw:
                            del PoolMember.raw['_meta_data']
                        PoolMember.raw.update(PoolName=pool.raw['name'])
                        PoolMemberData.append(PoolMember.raw)
            with open(PoolMemberJsonPath, 'a', encoding='utf-8') as PoolMemberJsonFile:
                json.dump(PoolMemberData, PoolMemberJsonFile, ensure_ascii=False, indent=4)
            del PoolMemberData
        except Exception:
            return "Data acquisition is abnormal; check the network connection!"

    def GetPoolDic(self, ConnectMgmt, PoolJsonPath):
        try:
            PoolData = []
            PoolTypeDic = {'a_s': 'a', 'aaaas': 'aaaa', 'cnames': 'cname', 'mxs': 'mx'}
            for PoolType in tqdm(PoolTypeDic.keys(), desc="Processing PoolMembers Types", unit="type"):
                for pool in getattr(ConnectMgmt.tm.gtm.pools, PoolType).get_collection():
                    if '_meta_data' in pool.raw:
                        del pool.raw['_meta_data']
                    PoolData.append(pool.raw)
            with open(PoolJsonPath, 'a', encoding='utf-8') as PoolJsonFile:
                json.dump(PoolData, PoolJsonFile, ensure_ascii=False, indent=4)
            del PoolData
        except Exception:
            return "Data acquisition is abnormal; check the network connection!"

    def GetServerDic(self, ConnectMgmt, ServerJsonPath):
        try:
            ServerData = []
            for server in tqdm(ConnectMgmt.tm.gtm.servers.get_collection(), desc="Processing Server", unit="Member"):
                if '_meta_data' in server.raw:
                    del server.raw['_meta_data']
                ServerData.append(server.raw)
            with open(ServerJsonPath, 'a', encoding='utf-8') as ServerJsonFile:
                json.dump(ServerData, ServerJsonFile, ensure_ascii=False, indent=4)
            del ServerData
        except Exception:
            return "Data acquisition is abnormal; check the network connection!"

    def GetVirtualServerDic(self, ConnectMgmt, VirtualServerJsonPath):
        try:
            VirtualServerData = []
            for server in tqdm(ConnectMgmt.tm.gtm.servers.get_collection(), desc="Processing VirtualServer", unit="Member"):
                for VirtualServer in ConnectMgmt.tm.gtm.servers.server.load(name=server.raw['name']).virtual_servers_s.get_collection():
                    if '_meta_data' in VirtualServer.raw:
                        del VirtualServer.raw['_meta_data']
                    VirtualServer.raw.update(ServerName=server.raw['name'])
                    VirtualServerData.append(VirtualServer.raw)
            with open(VirtualServerJsonPath, 'a', encoding='utf-8') as VirtualServerJsonFile:
                json.dump(VirtualServerData, VirtualServerJsonFile, ensure_ascii=False, indent=4)
            del VirtualServerData
        except Exception:
            return "Data acquisition is abnormal; check the network connection!"

    def GetMonitorDic(self, ConnectMgmt, MonitorJsonPath):
        try:
            MonitorData = []
            for MonitorType in tqdm(["bigips", "bigip_links", "externals", "firepass_s", "ftps", "gateway_icmps",
                                     "gtps", "https", "https_s", "imaps", "ldaps", "mssqls", "mysqls", "nntps",
                                     "oracles", "pop3s", "postgresqls", "radius_s", "radius_accountings",
                                     "real_servers", "scripteds", "sips", "smtps", "snmps", "snmp_links", "soaps",
                                     "tcps", "tcp_half_opens", "udps", "waps", "wmis"],
                                    desc="Processing Monitor Type", unit="type"):
                for Monitor in getattr(ConnectMgmt.tm.gtm.monitor, MonitorType).get_collection():
                    if '_meta_data' in Monitor.raw:
                        del Monitor.raw['_meta_data']
                    MonitorData.append(Monitor.raw)
            with open(MonitorJsonPath, 'a', encoding='utf-8') as MonitorJsonFile:
                json.dump(MonitorData, MonitorJsonFile, ensure_ascii=False, indent=4)
            del MonitorData
        except Exception:
            return "Data acquisition is abnormal; check the network connection!"
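A side note on the code above: every Get*Dic method repeats the same cleanup, deleting the iControl REST '_meta_data' bookkeeping key before dumping to JSON. If you extend the script, that shared pattern can be isolated into a small standalone helper (the name below is hypothetical):

```python
import json

def clean_records(records):
    """Return copies of the record dicts with the iControl REST
    '_meta_data' bookkeeping key removed."""
    cleaned = []
    for raw in records:
        record = dict(raw)          # copy so the caller's data is untouched
        record.pop('_meta_data', None)
        cleaned.append(record)
    return cleaned

records = [
    {'name': 'gtm-pool-a', '_meta_data': {'uri': 'https://localhost/mgmt/tm/gtm'}},
    {'name': 'gtm-pool-b'},
]
print(json.dumps(clean_records(records), indent=4))
```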
F5 BIG-IP deployment with Red Hat OpenShift - keeping client IP addresses and egress flows

Controlling the egress traffic in OpenShift allows the BIG-IP to be used for several use cases:

- Keeping the source IP of the ingress clients
- Providing highly scalable SNAT for egress flows
- Providing security functionalities for egress flows

Help needed with iRule (connection closed log)
Hi,

I'm troubleshooting a website which gives a "Connection Reset" error when requesting a page. I want to create an iRule which logs this, and created the following iRule on the Virtual Server:

when CLIENT_CLOSED {
    #log local0. "Connection reset by client: [IP::client_addr] - [HTTP::host] - [HTTP::uri]"
}
when SERVER_CLOSED {
    #log local0. "Connection reset by server: [IP::client_addr] - [HTTP::host] - [HTTP::uri]"
}

However, when I try to check the log files I don't see any messages shown. Is there anything wrong with the iRule?
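For what it's worth, two things in the rule above would stop it from logging: the log statements are commented out with #, and HTTP::host / HTTP::uri are not available in CLIENT_CLOSED or SERVER_CLOSED because the HTTP context is gone by the time the connection closes. A rough, untested configuration sketch of a variant that captures the values during HTTP_REQUEST instead:

```
when HTTP_REQUEST {
    # Capture request details while the HTTP context is still available
    set req_host [HTTP::host]
    set req_uri [HTTP::uri]
}
when CLIENT_CLOSED {
    if { [info exists req_host] } {
        log local0. "Connection closed by client: [IP::client_addr] - $req_host - $req_uri"
    }
}
when SERVER_CLOSED {
    if { [info exists req_host] } {
        log local0. "Connection closed by server: [IP::client_addr] - $req_host - $req_uri"
    }
}
```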
How to get a F5 BIG-IP VE Developer Lab License

(applies to BIG-IP TMOS Edition)

To assist operational teams in improving their development for the BIG-IP platform, F5 offers a low-cost developer lab license. This license can be purchased from your authorized F5 vendor. If you do not have an F5 vendor, and you are in either Canada or the US, you can purchase a lab license online:

- CDW BIG-IP Virtual Edition Lab License
- CDW Canada BIG-IP Virtual Edition Lab License

Once completed, the order is sent to F5 for fulfillment and your license will be delivered shortly after via e-mail. F5 is investigating ways to improve this process.

To download the BIG-IP Virtual Edition, log into my.f5.com (separate login from DevCentral) and navigate down to the Downloads card under the Support Resources section of the page. Select BIG-IP from the product group family and then the current version of BIG-IP. You will be presented with a list of options; at the bottom, select the Virtual-Edition option that has one of the following descriptions:

- For VMware Fusion or Workstation or ESX/i: Image fileset for VMware ESX/i Server
- For Microsoft Hyper-V: Image fileset for Microsoft Hyper-V
- For KVM RHEL/CentOS: Image file set for KVM Red Hat Enterprise Linux/CentOS

Note: There are also 1 Slot versions of the above images where a 2nd boot partition is not needed for in-place upgrades. These images include _1SLOT- in the image name instead of ALL.

The below guides will help get you started with F5 BIG-IP Virtual Edition to develop for VMware Fusion, AWS, Azure, VMware, or Microsoft Hyper-V. These guides follow standard practices for installing in production environments, and performance recommendations change based on lower-use/non-critical needs for development or lab environments. Similar to driving a tank, use your best judgement.
- Deploying F5 BIG-IP Virtual Edition on VMware Fusion
- Deploying F5 BIG-IP in Microsoft Azure for Developers
- Deploying F5 BIG-IP in AWS for Developers
- Deploying F5 BIG-IP in Windows Server Hyper-V for Developers
- Deploying F5 BIG-IP in VMware vCloud Director and ESX for Developers

Note: F5 Support maintains authoritative Azure, AWS, Hyper-V, and ESX/vCloud installation documentation. VMware Fusion is not an official F5-supported hypervisor, so DevCentral publishes the Fusion guide with the help of our Field Systems Engineering teams.

BIG-IP VE - qemu on an Apple Silicon Macbook
Hey all,

I was wondering if anyone has managed to spin up a BIG-IP VE on an Apple Silicon Macbook using qemu? I've been using this guide as a reference point: https://clouddocs.f5.com/cloud/public/v1/kvm/kvm_setup.html

However, it is obviously written from the perspective of a native x86 chipset on the host. I've tried playing around with what I believe are the relevant settings, but the guest crashes virt-manager every time I try to launch it. Don't suppose anyone has been through this pain and come out the other side successfully and could lend a hand?

Thanks!