deployment
How to get a F5 BIG-IP VE Developer Lab License
(Applies to BIG-IP TMOS Edition) To help operational teams improve their development for the BIG-IP platform, F5 offers a low-cost developer lab license. This license can be purchased from your authorized F5 vendor. If you do not have an F5 vendor and you are in either Canada or the US, you can purchase a lab license online:
CDW BIG-IP Virtual Edition Lab License
CDW Canada BIG-IP Virtual Edition Lab License
Once completed, the order is sent to F5 for fulfillment and your license will be delivered shortly afterward via e-mail. F5 is investigating ways to improve this process.
To download the BIG-IP Virtual Edition, log in to my.f5.com (a separate login from DevCentral) and navigate to the Downloads card under the Support Resources section of the page. Select BIG-IP from the product group family and then the current version of BIG-IP. You will be presented with a list of options; at the bottom, select the Virtual Edition option with one of the following descriptions:
For VMware Fusion, Workstation, or ESX/i: Image fileset for VMware ESX/i Server
For Microsoft Hyper-V: Image fileset for Microsoft Hyper-V
For KVM RHEL/CentOS: Image file set for KVM Red Hat Enterprise Linux/CentOS
Note: There are also 1 Slot versions of the above images for cases where a second boot partition is not needed for in-place upgrades. These images include _1SLOT- in the image name instead of ALL.
The guides below will help get you started with F5 BIG-IP Virtual Edition for development on VMware Fusion, AWS, Azure, VMware, or Microsoft Hyper-V. These guides follow standard practices for installing in production environments; performance recommendations can be relaxed for the lower-use, non-critical needs of development or lab environments. Similar to driving a tank, use your best judgement.
Deploying F5 BIG-IP Virtual Edition on VMware Fusion
Deploying F5 BIG-IP in Microsoft Azure for Developers
Deploying F5 BIG-IP in AWS for Developers
Deploying F5 BIG-IP in Windows Server Hyper-V for Developers
Deploying F5 BIG-IP in VMware vCloud Director and ESX for Developers
Note: F5 Support maintains authoritative Azure, AWS, Hyper-V, and ESX/vCloud installation documentation. VMware Fusion is not an official F5-supported hypervisor, so DevCentral publishes the Fusion guide with the help of our Field Systems Engineering teams.
F5 DNS Express
Hi Brothers, I have an issue with F5 DNS Express. I configured it as a secondary DNS server with AD as the primary DNS server. The first issue is that when the F5 receives a notification from AD, it does not send a zone transfer request back to AD. The second issue is that when I configure the F5 listener as the preferred DNS server on a user machine, that machine cannot join the domain, even though all required records are available on the F5. If you have any recommendations, please advise. Br,
Questions on R-Series 5800s
Hello... Trying to bring up a new 5800.
1. What is the recommendation for port-channels? Can we create a port-channel between two different pipelines? For example, ports 3.0, 4.0, 5.0, and 6.0 are in pipeline 1 and ports 7.0, 8.0, 9.0, and 10.0 are in pipeline 2. Can we create a LAG with 3.0 and 7.0? We plan on using a 10G link for HA and were planning to convert 6.0 to 10G and connect it between the two r-Series units.
2. When deploying a tenant, should we just leave Provisioning as "Recommended" instead of "Advanced" and use 18 vCPUs if we plan on having just one tenant? Not sure how much virtual disk size is recommended. Any recommendation for virtual disk size?
3. If we want to add an additional tenant, is it best to leave the tenant at 14 vCPUs, or can we change it later, and what is the impact? Just a restart of the existing tenant?
F5 i-series Guests to r-series tenants migration
Hi All, I have two i-Series 11900s with 4 guests on each: 1 LTM, 1 GTM, 1 WAF, and 1 APM. There is HA between the guests. I am working on a migration plan to r-Series 10900 and have two options:
Option 1: HA method: Here, I will replace the i-Series device that has the standby guests with the r-Series device, then establish HA between the active i-Series and the r-Series and sync the configuration. Then I will make the r-Series active. Then I will replace the newly standby i-Series device with the second r-Series and establish HA with the first r-Series. This is a lengthy approach, but it has the benefit of fast rollback in case I face any issues, and there will be no changes to the management IPs.
Option 2: UCS method: In this method I will create a replica of the existing guests on the r-Series tenants using UCS files from the i-Series guests. This setup will be isolated from the production network. During the maintenance window, I will disconnect the cables from the i-Series and connect them to the r-Series boxes. This way I need to use different management IPs while building the replica setup, and during the migration I will change the management IPs back to the ones that were on the i-Series. Note that the existing devices are connected to Cisco ACI.
Let me hear your thoughts and suggestions.
Accelerate Application Deployment on Google Cloud with F5 NGINXaaS
Introduction
In the push for cloud-native agility, infrastructure teams often face a crossroads: settle for basic, "good enough" load balancing, or take on the heavy lifting of manually managing complex, high-performance proxies. For those building on Google Cloud (GCP), this compromise is no longer necessary. F5 NGINXaaS for Google Cloud represents a shift in how we approach application delivery. It isn’t just NGINX running in the cloud; it is a co-engineered, fully managed, on-demand service that lives natively within the GCP ecosystem. This integration allows you to combine the advanced traffic control and programmability NGINX is known for with the effortless scaling and consumption model of a SaaS offering, in a platform-first way. By offloading the "toil" of lifecycle management—like patching, tuning, and infrastructure provisioning—to F5, teams can redirect their energy toward modernizing application logic and accelerating release cycles. In this article, we’ll dive into how this synergy between F5 and Google Cloud simplifies your architecture, from securing traffic with integrated secret management to gaining deep operational insights through native monitoring tools.
Getting Started with NGINXaaS for Google Cloud
The transition to a managed service begins with a seamless onboarding experience through the Google Cloud Marketplace. By leveraging this integrated path, teams can bypass the manual "toil" of traditional infrastructure setup, such as patching and individual instance maintenance. The deployment process involves:
Marketplace Subscription: Subscribe to the service directly to ensure unified billing and support.
Network Connectivity: Set up the essential VPC and Network Attachments that allow NGINXaaS to communicate securely with your backend resources.
Provisioning: Launch a dedicated deployment that provides enterprise-grade reliability while maintaining a cloud-native feel.
Secure and Manage SSL/TLS in F5 NGINXaaS for Google Cloud
Security is a foundational pillar of this co-engineered service, particularly regarding traffic encryption. NGINXaaS simplifies the lifecycle of SSL/TLS certificates by providing a centralized way to manage credentials. Key security features include:
Integrated Secrets Management: Works natively with Google Cloud services to handle sensitive data like private keys and certificates securely.
Proxy Configuration: Shows how to set up a Google Cloud proxy Network Load Balancer to handle incoming client traffic.
Credential Deployment: Upload and manage certificates directly within the NGINX console to ensure all application endpoints are protected by robust encryption.
Enhancing Visibility in Google Cloud with F5 NGINXaaS
Visibility is no longer an afterthought but a native component of the deployment, providing high-fidelity telemetry without separate agents.
Native Telemetry Export: By linking your Google Cloud Project ID and configuring Workload Identity Federation (WIF), metrics and logs are pushed directly to Google Cloud Monitoring.
Real-Time Dashboards: The observability demo walks through using the Metrics Explorer to visualize critical performance data, such as active HTTP connection counts and response rates.
Actionable Logging: Integrated Log Analytics lets you use the Logs Explorer to isolate events and troubleshoot application issues within a single toolset, streamlining your operational workflow.
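To make the SSL/TLS discussion above more concrete, the snippet below is a minimal, illustrative NGINX configuration of the kind you might apply once certificates have been uploaded. The certificate paths, upstream name, and backend addresses are placeholders rather than values from the NGINXaaS console; consult the official F5 NGINXaaS for Google Cloud documentation for the exact workflow.

    # Hypothetical upstream pointing at backend instances reachable
    # through the VPC Network Attachment (addresses are placeholders).
    upstream app_backend {
        zone app_backend 64k;
        server 10.10.0.11:8080;
        server 10.10.0.12:8080;
    }

    server {
        listen 443 ssl;
        server_name app.example.com;

        # Certificate and key as referenced after upload; the paths shown
        # here are assumptions for illustration only.
        ssl_certificate     /etc/nginx/ssl/app.example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/app.example.com.key;
        ssl_protocols       TLSv1.2 TLSv1.3;

        location / {
            # Terminate TLS here and proxy plain HTTP to the backend pool.
            proxy_pass http://app_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

In practice the backend addresses would be the private IPs or DNS names of your application instances in the connected VPC, and the TLS material would come from the credential management workflow described above.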
Whether you are just beginning your transition to the cloud or fine-tuning a sophisticated microservices architecture, F5 NGINXaaS provides the advanced availability, scalability, security, and visibility capabilities necessary for success in the Google Cloud environment.
Conclusion
The integration of F5 NGINXaaS for Google Cloud represents a significant advantage for organizations looking to modernize their application delivery without the traditional overhead of infrastructure management. By shifting to this co-engineered, managed service, teams can bring together advanced NGINX performance and the native agility of the Google Cloud ecosystem. Through the demonstrations provided in this article, we’ve highlighted how you can:
Accelerate Onboarding: Move from Marketplace subscription to a live deployment in minutes using Network Attachments.
Fortify Security: Centralize SSL/TLS management within the NGINX console while leveraging Google Cloud's robust networking layer.
Maximize Operational Intelligence: Harness deep, real-time observability by piping telemetry directly into Google Cloud Monitoring and Logging.
Resources
Accelerating app transformation with F5 NGINXaaS for Google Cloud
F5 NGINXaaS for Google Cloud: Delivering resilient, scalable applications
Common UCS Load Failure Scenarios on F5 BIG-IP Platforms
User Configuration Set (UCS) archives are central to F5 BIG-IP configuration management, supporting backup, system recovery, and platform migrations. In production environments, UCS restores are often performed during maintenance windows or critical recovery scenarios—making reliability and predictability essential. While the UCS mechanism is robust, restore operations can fail for a variety of reasons, ranging from encryption dependencies and platform constraints to feature provisioning and licensing differences. Many of these failures are not random; they follow well-defined patterns that can be identified and mitigated with the right preparation. This article consolidates commonly encountered UCS load failure scenarios, explains their underlying causes, and outlines recommended resolution strategies based on publicly documented behavior, operational best practices, and official F5 knowledge base guidance. The goal is to help administrators recognize issues quickly, reduce trial-and-error during restores, and plan UCS operations with greater confidence—especially during platform transitions such as VE to rSeries or VELOS.
Pre-Flight Validation Checklist (VE → rSeries / VELOS)
A scripted sketch of these checks appears after the failure scenarios below.
Master Key and Encryption State: run tmsh list sys crypto master-key and ensure the target system's master key state is compatible with the source system when restoring encrypted objects.
Platform Feature Requirements: enable required features such as network-failover prior to restore.
Provisioned Modules: confirm all referenced modules are licensed and provisioned.
ASM / Advanced WAF Considerations: validate ASM MySQL database health; consider importing ASM policies separately if issues arise.
Licensing and Resource Alignment: ensure licensed core counts align with platform resources. Note: issues may surface only after a reboot.
Active Configuration Operations: run tmsh show sys mcp-state to confirm the configuration system is in a ready state before loading.
Management and Routing Conflicts: check for duplicate management routes.
Maintenance Window Awareness: perform restores on standby units during maintenance windows.
Fast Triage Guide
Logs to check: /var/log/ltm, /var/log/restjavad.0.log, /var/log/asm
Review error keywords such as: master key, encrypted, license, MySQL, duplicate, failover
Key Failure Scenarios
Master Key or Encryption Mismatch. Resolution: rekey or recreate encrypted objects.
Corrupted or Incomplete UCS File. Resolution: use a known good backup.
Encrypted UCS Without Passphrase. Resolution: provide the correct passphrase.
Platform or Version Mismatch. Resolution: enable required features or adjust the configuration.
Simultaneous Configuration Actions. Resolution: wait for other tasks to complete.
ASM / MySQL Issues. Resolution: import ASM policies separately or repair the database.
FIPS Key / Certificate Issues. Resolution: migrate FIPS keys first.
Resource or Licensing Mismatch. Resolution: align licensing and resources.
Configuration Conflicts. Resolution: remove conflicting objects.
Unexpected Failover or Service Restart. Resolution: restore during maintenance windows.
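As referenced in the pre-flight checklist above, the following is a minimal sketch of how the validation commands and log triage could be strung together on the target system before attempting a restore. It only reads state; the tmsh commands come from the checklist, while the keyword list and output file name are illustrative choices.

    #!/bin/bash
    # Hypothetical UCS pre-flight helper: gathers state referenced in the
    # checklist above so it can be reviewed before 'tmsh load sys ucs'.
    OUT=/var/tmp/ucs-preflight-$(date +%Y%m%d-%H%M%S).txt

    {
      echo "== Master key state =="
      tmsh list sys crypto master-key

      echo "== MCP state (no other config operations in progress) =="
      tmsh show sys mcp-state

      echo "== Provisioned modules =="
      tmsh list sys provision

      echo "== License summary =="
      tmsh show sys license | head -n 20

      echo "== Recent restore-related log keywords =="
      grep -iE "master key|encrypted|license|mysql|duplicate|failover" \
          /var/log/ltm /var/log/restjavad.0.log /var/log/asm 2>/dev/null | tail -n 50
    } > "$OUT"

    echo "Pre-flight summary written to $OUT"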
VIPRION Hardware Swap Considerations (Blade / Chassis Transitions)
When restoring UCS files during VIPRION hardware swaps (for example, 4340 → 4450 or 4460 blades), additional manual validation is required due to chassis-level and blade-specific configuration differences.
Files Requiring Manual Review and Adjustment
bigip_emergency.conf
.cluster.conf
bigip.conf
bigip_base.conf
Post-Modification Requirement
After making any manual edits to configuration files, reload the configuration: tmsh load sys config
Conclusion
Proper UCS restore preparation reduces downtime and operational risk, particularly during platform migrations, hardware swaps, or disaster recovery scenarios. Most UCS load failures are predictable and preventable when encryption state, licensing, platform features, and configuration dependencies are validated upfront. Treat UCS restores as controlled change operations, not simple file imports, and you will dramatically improve recovery outcomes across BIG-IP platforms.
Practical Mapping Guide - F5 BIG-IP TMOS Modules to Feature-Scoped CNFs
Introduction
BIG-IP TMOS and BIG-IP CNFs solve similar problems with very different deployment and configuration models. In TMOS you deploy whole modules (LTM, AFM, DNS, etc.), while in CNFs you deploy only the specific features and data plane functions you need as cloud-native components.
Modules vs feature-scoped services
In TMOS, enabling LTM gives you a broad set of capabilities in one module: virtual servers, pools, profiles, iRules, SNAT, persistence, etc., all living in the same configuration namespace. In CNFs, you deploy only the features you actually need, expressed as discrete custom resources: for example, a routing CNF for more precise TMM routing, a firewall CNF for policies, or specific CRs for profiles and NAT, rather than bringing the entire LTM or AFM module along "just in case".
Configuration objects vs custom resources
TMOS configuration is organized as objects under modules (for example: LTM virtual, LTM pool, security firewall policy, net vlan), managed via tmsh, the GUI, or iControl REST. CNFs expose those same capabilities through Kubernetes Custom Resources (CRs): routes, policies, profiles, NAT, VLANs, and so on are expressed as YAML and applied with GitOps-style workflows, making individual features independently deployable with a version history.
Coarse-grained vs precise feature deployment
On TMOS, deploying a use case often means standing up a full BIG-IP instance with multiple modules enabled, even if the application only needs basic load balancing with a single HTTP profile and a couple of firewall rules. With CNFs, you can carve out exactly what you need: for example, only a precise TMM routing function plus the specific TCP/HTTP/SSL profiles and security policies required for a given application or edge segment, reducing blast radius and resource footprint.
BIG-IP Next CNF Custom Resources (CRs) extend Kubernetes APIs to configure the Traffic Management Microkernel (TMM) and Advanced Firewall Manager (AFM), providing declarative equivalents to BIG-IP TMOS configuration objects. This mapping ensures functional parity for L4-L7 services, security, and networking while enabling cloud-native scalability. The focus here covers core mappings, with examples for iRules and Profiles.
What are Kubernetes Custom Resources
Think of the Kubernetes API as a menu of objects you can create: Pods (containers), Services (networking), Deployments (replicas). CRs add new menu items for your unique needs. You first define a CustomResourceDefinition (CRD) (the blueprint), then create CR instances (actual objects using that blueprint).
How CRs Work (Step-by-Step)
Create CRD - Define the new object type (e.g., "F5BigFwPolicy")
Apply CRD - The Kubernetes API server registers it, adding new REST endpoints like /apis/f5net.com/v1/f5bigfwpolicies
Create CR - Write YAML instances of your new type and apply them with kubectl
Controllers Watch - Custom operators/controllers react to CR changes, creating real resources (Pods, ConfigMaps, etc.)
A hedged sketch of this flow follows the Networking and NAT mappings below.
CR examples
Networking and NAT Mappings
Networking CRs handle interfaces and routing, mirroring TMOS VLANs, Self-IPs, and NAT mappings.
CNF CR | TMOS Object(s) | Purpose
F5BigNetVlan | VLANs, Self-IPs, MTU | Interface config
F5BigCneSnatpool | SNAT Automap | Source NAT
Example: An F5BigNetVlan CR sets VLAN tags and IPs and is applied via kubectl apply, propagating to TMM much like tmsh create net vlan.
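As referenced above, the sketch below illustrates the CR workflow end to end: a YAML instance of a CNF custom resource applied with kubectl. The apiVersion/kind pairing follows the F5BigNetVlan example discussed here, but the spec field names (name, tag, interfaces, selfip_v4s, prefixlen_v4) are assumptions for illustration; the authoritative schema is in the CNF custom resources reference linked at the end of this article.

    # Hypothetical F5BigNetVlan instance: field names under spec are
    # illustrative only and must be checked against the official CR schema.
    apiVersion: k8s.f5net.com/v1
    kind: F5BigNetVlan
    metadata:
      name: internal-vlan
      namespace: cnf-gateway
    spec:
      name: internal          # VLAN name as seen by TMM
      tag: 1234               # 802.1Q tag, analogous to tmsh 'net vlan ... tag'
      interfaces:
        - "1.1"
      selfip_v4s:
        - 10.20.30.1
      prefixlen_v4: 24

Applied with kubectl apply -f internal-vlan.yaml, the CNF controller watching this CR type renders the equivalent TMM configuration, the declarative counterpart of running tmsh create net vlan on TMOS.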
Security and Protection Mappings
Protection CRs map to AFM and DoS modules for threat mitigation.
CNF CR | CR Purpose | TMOS Object(s) | Purpose
F5BigFwPolicy | Applies industry-standard firewall rules to TMM | AFM Policies | Stateful ACL filtering
F5BigFwRulelist | Consists of an array of ACL rules | AFM Rule lists | Stateful ACL filtering
F5BigSvcPolicy | Allows creation of Timer Policies and attaches them to firewall rules | AFM Policies | Stateful ACL filtering
F5BigIpsPolicy | Allows you to filter inspected traffic (matched events) by various properties, such as the inspection profile's host (virtual server or firewall policy), traffic properties, inspection action, or inspection service | AFM IPS policies | Enforce IPS
F5BigDdosGlobal | Configures the TMM Proxy Pod to protect applications and the TMM Pod from Denial of Service / Distributed Denial of Service (DoS/DDoS) attacks | Global DoS/DDoS | Device DoS protection
F5BigDdosProfile | The per-context DDoS CRD configures the TMM Proxy Pod to protect applications from DoS/DDoS attacks | DoS/DDoS per profile | Per-context DoS profile protection
F5BigIpiPolicy | Each policy contains a list of categories and actions that can be customized, with the IPRep database being common; feed-list-based policies can also customize the configured IP addresses | IP Intelligence Policies | Reputation-based blocking
Example: An F5BigFwPolicy references rule-lists and zones, equivalent to creating a security firewall policy in tmsh.
Traffic Management Mappings
Traffic CRs proxy TCP/UDP and support ALGs, akin to LTM Virtual Servers and Pools.
CNF CR | TMOS Object(s) | Purpose
Part of the Apps CRs: Virtual Server (Ingress/HTTPRoute/GRPCRoute) | LTM Virtual Servers | Load balancing
Part of the Apps CRs: Pool (Endpoints) | LTM Pools | Node groups
F5BigAlgFtp | FTP Profile/ALG | FTP gateway
F5BigDnsCache | DNS Cache | Resolution/caching
Example: A GRPCRoute CR defines backend endpoints and profiles, mapping to tmsh create ltm virtual.
Profiles Mappings with Examples
Profiles CRs customize traffic handling and are referenced by Traffic Management CRs, directly paralleling TMOS profiles.
CNF CR | TMOS Profile(s) | Example Configuration Snippet
F5BigTcpSetting | TCP Profile | spec: { defaultsFrom: "tcp-lan-optimized" } tunes congestion like a tmsh-created optimized TCP profile
F5BigUdpSetting | UDP Profile | spec: { idleTimeout: 300 } sets timeouts
F5BigUdpSetting | HTTP Profile | spec: { http2Profile: "http2" } enables HTTP/2
F5BigClientSslSetting | ClientSSL Profile | spec: { certKeyChain: { name: "default" } } for TLS termination
F5PersistenceProfile | Persistence Profile | Enables source/destination persistence
Profiles attach to Virtual Servers/Ingresses declaratively, e.g., spec.profiles: [{ name: "my-tcp", context: "tcp" }].
iRules Support and Examples
CNF fully supports iRules for custom logic, integrated into Traffic Management CRs such as Virtual Servers or FTP/RTSP. iRules execute in TMM, preserving TMOS scripting.
Example CR Snippet:
apiVersion: k8s.f5net.com/v1
kind: F5BigCneIrule
metadata:
  name: cnfs-dns-irule
  namespace: cnf-gateway
spec:
  iRule: >
    when DNS_REQUEST {
      if { [IP::addr [IP::remote_addr] equals 10.10.1.0/24] } {
        cname cname.siterequest.com
      } else {
        host 10.20.20.20
      }
    }
This mapping facilitates TMOS-to-CNF migrations using tools like the F5 BIG-IP Controller for CR generation from UCS configs. For full CR specs, refer to the official docs.
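To complement the iRule snippet above with a security-side example, here is a rough sketch of what a firewall policy CR could look like. The spec layout (a rule list with action, ipProtocol, source, and destination blocks) is an assumption for illustration only; verify field names against the published F5BigFwPolicy schema in the CR reference linked below.

    # Hypothetical F5BigFwPolicy: rule and field names are illustrative and
    # must be validated against the official CRD schema before use.
    apiVersion: k8s.f5net.com/v1
    kind: F5BigFwPolicy
    metadata:
      name: edge-acl
      namespace: cnf-gateway
    spec:
      rule:
        - name: allow-dns
          action: accept
          ipProtocol: udp
          source:
            addresses:
              - "10.10.1.0/24"
          destination:
            ports:
              - "53"
        - name: drop-all
          action: drop
          ipProtocol: any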
The updated full list of CNF CRs per release can be found here: https://clouddocs.f5.com/cnfs/robin/latest/cnf-custom-resources.html
Conclusion
In this article, we examined how BIG-IP Next CNF redefines the TMOS module model into a feature-scoped, cloud-native service architecture. By mapping TMOS objects such as virtual servers, profiles, and security policies to their corresponding CNF Custom Resources, we illustrated how familiar traffic management and security constructs translate into declarative Kubernetes workflows.
Related content
F5 Cloud-Native Network Functions configurations guide
BIG-IP Next Cloud-Native Network Functions (CNFs)
CNF DNS Express
BIG-IP Next for Kubernetes CNFs - DNS walkthrough
BIG-IP Next for Kubernetes CNFs deployment walkthrough | DevCentral
BIG-IP Next Edge Firewall CNF for Edge workloads | DevCentral
Modern Applications - Demystifying Ingress solutions flavors | DevCentral
Visibility for Modern Telco and Cloud-Native Networks
Introduction
As operators transition from 5G to 6G-ready cloud-native architectures, their networks are becoming more disaggregated, dynamic, and intelligent. Functions increasingly span virtualized 5G cores, AI-enhanced 6G control domains, MEC platforms, and hyperscale distributed edge clouds. Traditional visibility tools built for static or centralized topologies can no longer keep pace. Telco and security architects now face the challenge of maintaining real-time, end-to-end observability across highly adaptive, multi-vendor, and multi-cloud infrastructures where workloads may move between 5G and 6G service fabrics in milliseconds. BIG-IP eBPF Observability (EOB) meets this challenge with a high-performance, kernel-level telemetry framework built on Linux's eBPF technology, delivering visibility that scales from 6G microcells to data center cores without adding operational overhead.
Why Legacy Visibility Approaches Fail in 5G/6G Environments
Tools like SPANs, TAPs, and broker appliances were effective for static topologies. But in cloud-native 5G and 6G deployments, where AI dynamically places network functions across cores, edges, and reconfigurable slices, they break down. Common limitations include:
No reliable physical tap points in cloud-distributed or satellite-connected nodes
SPAN mirroring constrained by virtual and container limits
Encryption and service mesh layers hiding the real traffic context
Vendor probes exposing only proprietary NFs, limiting multi-domain visibility
These gaps fragment insights across the control plane (SBI, PFCP, NGAP, F1-AP) and the AI-driven management planes now emerging within 6G intelligent network layers (INNs).
eBPF: Core Technology for Adaptive Observability
eBPF (extended Berkeley Packet Filter) allows sandboxed programs to execute inside the Linux kernel, offering in-situ visibility into packet, process, and syscall activity at near-zero latency. Its key advantages for 5G and 6G include:
Safe, programmable telemetry without kernel module changes
Full observability across containers, namespaces, and network functions
Ultra-low-latency insights ideal for closed-loop automation and AI inference workflows
Because 6G networks depend on autonomous observability loops, eBPF forms the telemetry foundation that lets those loops sense and adapt to conditions in real time.
BIG-IP EOB Architecture and Data Model
EOB leverages lightweight containerized sensors orchestrated via Kubernetes or OpenShift across core, edge, and RAN domains. Its data model captures:
Raw packets and deep forensic traces
Dynamic service topologies reflecting 5G/6G slice relationships
Plaintext records of TLS 1.3 and service mesh sessions for deep insight
Metadata for telco protocols (SBI, PFCP, DNS, HTTP/2, F1-AP, NGAP) and emerging 6G access protocols
Rich CNFlow telemetry correlating control- and user-plane activity
Telemetry then streams to a message bus or observability fabric, ready for real-time analytics, SIEM integration, or AI-based fault prediction systems, which is vital for 6G's full autonomy vision.
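As a small illustration of the kernel-level visibility described above (and not of EOB's own sensors, which are deployed as managed containers), the following generic bpftrace one-liner counts outbound TCP connection attempts per process directly in the kernel. This is the same class of in-situ, low-overhead instrumentation that eBPF-based telemetry builds on.

    # Generic eBPF illustration (requires bpftrace on a Linux host):
    # attach a kprobe to tcp_connect and count connection attempts per
    # process name; counters print when the trace is stopped with Ctrl-C.
    sudo bpftrace -e 'kprobe:tcp_connect { @connects[comm] = count(); }'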
Core-to-Edge-to-AI Deployment Model
EOB spans the entire 5G/6G topology, from centralized cores to AI-powered edge clouds:
5G Core Functions: AMF, SMF, NRF, PCF, UDM, UDR
6G Expansion: Cloud-native service networks running AI-orchestrated CNFs and reconfigurable RAN domains
Edge & MEC: Low-latency compute nodes supporting URLLC and industrial AI
Open RAN: O-RU, O-DU, O-CU, and AI-RAN management functions
A central controller enforces data routing and observability policies, making EOB a unifying visibility plane for multi-band, multi-vendor networks that may straddle both 5G and 6G service layers.
Restoring Visibility in Encrypted and AI-Automated Planes
Modern telco cores encrypt almost everything, including control messages used for orchestration and identity management. EOB restores inspection capability by extracting essential 5G/6G identifiers and slice attributes from encrypted flows, enabling real-time anomaly detection. Capabilities include:
Extraction of SUPI, SUCI, GUTI, slice/service IDs, and new AI-Service Identifiers emerging in 6G
Node-level contextual threat detection across AMF, SMF, and cognitive NFs
Direct integration with security products and AI threat analytics for real-time prevention
This removes "blind spots" that AI-automated security systems would otherwise misinterpret or miss entirely.
Let's go over a demo by TomCreighton showing how BIG-IP EOB enhances visibility.
Ecosystem and Integration
BIG-IP EOB integrates seamlessly with telco cloud environments:
Kubernetes and Red Hat OpenShift: Certified operator framework, integrated with Red Hat's bpfman for large-scale eBPF management
AI/ML Pipelines: Telemetry exported to AIOps, CI/CD, and orchestration frameworks, key for autonomous fault resolution in 6G
While we have highlighted multiple use cases for service providers, EOB also extends these capabilities to the enterprise sector:
Application and data monitoring
Security and policy assurance
User experience monitoring
Cloud-native and infrastructure monitoring
Conclusion
BIG-IP EOB enables a future-proofed observability framework that supports the continuous evolution from 5G to 6G:
Unified, vendor-neutral visibility across physical, virtual, and AI-driven domains
Granular kernel-level insight without probe sprawl
Control- and user-plane correlation for real-time SLA and security validation
Encrypted and service-mesh traffic observability
A telemetry foundation for 6G, autonomous, and cognitive networking
EOB forms the visibility fabric of the self-intelligent network—turning real-time telemetry into adaptive intelligence for secure, resilient, and autonomous telco operations.
Related Content
F5 eBPF Observability: Real-Time Traffic Visibility Dashboard Demo
F5 eBPF Observability: Kernel-Level Observability for Modern Applications
eBPF: It's All About Observability
eBPF: Revolutionizing Security and Observability in 2023
LTM logs to Cortex XSIAM are unreadable
Hello, I am trying to forward F5 LTM logs to Cortex XSIAM. We have an on-prem broker for XSIAM, and the logs are forwarded from there to the cloud. The data flow is: F5 -- Broker -- XSIAM. We are able to see the logs, but for some reason they're unreadable. We followed the steps outlined in this link on the official Cortex site. Please, any idea what could be causing this?