NGINX Ingress Controller
F5 CIS -> NGINX Plus Ingress Controller Integration
Hi, I'm using F5 BIG-IP and NGINX Plus Ingress Controller (NPIC) integrated via IngressLink. While attempting to forward the client IP and port by enabling Proxy Protocol, we ran into the following issue and are seeking assistance.

Configuration:
- BIG-IP: Proxy Protocol enabled via an iRule
- NPIC: Proxy Protocol enabled by adding proxy-protocol: "true" to the ConfigMap during deployment

Issue: When the Proxy Protocol setting is added to the NPIC ConfigMap, the integration with BIG-IP breaks and routing to pods through NPIC fails. If this setting is removed, IngressLink functions normally: a Virtual Server is automatically created in the BIG-IP GUI, and responses through the NPIC path work correctly. However, in that case, direct requests to the BIG-IP Virtual Server IP fail. In other words, while the F5 CIS installation and IngressLink integration are partially functioning, access via the BIG-IP Virtual Server IP fails completely.

If anyone has experienced a similar issue or can offer insight into the cause and how to resolve it, your advice would be greatly appreciated. Any debugging tips or relevant documentation would also be a great help. Thank you.

OWASP Tactical Access Defense Series: Broken Object Level Authorization and BIG-IP APM
Addressing Broken Object Level Authorization (BOLA) vulnerabilities requires a multifaceted approach, combining robust coding practices, secure development methodologies, and powerful tools. Among these tools, F5 BIG-IP Access Policy Manager (APM) stands out as a crucial component in the arsenal of security measures. This article, the second in a series dedicated to fortifying application security, delves into the pivotal role BIG-IP APM plays in identifying, mitigating, and ultimately preventing OWASP API Security Top 10 vulnerabilities, providing developers and security professionals with a comprehensive guide to bolstering application security in the face of evolving cyber threats.

Broken Object Level Authorization

BOLA is one of the most common and severe vulnerabilities within APIs and is closely related to Insecure Direct Object References (IDOR). Starting with the basics: what is object level authorization? It is an access control mechanism that validates which user has access to a specific endpoint and what actions they may perform. BOLA and IDOR refer to situations where endpoints fail to enforce authorization rules, allowing a user to access unauthorized objects or perform unauthorized actions. The underlying weakness is that the server component fails to track client state and instead relies on parameters that can be tweaked from the client side, for example cookies or object IDs.

BOLA Example

Assume this backend directory structure:

/uploads/
  user1/
    file1.txt
    file2.txt
  user2/
    file3.txt
    file4.txt

The expected usage for user1 is:

https://example.com/viewfile?file=file1.txt

so that user1 can access file1. If the server is vulnerable to BOLA, user2 can reach user1's file like this:

https://example.com/viewfile?file=user1/file1.txt

What could help us in this situation?
Yes, we need granular endpoint authorization with proper client state tracking. That's where our lovely friend BIG-IP APM comes into the picture. Let's see how BIG-IP APM can help us.

BIG-IP APM and BOLA protection

BIG-IP APM provides API protection through its Per-Request policy, which applies granular access protection to each API endpoint.

How BIG-IP APM enhances defenses

We start by creating our Per-Request policy. This policy works differently from the per-session policy: the flow is evaluated on a per-request basis, taking into account variations throughout the session lifetime. Below are some of the key benefits:

- A wide range of authentication, SSO, and MFA mechanisms to properly identify the initiating machine or user.
- The ability to integrate with third parties to provide additional enforcement decisions based on the organization's policy.
- The ability to apply endpoint checks on the client side before session initiation.
- The ability (of BIG-IP in general) to apply custom traffic control on both sides of the traffic, client and server.

Using the BIG-IP API Protection profile

Protection profiles are an easy way to deploy both APM (Per-Request policy) and Advanced Web Application Firewall (AWAF). As a prerequisite, you need APM and AWAF licensed and provisioned.

- Use an OpenAPI Spec 2.0 file as input to the API protection profile.
- Apply different authentication methods, whether Basic or OAuth, directly from the template; once the API protection profile is created, we can customize policy elements to our needs.

Using a hybrid approach with F5 Distributed Cloud (F5 XC) + BIG-IP APM

We discussed this approach in detail in F5 Hybrid Security Architectures (Part 5 - F5 XC, BIG-IP APM, CIS, and NGINX Ingress Controller).

Stay engaged to explore the comprehensive capabilities of BIG-IP APM and how it plays a pivotal role in fortifying your security posture against these formidable threats.
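The fix described above, checking each requested object against server-tracked ownership instead of trusting the client-supplied path, can be sketched in a few lines of Python. This is a minimal illustration of the pattern with a hypothetical ownership table, not production code:

```python
# Minimal object-level authorization sketch (illustrative only -- real
# enforcement belongs in the API layer or in an APM per-request policy).
# Hypothetical server-side state: each upload is owned by exactly one user.

OWNERS = {
    "file1.txt": "user1",
    "file2.txt": "user1",
    "file3.txt": "user2",
    "file4.txt": "user2",
}

def view_file(session_user: str, requested: str) -> str:
    """Serve a file only after verifying the session user owns it.

    The BOLA bug is serving whatever path the client supplies; the fix is
    checking ownership against server-side state, not client-supplied IDs.
    """
    # Normalize away path tricks like "user1/file1.txt" or "../file1.txt"
    name = requested.replace("\\", "/").rsplit("/", 1)[-1]
    owner = OWNERS.get(name)
    if owner is None:
        return "404 Not Found"
    if owner != session_user:
        return "403 Forbidden"   # user2 asking for user1/file1.txt lands here
    return f"200 OK: contents of {name}"

print(view_file("user1", "file1.txt"))        # authorized request
print(view_file("user2", "user1/file1.txt"))  # BOLA attempt is rejected
```

The key point is that the authorization decision is derived from state the server controls (the ownership table and the authenticated session), which is exactly the client-state tracking BIG-IP APM's per-request policy adds in front of an API.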
Related Content

- F5 BIG-IP Access Policy Manager | F5
- Introduction to OWASP API Security Top 10 2023
- OWASP Top 10 API Security Risks – 2023 - OWASP API Security Top 10
- API Protection Concepts
- OWASP Tactical Access Defense Series: How BIG-IP APM Strengthens Defenses Against OWASP Top 10
- F5 Hybrid Security Architectures (Part 5 - F5 XC, BIG-IP APM, CIS, and NGINX Ingress Controller)

Certified Kubernetes Administrator (CKA) for F5 guys and gals
Summary

The Certified Kubernetes Administrator (CKA) program is an industry certification that allows you to quickly establish your skills and credibility in today's job market. Kubernetes is exploding in popularity, and with the majority of new development happening in microservices-based technologies, it's important for existing network and security teams to learn it in order to stay relevant. We recently chatted about this on the DevCentral live show.

What is CKA?

The CKA is designed by the Cloud Native Computing Foundation (CNCF), which is part of the non-profit Linux Foundation. As a result, the learning required for this certification is not skewed toward any third-party cloud, hardware, or software platform. This vendor-agnostic approach means you can easily access the technology you need to learn, build your own cluster, and pass the exam. It also means the exam, which you can take from home, does not require special knowledge that you would normally get from working in a corporate environment. Personally, I don't manage a production k8s environment, and I passed the exam with no vendor-specific knowledge required.

Why should I care about CKA?

Are you feeling "left behind" or "legacy" as the industry converts to Kubernetes? Are new terms and concepts used by developers creating silos or gaps between traditional network or security teams and developers? Are you looking to integrate your corporate security into Kubernetes environments? Do you want to stay relevant in the job market? If you answered yes to any of the above, the CKA is a great way to catch up and thrive in Kubernetes conversations at your workplace.

Sure, but how is this relevant to an F5 guy or gal?

Personally, I'm often helping devs expose their k8s apps with a firewall policy or SSL termination. NGINX Kubernetes Ingress Controller (KIC) is extremely popular with developers for managing ingress within their Kubernetes cluster.
F5 Container Ingress Services (CIS) is a popular solution for enterprises integrating k8s or OpenShift with their existing security measures. Both of these are areas where your networking and security skills are needed by your developers! As a network guy, the CKA rounded out my knowledge of Kubernetes, from networking to storage to security and troubleshooting. The figure below shows a common solution I talk customers through when they want to use KIC for managing traffic inside their cluster, but CIS to have inbound traffic traverse an enterprise firewall and integrate with Kubernetes.

How do I study for the CKA?

Personally, I took a course at A Cloud Guru, which was all the preparation I needed. Others I know took a course from Udemy and spoke highly of it, and others just read all they could from free community sources. Recently the exam fee (currently $375 USD) changed to include two practice exams from Killer.sh. These weren't available when I signed up for the exam, but my colleagues spoke extremely highly of the preparation they received from them. I recommend finding a low-cost course to study if you can, and I strongly recommend building your own cluster from the ground up as preparation for your exam. Building your own cluster (deploying Linux hosts, installing a runtime like Docker, and then installing the k8s control plane servers and nodes) is a fantastic way to appreciate the architecture and purpose of Kubernetes. Finally, take the exam (register here). I recommend paying for and scheduling your exam before you start studying; that way you'll have a deadline to motivate you!

I'm a CKA, now what?

Share your achievement on LinkedIn! Share your knowledge with colleagues. You'll be in high demand from employers, but more importantly, you'll be valuable in your workplace when you can help developers with network and security integrations on your existing platforms.
And then shoot me a note to let me know you've done it!

Announcing F5 NGINX Gateway Fabric 1.3.0 with Tracing, GRPCRoute, and Client Settings
The release of NGINX Gateway Fabric version 1.3.0 introduces plenty of highly requested features and improvements. GRPCRoutes are now supported to manage gRPC traffic, similar to the handling of HTTPRoute. The update includes new custom policies such as ClientSettingsPolicy for client request configuration and ObservabilityPolicy for enabling application tracing with OpenTelemetry support. GRPCRoute allows for efficient routing, header modification, traffic weighting, and error conversion from HTTP to gRPC. We explain how to set up NGINX Gateway Fabric to manage gRPC traffic using a Gateway and a GRPCRoute, providing a detailed example of the setup. The article also outlines how to enable tracing through the NginxProxy resource and ObservabilityPolicy, emphasizing a selective approach to tracing to avoid data overload. Additionally, ClientSettingsPolicy allows for the customization of NGINX directives at the Gateway or Route level, giving users control over certain NGINX behaviors, with the possibility of overriding Gateway defaults at the Route level. Looking ahead, the NGINX Gateway Fabric team plans to work on TLS Passthrough, IPv6, and improvements to the testing suite, while preparing for larger updates like NGINX directive customization and separation of the data and control planes. Check the end of the article to see how to get involved in the development process through GitHub and participate in bi-weekly community meetings. Further resources and links are also provided within.

F5 Hybrid Security Architectures (Part 5 - F5 XC, BIG-IP APM, CIS, and NGINX Ingress Controller)
In this example solution, we will use DevSecOps practices to deploy an AWS Elastic Kubernetes Service (EKS) cluster running the Brewz test web application, serviced by F5 NGINX Ingress Controller. To secure our application and APIs, we will deploy F5 Distributed Cloud's Web App and API Protection service as well as F5 BIG-IP Access Policy Manager and Advanced WAF. We will then use F5 Container Ingress Services and IngressLink to tie it all together.

Simplifying Kubernetes Ingress using F5 Technologies
Kubernetes Ingress is an important component of any Kubernetes environment, as you're likely trying to build applications that need to be accessed from outside of the k8s environment. F5 provides both BIG-IP and NGINX approaches to Ingress, and with that, the breadth of F5 solutions can be applied to a Kubernetes environment. This might be overwhelming if you don't have experience with all of those solutions and simply want to expose an application to start. Mark Dittmer, Sr. Product Management Engineer at F5, has put together a simple walkthrough guide for configuring Kubernetes Ingress using F5 technologies. He incorporated both BIG-IP Container Ingress Services and NGINX Ingress Controller in this walkthrough. By the end, you'll be able to securely present your k8s Service using an IP that is dynamically provisioned from a range you specify and leverage the Service Type LoadBalancer. Simple as that!

GitHub repo: https://github.com/mdditt2000/k8s-bigip-ctlr/tree/main/user_guides/simplifying-ingress

Deploying NGINX Ingress Controller with OpenShift on AWS Managed Service: ROSA
Introduction

In March 2021, Amazon and Red Hat announced the General Availability of Red Hat OpenShift Service on AWS (ROSA). ROSA is a fully-managed OpenShift service, jointly managed and supported by both Red Hat and Amazon Web Services (AWS). OpenShift offers users several different deployment models. Customers that require a high degree of customization and have the skill sets to manage their environment can build and manage OpenShift Container Platform (OCP) on AWS. Those who want to alleviate the complexity of managing the environment and focus on their applications can consume OpenShift as a service: Red Hat OpenShift Service on AWS (ROSA). The benefits of ROSA are two-fold. First, we enjoy simplified Kubernetes cluster creation using the familiar Red Hat OpenShift console, features, and tooling, without the burden of manually scaling and managing the underlying infrastructure. Second, the managed service comes with joint billing, support, and out-of-the-box integration with AWS infrastructure and services. In this article, I explore how to deploy an environment with NGINX Ingress Controller integrated into ROSA.

Deploy Red Hat OpenShift Service on AWS (ROSA)

The ROSA service may be deployed directly from the AWS console. Red Hat has done a great job of providing instructions for creating a ROSA cluster in the Installation Guide. The guide documents the AWS prerequisites, required AWS service quotas, and configuration of your AWS accounts. We run the following commands to ensure that the prerequisites are met before installing ROSA.

Verify that my AWS account has the necessary permissions:

❯ rosa verify permissions
I: Validating SCP policies...
I: AWS SCP policies ok

Verify that my AWS account has the necessary quota to deploy a Red Hat OpenShift Service on AWS cluster:

❯ rosa verify quota --region=us-west-2
I: Validating AWS quota...
I: AWS quota ok.
If cluster installation fails, validate actual AWS resource usage against https://docs.openshift.com/rosa/rosa_getting_started/rosa-required-aws-service-quotas.html

Next, I ran the following command to prepare my AWS account for cluster deployment:

❯ rosa init
I: Logged in as 'ericji' on 'https://api.openshift.com'
I: Validating AWS credentials...
I: AWS credentials are valid!
I: Validating SCP policies...
I: AWS SCP policies ok
I: Validating AWS quota...
I: AWS quota ok. If cluster installation fails, validate actual AWS resource usage against https://docs.openshift.com/rosa/rosa_getting_started/rosa-required-aws-service-quotas.html
I: Ensuring cluster administrator user 'osdCcsAdmin'...
I: Admin user 'osdCcsAdmin' created successfully!
I: Validating SCP policies for 'osdCcsAdmin'...
I: AWS SCP policies ok
I: Validating cluster creation...
I: Cluster creation valid
I: Verifying whether OpenShift command-line tool is available...
I: Current OpenShift Client Version: 4.7.19

Following the instructions to create a ROSA cluster using the rosa CLI, after about 35 minutes our deployment produces a Red Hat OpenShift cluster along with the needed AWS components.

❯ rosa create cluster --cluster-name=eric-rosa
I: Creating cluster 'eric-rosa'
I: To view a list of clusters and their status, run 'rosa list clusters'
I: Cluster 'eric-rosa' has been created.
I: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information.
I: To determine when your cluster is Ready, run 'rosa describe cluster -c eric-rosa'.
I: To watch your cluster installation logs, run 'rosa logs install -c eric-rosa --watch'.
Name: eric-rosa
…

During the deployment, we can enter the following command to follow the OpenShift installer logs and track the progress of our cluster:

> rosa logs install -c eric-rosa --watch

After the Red Hat OpenShift Service on AWS (ROSA) cluster is created, we must configure identity providers to determine how users log in to access the cluster.

What just happened?

Let's review what just happened. The installation program automatically set up the following AWS resources for the ROSA environment:

- AWS VPC subnets per Availability Zone (AZ). For single-AZ implementations, two subnets were created (one public, one private). A multi-AZ implementation makes use of three Availability Zones, with a public and private subnet in each AZ (a total of six subnets).
- OpenShift cluster nodes (EC2 instances). Three master nodes were created to cater for cluster quorum and to ensure proper failover and resilience of OpenShift, along with at least two infrastructure nodes catering for the built-in OpenShift container registry, the OpenShift router layer, and monitoring.
- Multi-AZ implementations run three master nodes and three infrastructure nodes spread across three AZs. Assuming that application workloads also run in all three AZs for resilience, this deploys three workers, which translates to a minimum of nine EC2 instances running within the customer account.
- A collection of AWS Elastic Load Balancers. Some of these load balancers provide end-user access to the application workloads running on OpenShift via the OpenShift router layer; others expose endpoints used for cluster administration and management by the SRE teams.
Source: https://aws.amazon.com/blogs/containers/red-hat-openshift-service-on-aws-architecture-and-networking/

Deploy NGINX Ingress Controller

The NGINX Ingress Operator is a supported and certified mechanism for deploying NGINX Ingress Controller in an OpenShift environment, with point-and-click installation and automatic upgrades. It works for both the NGINX Open Source-based and NGINX Plus-based editions of NGINX Ingress Controller. In this tutorial, I'll be deploying the NGINX Plus-based edition. Read Why You Need an Enterprise-Grade Ingress Controller on OpenShift for use cases that merit the use of this edition. If you're not sure how these editions are different, read Wait, Which NGINX Ingress Controller for Kubernetes Am I Using?

I install the NGINX Ingress Operator from the OpenShift console. There are numerous options you can set when configuring the NGINX Ingress Controller, as listed in our GitHub repo. Here is an example manifest:

apiVersion: k8s.nginx.org/v1alpha1
kind: NginxIngressController
metadata:
  name: my-nginx-ingress-controller
  namespace: openshift-operators
spec:
  ingressClass: nginx
  serviceType: LoadBalancer
  nginxPlus: true
  type: deployment
  image:
    pullPolicy: Always
    repository: ericzji/nginx-plus-ingress
    tag: 1.12.0

To verify the deployment, run the following commands in a terminal. As shown in the output, the manifest I used in the previous step deployed two replicas of the NGINX Ingress Controller and exposed them with a LoadBalancer service.
❯ oc get pods -n openshift-operators
NAME                                                         READY   STATUS    RESTARTS   AGE
my-nginx-ingress-controller-b556f8bb-bsn4k                   1/1     Running   0          14m
nginx-ingress-operator-controller-manager-7844f95d5f-pfczr   2/2     Running   0          3d5h

❯ oc get svc -n openshift-operators
NAME                                                        TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)                      AGE
my-nginx-ingress-controller                                 LoadBalancer   172.30.171.237   a2b3679e50d36446d99d105d5a76d17f-1690020410.us-west-2.elb.amazonaws.com   80:30860/TCP,443:30143/TCP   25h
nginx-ingress-operator-controller-manager-metrics-service   ClusterIP      172.30.50.231    <none>

With NGINX Ingress Controller deployed, we'll have an environment that looks like this:

Post-deployment verification

After the ROSA cluster was configured, I deployed an app (Hipster) in OpenShift that is exposed by NGINX Ingress Controller (by creating an Ingress resource). To use a custom hostname, we must manually change the DNS record on the Internet to point to the IP address of the AWS Elastic Load Balancer.

❯ dig +short a2dc51124360841468c684127c4a8c13-808882247.us-west-2.elb.amazonaws.com
34.209.171.103
52.39.87.162
35.164.231.54

I made this DNS change (optionally, use a local host record), and the demo app is now available on the Internet, like this:

Deleting your environment

To avoid unexpected charges, don't forget to delete your environment if you no longer need it.

❯ rosa delete cluster -c eric-rosa --watch
? Are you sure you want to delete cluster eric-rosa? Yes
I: Cluster 'eric-rosa' will start uninstalling now
W: Logs for cluster 'eric-rosa' are not available
…

Conclusion

To summarize, ROSA allows infrastructure and security teams to accelerate the deployment of Red Hat OpenShift Service on AWS. Integration with NGINX Ingress Controller provides comprehensive L4-L7 security services for the application workloads running on Red Hat OpenShift Service on AWS (ROSA).
As a developer, having your clusters as well as security services maintained by this service gives you the freedom to focus on deploying applications. You have two options for getting started with NGINX Ingress Controller: download the NGINX Open Source-based version of NGINX Ingress Controller from our GitHub repo, or, if you prefer to bring your own license to AWS, get a free trial directly from F5 NGINX.

Digital Transformation in Financial Services Using Production Grade Kubernetes Deployment
The Banking and Financial Services Industry (BFSI) requires the speed of modern application development in order to shorten the time it takes to bring value to customers. But it also faces security and regulatory requirements that tend to slow down the development and deployment process. F5 and NGINX bring the security and agile development technology, while Red Hat OpenShift provides the modern development architecture needed to achieve the speed and agility required by BFSI companies.

Quick Deployment: Deploy F5 CIS/F5 IngressLink in a Kubernetes cluster on AWS
Summary

This article describes how to deploy F5 Container Ingress Services (CIS) and F5 IngressLink with NGINX Ingress Controller on AWS quickly and predictably. All you need to begin are your AWS credentials (15-35 minutes).

Problem Statement

Deploying F5 IngressLink is quick and simple. However, to do so, you must first deploy the following resources:

- AWS resources such as a VPC, subnets, security groups, and more
- A Kubernetes cluster on AWS
- A BIG-IP instance
- F5 Container Ingress Services (CIS)
- NGINX Ingress Controller
- Application pods

Many times, we want to spin up a Kubernetes cluster with the resources listed above for a quick demo, educational purposes, experimental testing, or simply to run a command and view its output. The creation and deletion processes are, however, both error-prone and time-consuming. In addition, we don't want the overhead and cost of maintaining the cluster and keeping the instances or virtual machines running, and we'd like to tear down the deployment as soon as we're done.

Solution

We need to automate and integrate predictably the creation steps described in:

- F5 CloudDocs: to deploy BIG-IP CIS and F5 IngressLink
- NGINX documentation: to deploy NGINX Ingress Controller

Refer to my GitHub repository to perform this using either of the options below.

Disclaimer: The deployment in the GitHub repository is for demo or experimental purposes; it is not meant for production use or supported by F5 Support. For example, the Kubernetes nodes are configured with public Elastic IP addresses for easy troubleshooting access.

- kops: takes about 6-8 minutes to deploy a Kubernetes cluster on AWS. You can complete the BIG-IP CIS/IngressLink deployment in about 15 minutes.
- eksctl: takes about 25-28 minutes to deploy an EKS cluster on AWS. You can complete the BIG-IP CIS/IngressLink deployment in about 35 minutes.
If you don't need the Kubernetes resources eksctl creates, such as an EKS cluster managed by Amazon's EKS control plane and at least two subnets in different availability zones for resilience, kops is the faster option. For any bugs, please raise an issue on GitHub.
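To give a sense of the final step, this is roughly what the IngressLink custom resource that ties CIS to the NGINX Ingress Controller service looks like. The field names follow the F5 CloudDocs IngressLink examples; the address, iRule name, and labels below are illustrative and must be adjusted to your environment:

```yaml
apiVersion: cis.f5.com/v1
kind: IngressLink
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  # Virtual server address CIS will create on BIG-IP (illustrative value)
  virtualServerAddress: "10.1.10.20"
  iRules:
    - /Common/Proxy_Protocol_iRule   # forwards client IP/port toward NGINX
  selector:
    matchLabels:
      # must match the labels on the NGINX Ingress Controller Service
      app: nginx-ingress
```

Once CIS picks up this resource, it creates the virtual server on BIG-IP and load-balances inbound traffic to the NGINX Ingress Controller pods selected by the labels.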