Use of NGINX Controller to Authenticate API Calls
Authentication of API calls is essential for both API security and billing. It reduces load by dropping anonymous calls and, because every call carries an identity marker, it gives a clear view of per-user and per-group usage. NGINX Controller provides an easy way for API owners to set up authentication for calls that traverse NGINX Plus instances acting as API gateways.

What is API authentication, and how is it different from authorization? API authentication is the action by which the API gateway verifies the identity of a call by checking an identity marker (a token, credentials, etc.) carried in the call. Authorization, in turn, is usually built on top of authentication: authorization mechanisms extract the identity marker from a call and check whether that identity is allowed to make the call.

There are several common approaches to authenticating an API call:

HTTP Basic: the call carries clear-text credentials in the HTTP Authorization header, e.g. "Authorization: Basic dXNlcjpwYXNzd29yZA=="
API key: the call carries an API key somewhere in the request (multiple injection points are possible), e.g. "GET /endpoint?token=dXNlcjpwYXNzd29yZA"
OAuth: a complex open standard for access delegation. With OAuth, the API consumer obtains a cryptographically signed JWT (JSON Web Token) from an external identity provider and places it in the call. The server in turn uses a JWK (JSON Web Key) obtained from the same identity provider to verify the token signature and confirm that the data in the JWT is genuine.

As you may already know from previous articles, NGINX Controller doesn't process traffic on its own; it configures NGINX Plus instances, which run as API gateways and apply all necessary actions and policies to the traffic. The picture below shows how all these pieces work together.

Picture 1.

Controller can set up two approaches for authenticating API calls: API key based and OAuth (JWT) based. This article covers the procedures needed to configure both supported API call authentication methods. As a prerequisite, I assume you already have NGINX Plus and Controller set up, along with at least one published API (if not, please take a look at the previous article for details).

Assume an API owner has developed an API and wants to make it available to authenticated users only. The owner knows that customers have different use cases and that a different authentication method fits each use case better, so users must be authenticated either by API key or by JWT.

As discussed in the previous article, NGINX Controller abstracts the API gateway configuration into higher-level concepts for ease of configuration. The abstractions are shown in the picture below.

Picture 2.

An API definition, a gateway, and a workload group together form a data path: the way calls get accepted and where they get forwarded once all policies pass. Policies contain the necessary verifications and actions that apply to every API call traversing the data path.

Picture 3.

Among others there is an authentication policy, which authenticates API calls. As shown in picture 2, the policy applies to a published API instance, which in turn represents the data path for the traffic; therefore the policy affects every call that flows through it.

Usually each authentication method fits one use case better than the others. For example, robots and bots are better served by API keys, because obtaining a JWT from an authorization server is a complex process that requires tooling a bot or robot may not have. For humans the situation is the opposite: it is much more natural for a user to type a username and password and receive a token in exchange under the covers than to copy and paste a long API key into every call, so OAuth fits better there. The sketch below shows what each identity marker looks like on the wire.
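To make the three marker styles concrete, here is a minimal curl sketch of each; the hostname and token values are illustrative only, not taken from the lab below:

# HTTP Basic: curl encodes user:password into the Authorization header
curl -u user:password https://api.example.com/endpoint
# equivalent to sending: Authorization: Basic dXNlcjpwYXNzd29yZA==

# API key: an opaque token injected into the request, here as a query parameter
curl "https://api.example.com/endpoint?token=2b31388ccbcb4605cb2b77447120c27e"

# OAuth 2.0 / JWT: a signed token obtained from an identity provider and
# presented as a bearer token; the gateway verifies it against the provider's JWK
curl -H "Authorization: Bearer $JWT" https://api.example.com/endpoint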
The steps below cover the configuration of both supported authentication methods: API key and OAuth 2.0. Assume Company_1 has bought access to an API. As a customer, Company_1 wants to consume the API automatically with robots and also allow its employees to make requests manually. In order to authenticate employees using OAuth and robots with an API key, two different identity providers need to be configured on the Controller.

First we create a provider for the employees. For NGINX Plus to verify the JWT in a call, a JWK key is required. There are two ways to supply the JWK key for the provider: upload it as a file, or reference it as a web URL. With a URL reference, NGINX Plus automatically refreshes the key periodically. These two approaches are shown in the two pictures below.

A second provider is used to authenticate Company_1's robots with a simple API key. There are two options for creating API keys in a provider. The first is to create them manually. The second is to upload a CSV file containing the client account credentials:

user@linux$ cat api-clients.csv
CUSTOMER_1_ROBOT_1000,2b31388ccbcb4605cb2b77447120c27ecd7f98a47af9f17107f8f12d31597aa2
CUSTOMER_1_ROBOT_1001,71d8c4961e228bfc25cb720e0aa474413ba46b49f586e1fc29e65c0853c8531a
CUSTOMER_1_ROBOT_1002,fc979b897e05369ebfd6b4d66b22c90ef3704ef81e4e88fc9907471b0d58d9fa
CUSTOMER_1_ROBOT_1003,e18f4cacd6fc4341f576b3236e6eb3b5decf324552dfdd698e5ae336f181652a
CUSTOMER_1_ROBOT_1004,3351ac9615248518348fbddf11d9c597967b1e526bd0c0c20b2fdf8bfb7ae30a

The next step is to assign an authentication policy to a published API. Each authentication policy may include only one client group, so we need two policies: one to authenticate employees with JWT and one for robots with an API key. The policy for robots specifies the provider and the location where the API key is placed; besides the query string used in our example, header, cookie, and bearer-token locations are also supported. The policy for employees specifies the provider and the JWT location.

NOTE (limitation): policies in an environment are combined with an AND operand. This means an environment can have only one authentication policy; otherwise the identity requirements of both policies would need to be satisfied for a call to pass.

Once the policy for employees is set up and the configuration has been pushed to the NGINX Plus instance, call authentication is in place and we can test it. First, let us make sure unauthenticated calls are rejected. I use Postman as the API client. As you can see, a request without a JWT is rejected with a 401 "Unauthorized" response code. Next I obtain a valid token from the identity provider and insert it into the same call. A call with a valid JWT successfully passes authentication and brings a response back!

Now we can replace the authentication policy with the policy for robots and run the same tests. I emulate a robot with a console tool that cannot act as an OAuth client to retrieve a JWT, so the robot simply appends its API key to the query string. An API call without any key is blocked:

ubuntu@ip-10-1-1-7:~$ http https://prod.httpbin.internet.lab/uuid
HTTP/1.1 401 Unauthorized
Connection: keep-alive
Content-Length: 40
Content-Type: application/json
Date: Wed, 18 Dec 2019 00:12:26 GMT
Server: nginx/1.17.6

{
    "message": "Unauthorized",
    "status": 401
}

An API call with a valid API key in the query string is allowed:
ubuntu@ip-10-1-1-7:~$ http https://prod.httpbin.internet.lab/uuid?token=2b31388ccbcb4605cb2b77447120c27ecd7f98a47af9f17107f8f12d31597aa2
HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Length: 53
Content-Type: application/json
Date: Wed, 18 Dec 2019 00:12:57 GMT
Server: nginx/1.17.6

{
    "uuid": "b57f6b72-7730-4d0e-bbb7-533af8e2a4c0"
}

As you can see, even a feature as complex as API call authentication becomes much easier, yet stays flexible, when managed by NGINX Controller. Hope this overview was useful. Good luck!

The Power of F5 and NGINX
NGINX Controller 3.0 released

Since the NGINX acquisition, F5 and NGINX have been integrating teams, listening to customers, and planning our first release as a unified company. Now, we have introduced NGINX Controller 3.0, which allows you to manage apps and services across a variety of deployment models, including multi-cloud scenarios. NGINX Controller 3.0 shifts from an infrastructure-centric to an application-centric design, improving developer productivity and accelerating time-to-market for new applications. In this article, learn about core NGINX concepts and explore new NGINX documentation on AskF5.

Core NGINX concepts

Putting your Apps First (5 mins): Learn how an app-centric delivery platform can increase collaboration, decrease risk, and help you move with speed.
Load Balancing in a Multi-Cloud World (4 mins): Explore considerations for deploying your applications to multiple clouds.
Managing a Real Time API (4 mins): Learn the benefits of a lightweight API management platform.
Simplifying the Move to Microservices (5 mins): Learn about options to successfully deploy microservices and see our six-point checklist to help you determine if you're ready for a service mesh.

New NGINX documentation on AskF5

As the F5 and NGINX engineering teams are releasing products together, engineers from both Support teams and AskF5 are combining forces to produce new documentation. For example, if you want to deploy your BIG-IP LTM system with HTTP load balancing to two NGINX proxies in AWS, see Quick deployment: BIG-IP LTM system with HTTP load balancing to two NGINX Plus web servers in AWS.

More NGINX articles on AskF5:

K74544015: Removing nginx/<version> from HTTP response headers
K82655201: Host OS swap space must be disabled in NGINX Controller 2.8.0 and later
K24214052: NGINX Controller 2.0.0 installation fails when the host OS locale is not UTF-8
K64001240: Enabling NGINX Controller Agent debug logging
K06962163: Resetting the Admin account password on the NGINX Controller system
K30389284: Backing up and restoring the NGINX Controller system
K10640269: Setting nginx-controller as the default Kubernetes namespace
K51798430: Using the proxy_headers_hash_max_size and proxy_headers_hash_bucket_size directives
K03453121: Basic Authentication on the health check request
K21528053: [crit] message in error.log says '24: Too many open files'
K43542013: NGINX returns status '400 Request Header Or Cookie Too Large' or '414 Request-URI Too Large'
K48373902: [warn] message in error log: an upstream response is buffered to a temporary file while reading upstream
K84508595: Different SSL protocols for different servers
K18050039: Enabling client certificate authentication for NGINX
K95305552: How to download or update the GeoIP2 database
K68914062: Displaying a custom 502 response page
K13912623: Configuring a default 'catchall' server
K04600350: Using a common set of directives in the NGINX Plus configuration
K46613025: High Availability solutions available for NGINX Plus in Azure
K42497190: NGINX versions that support Lightweight M2M protocol
K53631303: Capturing HTTP headers of a request in a log file
K95324441: modsec_audit.log dramatically increasing

What new NGINX topics would you like to see on AskF5?
Leave your suggestions in the comments.

Using NGINX Controller API Management Module and NGINX App Protect to secure financial services API transactions
As financial services APIs (such as Open Banking) are concerned primarily with managing access to exposed banking APIs, the security aspect has always been of paramount importance. Securing financial services APIs is a vast topic, as security controls are distributed among different functions: user authentication at the Identity Provider level, user authorization and basic API security at the API Gateway level, and advanced API security at the WAF level. In this article we will explore how two NGINX products, the Controller API Management Module and App Protect, can be deployed to secure the OAuth Authorization Code flow, which is a building block of the access controls used to secure many financial services APIs.

Physical setup

The setup used to support this article comprises NGINX Controller API Management Module, providing API management functions through an instance of NGINX API Gateway, and NGINX App Protect, deployed on a Kubernetes Ingress Controller and providing advanced security for the Kubernetes-deployed demo application, Arcadia Finance. These elements are deployed and configured in an automated fashion using a GitLab CI/CD pipeline. The visualization for NGINX App Protect is provided by NAP dashboards deployed in ELK.

Note: for the purpose of supporting this lab, APM was configured as an OAuth Authorization Server supporting OpenID Connect. Its configuration, along with the implementation details of the third-party banking application (AISP/PISP), acting as an OAuth client, is beyond the scope of this article.

In an OAuth Authorization Code flow, the PSU (end user) initiates an API request through the Account or Payment Information Services Provider (AISP/PISP application), which first redirects the end user to the Authorization Server. Strong Customer Authentication is performed between the end user and the Authorization Server which, if successful, issues an authorization code and redirects the user back to the AISP/PISP application. The AISP/PISP application exchanges the authorization code for an ID Token and a JWT access token; the latter is attached as a bearer token to the initial end-user API request, which is then forwarded to the API Gateway. The API Gateway authenticates the signature of the JWT access token by downloading the JSON Web Key (JWK) from the Authorization Server, and may apply further security controls by authorizing the API call based on JWT claims and/or applying rate limits. Worth noting here is the security function of the API Gateway, which provides positive security by allowing only calls conforming to published APIs, in addition to its authentication and authorization functions. The Web Application Firewall function, represented here by NGINX App Protect deployed on the Kubernetes Ingress Controller (KIC), adds negative security protection, by checking the request against a database of attack signatures, and advanced API security, by validating the API request against the OpenAPI manifest and providing bot detection capabilities.
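As a sketch of the code-for-token exchange described above, the client's token request is a standard OAuth 2.0 call. The token endpoint path below is an assumption for illustration (the lab only references the JWK endpoint at https://bank.f5lab/f5-oauth2/v1/jwks); check the Authorization Server's metadata for the real one:

curl -X POST https://bank.f5lab/f5-oauth2/v1/token \
  -d grant_type=authorization_code \
  -d code="$AUTH_CODE" \
  -d redirect_uri="https://tpp.example.com/callback" \
  -d client_id="$CLIENT_ID" \
  -d client_secret="$CLIENT_SECRET"
# The JSON response carries the ID Token and the JWT access token; the client
# attaches the latter to the API call as "Authorization: Bearer <access_token>"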
Configuration

To configure the NGINX Controller API Management Module, first create an Application by sending a POST request to 'https://{{ my_controller }}/api/v1/services/environments/env_prod/apps' having the following body:

{
  "metadata": { "name": "app_api", "displayName": "API Application Arcadia", "description": "", "tags": [] },
  "desiredState": {}
}

Then create an Identity Provider, pointed at the Authorization Server's JWK endpoint, by sending a PUT request to 'https://{{ my_controller }}/api/v1/security/identity-providers/bank_idp' having the following body:

{
  "metadata": { "name": "bank_idp", "tags": [] },
  "desiredState": {
    "environmentRefs": [ { "ref": "/services/environments/env_prod" } ],
    "identityProvider": {
      "type": "JWT",
      "jwkFile": { "type": "REMOTE_FILE", "uri": "https://bank.f5lab/f5-oauth2/v1/jwks", "cacheExpire": "12h" }
    }
  }
}

Create an API definition by sending a PUT request to 'https://{{ my_controller }}/api/v1/services/api-definitions/arcadia-api-def/versions/v1' with the following body:

{
  "metadata": { "name": "v1", "displayName": "arcadia-api-def" },
  "desiredState": {
    "specs": {
      "REST": { "openapi": "3.0.0", "info": { "version": "v1", "title": "arcadia-api-def" }, "paths": {} }
    }
  }
}

Then import the OpenAPI definition by sending a PUT request to 'https://{{ my_controller }}/api/v1/services/api-definitions/arcadia-api-def/versions/v1/import' with the OpenAPI JSON as the request body.

Publish the API definition by sending a PUT request to 'https://{{ my_controller }}/api/v1/services/environments/env_prod/apps/app_api/published-apis/prod-api' with the following body:

{
  "metadata": { "name": "prod-api", "displayName": "prod-api", "tags": [] },
  "desiredState": {
    "apiDefinitionVersionRef": { "ref": "/services/api-definitions/arcadia-api-def/versions/v1" },
    "gatewayRefs": [ { "ref": "/services/environments/env_prod/gateways/gw_api" } ]
  }
}

Declare the necessary backend components (in this example the webapi-kic.nginx-udf.internal Kubernetes workload) by sending a PUT to 'https://{{ my_controller }}/api/v1/services/environments/env_prod/apps/app_api/components/cp_moneytransfer_api' with the following body:

{
  "metadata": { "name": "cp_moneytransfer_api", "displayName": "cp_moneytransfer_api", "tags": [] },
  "desiredState": {
    "ingress": {
      "uris": {
        "/api/rest/execute_money_transfer.php": {
          "php": {
            "get": {
              "description": "Send money to a friend",
              "parameters": [ { "in": "body", "name": "body", "required": true, "schema": { "type": "object" } } ],
              "responses": { "200": { "description": "200 response" } }
            },
            "matchMethod": "EXACT"
          }
        }
      },
      "gatewayRefs": [ { "ref": "/services/environments/env_prod/gateways/gw_api" } ]
    },
    "backend": {
      "ntlmAuthentication": "DISABLED",
      "preserveHostHeader": "DISABLED",
      "workloadGroups": {
        "wl_mainapp_api": {
          "loadBalancingMethod": { "type": "ROUND_ROBIN" },
          "uris": { "http://webapi-kic.nginx-udf.internal:30276": { "isBackup": false, "isDown": false, "isDrain": false } }
        }
      }
    },
    "programmability": {
      "requestHeaderModifications": [
        { "action": "DELETE", "applicableURIs": [], "headerName": "Host" },
        { "action": "ADD", "applicableURIs": [], "headerName": "Host", "headerValue": "k8s.arcadia-finance.io" }
      ]
    },
    "logging": { "errorLog": "DISABLED", "accessLog": { "state": "DISABLED" } },
    "security": {
      "rateLimits": {
        "policy_1": { "rate": "5000r/m", "burstBeforeReject": 0, "statusCode": 429, "key": "$binary_remote_addr" }
      },
      "conditionalAuthPolicies": {
        "policy_1": {
          "action": "ALLOW",
          "comparisonType": "CONTAINS",
          "comparisonValues": [ "Payment" ],
          "sourceType": "JWT_CLAIM",
          "sourceKey": "scope",
          "denyStatusCode": 403
        }
      },
      "identityProviderRefs": [ { "ref": "/security/identity-providers/bank_idp" } ],
      "jwtClientAuth": { "keyLocation": "BEARER" }
    },
    "publishedApiRefs": [ { "ref": "/services/environments/env_prod/apps/app_api/published-apis/prod-api" } ]
  }
}
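Each desired-state body above is delivered with an authenticated REST call. A minimal sketch, assuming the session-login endpoint of NGINX Controller 3.x (verify the endpoint and payload against your version's API reference; the credentials are placeholders):

# Log in and store the session cookie
curl -sk -c cookie.txt -X POST "https://$MY_CONTROLLER/api/v1/platform/login" \
  -H 'Content-Type: application/json' \
  -d '{"credentials": {"type": "BASIC", "username": "admin@example.com", "password": "changeme"}}'

# Push one of the bodies, e.g. the Identity Provider definition saved to a file
curl -sk -b cookie.txt -X PUT \
  "https://$MY_CONTROLLER/api/v1/security/identity-providers/bank_idp" \
  -H 'Content-Type: application/json' \
  -d @identity-provider.json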
"sourceType": "JWT_CLAIM", "sourceKey": "scope", "denyStatusCode": 403 } }, "identityProviderRefs": [ { "ref": "/security/identity-providers/bank_idp" } ], "jwtClientAuth": { "keyLocation": "BEARER" } }, "publishedApiRefs": [ { "ref": "/services/environments/env_prod/apps/app_api/published-apis/prod-api" } ] } } Note the 'security' block, specifying the JWT authentication, the Identity Provider from where to download the JWK, the authorization check applied on each request and the rate limit policy. The configuration used to deploy NGINX App Protect on the Kubernetes Ingress Controller can be consulted here. Summary In this article we showed how NGINX Controller API Management Module and NGINX App Protect can be deployed to protect API calls as part of the OAuth Authorization Code flow which is a basic flow used to control the access to many financial services APIs. Links UDF lab environment link.1.6KViews1like0CommentsAutomating NGINX Controller Installation on Azure with Pulumi
Introduction

Treating cloud infrastructure definition and application installation as code provides awesome benefits. It allows the application setup to be stored as a versionable artifact, documents how it was built, and makes the process repeatable. Moreover, when such a definition is truly code and not just a set of configuration files, it can provide flexible integrations between the application layer and the infrastructure layer, or between a particular cloud and external services. In this article, we will use Pulumi with Azure to automate the installation of NGINX Controller and illustrate these principles. Pulumi is an infrastructure automation tool like Terraform, differing in that it allows infrastructure to be defined as code rather than configuration files. Infrastructure can be defined in JavaScript, TypeScript, Python, Go, or any .NET language. Custom functions, scripting, external references, and remote invocations are all possible and integrate cleanly into the infrastructure definition. Conveniently, Pulumi offers a conversion path that allows many Terraform providers to be consumed in Pulumi, and all the major cloud providers are supported.

Controller Install Overview

A few resources must be present before installing NGINX Controller in the cloud:

TLS certificates for the domain name to be associated with Controller
Public IP address
Virtual network and subnet interfaces
Firewall rules (opening ports 22, 80, 443, and 8443 to the Controller VM)
SMTP server (installation will work with bogus settings, but email will not)
Data disk for storing OLAP data and, optionally, configuration data
Optional external PostgreSQL configuration database
VM instance setup: secrets, required libraries, data disk partition

Using Pulumi we can automate the setup of all the above resources apart from the SMTP server (which can be configured easily through the Azure portal). The resulting infrastructure, when visualized, is relatively simple. The Controller VM connects to the (optional) Azure PostgreSQL database via a private network link. Users of Controller connect directly to the Controller VM using the Azure-provided DNS, secured by Let's Encrypt TLS certificates.

Deployment Environment

DNS

Controller will be configured to automatically use the DNS name controller-<installation_id>.<azure region>.cloudapp.azure.com. This DNS entry is assigned to the Azure VM instance by default upon provisioning. The installation_id portion of the DNS entry is specified as part of your configuration; it is a unique id used throughout the installation. After the instance is created, it is automatically assigned a TLS certificate using Let's Encrypt, so your Controller instance will be ready to go with a valid certificate from the start. If you need a different DNS entry assigned to Controller, you will need to modify the installation script to work for your particular environment.

PostgreSQL Configuration Database

The configuration database for Controller can be installed on the same instance as Controller, or alternatively in an external database. When installed locally, the total resource requirements of the Controller instance increase, but this may be acceptable for trials or deployments with low utilization. Alternatively, the configuration database can be configured to use the Azure PostgreSQL Database service.
This is a managed service offering from Azure that provides automated management of your database, allowing for ease of scaling, backups, and performance tuning.

SMTP Configuration

The installation automation provided in this project does not install an email server. In order to receive emails from Controller about password resets or alerts, you will need an SMTP server that can be reached from Controller. An easy way to set this up is to use the SendGrid service on Azure. Using the Azure GUI, you can quickly have an SMTP server online that works with Controller.

Getting Started

Download NGINX Controller

Go to MyF5 and log in or sign up for a new account. Follow the prompts to start an NGINX Controller trial if you are trialing. Then download the latest Controller installer and copy it into the installer-archives directory under the azure-pulumi directory. Also, be sure to note the association token provided in the MyF5 portal, because you will need it when you use Controller for the first time.

Run and configure Pulumi

Choose the getting-started method that works best for you from the configuration and run directions. If you want to get going quickly and are running Linux or macOS, you may find the quick-start script or the Docker container approaches the easiest.
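Driving the deployment then follows the usual Pulumi workflow. A hypothetical sketch (the stack name and configuration keys here are illustrative assumptions, not the project's actual keys; use the keys defined in the project's configuration directions):

cd azure-pulumi
pulumi stack init controller-trial
pulumi config set azure:location westus2          # target Azure region
pulumi config set installation_id ctrl01          # assumed key for the unique installation id
pulumi config set --secret db_password 'S3cret!'  # assumed key for the database password
pulumi up                                         # preview and deploy the stack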
Publishing an API using NGINX Controller

Overview

API management is the complex process of governing the design and implementation of APIs. This article introduces a solution for implementing an API management system based on the market-leading NGINX Plus platform. The NGINX solution contains two main components: NGINX Plus and NGINX Controller. NGINX Plus is the data processing unit that handles the API traffic. NGINX Controller manages NGINX Plus instances and provides a human-consumable interface for handling the API lifecycle. Managing a single NGINX Plus instance and its configuration is relatively straightforward. However, managing multiple NGINX Plus instances requires a management system. NGINX Controller allows administrators to centrally configure, monitor, and analyze telemetry from multiple NGINX Plus instances regardless of their location. Instances can be deployed on premises or in any public cloud infrastructure.

Architecture and Network Topology

NGINX Controller manages multiple NGINX Plus instances, which act as API gateways. In the diagram below, data plane communication flows are shown in blue and control plane communications in green.

Picture 1. "Controller to NGINX Plus interactions"

The NGINX Plus instances run the Controller agent, which registers the instances with the NGINX Controller. The agent, running on NGINX Plus, uses an API key issued by the Controller to register itself with the Controller. The key is used to authenticate control-plane data in transit between the NGINX Plus instance and the Controller. Once NGINX Plus is registered with the Controller, the latter fully controls the instance. Subsequently, the Controller pushes configuration to the NGINX Plus instance and monitors telemetry. IP connectivity is provided by the networking stack of the underlying operating systems where the NGINX Plus and NGINX Controller instances run. Those systems need to be able to reach each other over a network.

Note: the following ports need to be open to allow communication between NGINX Plus, the Controller, and the database:

DB: port 5432 TCP (incoming to DB from the NGINX Controller host)
NGINX Controller: 80 TCP (incoming from NGINX Plus instances)
NGINX Controller: 443 TCP (incoming from where you are accessing from a browser, for example, an internal network)
NGINX Controller: 8443 TCP (incoming from NGINX Plus instances)
NGINX Controller: 6443 TCP (incoming requests to the Kubernetes master node; used for the Kubernetes API server)
NGINX Controller: 10250 TCP (incoming requests to the Kubernetes worker node; used for the Kubelet API)

NGINX Plus uses the underlying operating system's networking stack to accept and forward data plane traffic. As a daemon on a Linux system, it listens on all available IP interfaces and ports (sockets) specified in its configuration. NGINX Plus can reuse a socket to deliver data to many different applications that sit behind it. As an example, assume NGINX Plus listens on network socket 192.168.1.1:80 and receives requests for multiple applications such as api.xyz.com, www.xyz.com, api.pqr.com, www.pqr.com, and so on. NGINX Plus is configured with a virtual server for each of the applications it serves. When a request arrives, NGINX Plus examines the hostname header in the request and matches it to the appropriate virtual server. This feature makes it possible to host multiple applications behind the single socket 192.168.1.1:80 instead of running each on some random port that is not native to most web apps. Thus multiple applications can be served on the same machine using a single socket, rather than having to allocate a different port for each application. The sketch below illustrates this host-header matching from the client side.
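A minimal illustration, reusing the example socket above (curl's Host override stands in for DNS resolution; the addresses are the illustrative values from the text, not a live system):

# Both applications answer on the same socket 192.168.1.1:80;
# NGINX Plus selects the virtual server from the Host header
curl -H "Host: api.xyz.com" http://192.168.1.1/endpoint
curl -H "Host: www.pqr.com" http://192.168.1.1/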
Publishing an API

Once the NGINX Plus and NGINX Controller instances are deployed and installed on the target systems, they can be configured to handle API traffic. This article doesn't contain step-by-step instructions for registering NGINX Plus instances on the Controller; administrators are welcome to use the official documentation, which is available online: link. Once the registration process is complete, an administrator can access a list of all registered instances and review graphs created from the telemetry data sent back to the Controller.

Picture 2. Controller dashboard lists managed NGINX Plus instances

The system is now ready to define APIs and publish them through selected NGINX Plus instances regardless of their location. The diagram below illustrates the scenario where a company publishes and maintains both a 'test' and a 'production' API deployment.

Picture 3. Deployment layout

As an example API I use the Httpbin app. It provides a number of API endpoints that generate all kinds of responses depending on the request. The following steps describe how to publish the 'test' version of the API using NGINX Controller:

1) Create an environment. An environment is a logical container that aggregates all kinds of configuration entities (certificates, gateways, apps, and so on) for a particular deployment. For example, all resources that belong to the testing deployment go to the 'test' environment, and resources for production use go to the 'prod' environment. Such segregation makes the configuration less error-prone.
2) Add a certificate to publish the API via a secure channel.
3) Create a gateway. It is similar to the virtual server concept and defines HTTP listener properties.
4) Create an application. It provides a logical abstraction for a real application. An application may include multiple components, including APIs.
5) Create an API definition. A logical container for an API.
6) Create an API version. An API version enumerates all endpoints of an API.
7) Create a published API. A published API represents an API version deployed to a particular gateway and forwarding API calls to a backend.

Once the API is published, NGINX Controller transforms these directives into NGINX Plus configuration and pushes it down to the corresponding instances:

user@nginx-plus-2$ cat /etc/nginx/nginx.conf | grep -ie "server {" -A 7
server {
    listen 80;
    listen 443 ssl;
    server_name test.httpbin.internet.lab;
    status_zone test.httpbin.internet.lab;
    set $apimgmt_entry_point 3;
    ssl_certificate /etc/controller-agent/configurator/auxfiles/cert.crt;
    ssl_certificate_key /etc/controller-agent/configurator/auxfiles/cert.key;

Now NGINX Plus is ready to process API calls and forward them to the backend:

user@client-vm$ http -v https://test.httpbin.internet.lab/uuid
GET /uuid HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: test.httpbin.internet.lab
User-Agent: HTTPie/0.9.2

HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Length: 53
Content-Type: application/json
Date: Thu, 12 Dec 2019 22:27:59 GMT
Server: nginx/1.17.6

{
    "uuid": "08232fcb-1e41-4433-adc3-2818a971647f"
}

As you may have noticed, the API definition and API version abstractions don't belong to an environment. This means that exactly the same definition and version of an API may be published to any environment.
For example, once all tests are complete in the 'test' environment, it is easy to republish to production by creating another published API in the 'prod' environment. Therefore NGINX Controller significantly simplifies API lifecycle management.

Setting up NGINX Controller
Recently NGINX Controller moved completely to the Kubernetes platform. That is great, since it makes operations, maintenance, and upgrades of the Controller software much easier and more reliable. However, it also requires some Kubernetes knowledge from the team. I personally hit a couple of bumps when I installed the Controller for the first time, and I ended up writing this article to help the community have a smooth experience with Controller installation. This article contains the exact steps to install NGINX Controller software on a fresh CentOS 7.

1) Before you start, make sure your system fits the official technical specs.

2) (Optional) Update CentOS packages:

[centos@ip-10-1-1-11 ~]$ sudo yum update

3) (Optional) Follow the official Docker documentation to install and run Docker on the system, if not yet installed:

[centos@ip-10-1-1-11 ~]$ docker -v
Docker version 19.03.5, build 633a0ea

4) Install the jq tool:

[centos@ip-10-1-1-11 ~]$ curl -Lo jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64 && chmod +x ./jq && sudo cp jq /usr/bin

5) Store the Controller installer archive locally:

[centos@ip-10-1-1-10 ~]$ ls ~/
controller-installer-2.9.0.tar.gz

6) Extract the archive:

[centos@ip-10-1-1-10 ~]$ tar xzf controller-installer-2.9.0.tar.gz && ls
controller-installer  controller-installer-2.9.0.tar.gz

7) (Optional) The installation may fail on the installer's step 16:

[centos@ip-10-1-1-10 controller-installer]$ ./install.sh
...
16. Running database initialization task... failed
...

This usually means that, although the OS can resolve domain names, a pod running in Kubernetes can't. The coredns Kubernetes service needs a small config change so that pods use the proper name server as well. To achieve that, modify the coredns config map:

[centos@ip-10-1-1-10 ~]$ kubectl edit cm coredns -n kube-system
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
...omitted

Change the line "forward . resolv.conf" to "forward . YOUR_DNS_NAME_SERVER":

[centos@ip-10-1-1-10 ~]$ kubectl edit cm coredns -n kube-system
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . 10.1.1.5
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
...omitted

8) Start the Controller installation and follow the guidance:

[centos@ip-10-1-1-10 controller-installer]$ ./install.sh
--- This script will install the NGINX Controller system ---
1. Checking required ports... OK
2. Checking for existing installation...
3. Attempting to detect your Operating System... Found core
4. Checking for required tools: grep sed less tee sort head ps cat awk id mkdir dirname basename getent rev tar gunzip envsubst jq base64 openssl numfmt. All found.
5. Checking Docker version...
Docker version 19.03.5, build 633a0ea
We recommend setting native.cgroupdriver to systemd for Docker.
WARNING! Docker configuration does not seem to have log rotation enabled. We recommend enabling log rotation for docker containers. For steps to enable log rotation follow this link: https://success.docker.com/article/how-to-setup-log-rotation-post-installation
6. Checking Kubernetes...
7. Checking resource requirements...
Warning: Available CPU cores: 2. The Controller needs at least 8 CPU cores to work effectively.
numfmt: invalid format ‘%.2f’, directive must be %['][-][N]f
Warning: Available memory: @{available_memory_gb}B.
The Controller needs at least 8GB of RAM to work effectively.
numfmt: invalid format ‘%.2f’, directive must be %['][-][N]f
Warning: Available disk space: B. The Controller needs at least 80GB of disk space to work effectively.
In order to avoid performance issues, consider installing the Controller with the recommended specifications.
8. End User License Agreement
Do you accept this End User License Agreement [y/n]?
Do you accept this End User License Agreement [y/n]? y
9. Loading docker images...
Loaded image: python:3.6-alpine
Loaded image: controller/controller-init:2.9.5-1035548
Loaded image: controller/controller-prod:2.9.5-1035548
Loaded image: postgres:9.5
Loaded image: controller-frontend/frontend:2.9.5-1035543
Loaded image: controller-installer/apigw:2.9.0-1035620
Loaded image: controller/controller-audit-log:2.9.5-1035548
Loaded image: controller/controller-cron:2.9.5-1035548
Loaded image: rabbitmq:3.7
10. Database configuration
Provide the database hostname: postgres.mgmt.lab
Provide the database port (for example, 5432): 5432
Provide the database username: naas
Provide the database password:
Repeat password:
11. SMTP settings
Provide the SMTP host: smtp.mgmt.lab
Provide the SMTP port: 25
Use SMTP authentication? [y/n]: n
Use TLS for SMTP communication? [y/n]: n
Provide a do-not-reply email address: dnr@mgmt.lab
12. Admin user configuration
The FQDN, for example, controller.mycompany.com, will be used to access NGINX Controller in the browser as https://{FQDN}.
Provide the FQDN for your Controller: nginx-controller@mgmt.lab
Domain must be resolvable on this system. Check your entry and try again.
Provide the FQDN for your Controller: nginx-controller.mgmt.lab
Provide the organization name: Lab
Provide the admin's first name: admin
Provide the admin's last name: admin
Provide the admin's email address: admin@mgmt.lab
Provide the admin's password. Passwords must be 6 to 64 characters, and must include letters and digits:
Repeat password:
13. Checking HTTPS certificates...
A certificate for HTTPS connection was not found in the /opt/nginx-controller/certs/controller/ directory. This certificate is required to establish a TLS connection between the Controller and your web browser. If you choose not to generate a self-signed certificate, you will be prompted to provide the path to your certificate and key files.
Would you like to generate a self-signed certificate now? [y/n]? y
Generating a 4096 bit RSA private key
......................................................++
.........................................++
writing new private key to './certs/controller/server.key'
-----
14. Generating password and session salts... OK.
15. Generating Kubernetes resource files...
Restored the original Kubernetes config files.
Cleaned up certs.
16. Running database initialization task...
NGINX Controller database has been initialized.
17. Starting up Controller stack...
configmap/controller-config-6gbfmmd72g created
configmap/frontend-kbg4g4fd8h created
configmap/nginx-config-d9f9f7bk4m created
configmap/rabbitmq-5fcdtt75g4 created
secret/controller-reverseproxy-tls-bmk8dtt9t9 created
secret/controller-secrets-488cf7285g created
secret/rabbitmq-7k4d2bg7g9 created
service/apigw created
service/apimgmt created
service/appregistry created
service/coreapi created
service/frontend created
service/rabbitmq created
service/receiver created
deployment.apps/apigw created
deployment.apps/apimgmt created
deployment.apps/appregistry created
deployment.apps/celery created
deployment.apps/coreapi created
deployment.apps/cron created
deployment.apps/frontend created
deployment.apps/rabbitmq created
deployment.apps/receiver created
NGINX Controller services are ready.
OK, everything went just fine!
Thank you for installing NGINX Controller.
You can find your installation in /opt/nginx-controller.
You can find the install log file in /var/log/nginx-controller/nginx-controller-install.log.
Access the system using your web browser at https://nginx-controller.mgmt.lab.
Documentation is available at https://nginx-controller.mgmt.lab/docs/.

As you can see, the Controller installation process is still pretty straightforward; a quick post-install sanity check is sketched below. I hope this article saves someone time on a Controller installation. Good luck!
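After a successful install, a quick sanity check might look like the following (the FQDN comes from the transcript above; adjust to your environment):

# Controller runs its services as Kubernetes workloads
[centos@ip-10-1-1-10 ~]$ kubectl get pods --all-namespaces

# The UI should answer on the FQDN supplied during installation
[centos@ip-10-1-1-10 ~]$ curl -k -I https://nginx-controller.mgmt.lab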
F5 powered API security and management

Editor's note: the F5 Beacon capabilities referenced in this article, hosted on F5 Cloud Services, are planning a migration to a new SaaS platform. Check out the latest here.

Introduction

Application Programming Interfaces (APIs) enable application delivery systems to communicate with each other. According to a survey conducted by IDC, security is the main impediment to the delivery of API-based services. Research conducted by F5 Labs shows that APIs are highly susceptible to cyber-attacks. Access or injection attacks against the authentication surface of the API are launched first, followed by exploitation of excessive permissions to steal or alter data that is reachable via the API. Agile development practices, highly modular application architectures, and business pressure for rapid development contribute to security holes in both APIs exposed to the public and those used internally.

API delivery programs must include the following elements: (1) automated publishing of APIs using Swagger or OpenAPI files, (2) authentication and authorization of API calls, (3) routing and rate limiting of API calls, (4) security of API calls, and finally (5) metric collection and visualization of API calls. The reference architecture shown below offers a streamlined way of achieving each element of an API delivery program. The F5 solution works with modern automation and orchestration tools, equipping developers with the ability to implement and verify security at strategic points within the API development pipeline. Security gets inserted into the CI/CD pipeline, where it can be tested and attached to the runtime build, helping to reduce the attack surface of vulnerable APIs.

Common Patterns

Enterprises need to maintain and evolve their traditional APIs while simultaneously developing new ones using modern architectures. These can be delivered with on-premises servers, from the cloud, or in hybrid environments. APIs are difficult to categorize, as they are used to deliver a variety of user experiences, each one potentially requiring a different set of security and compliance controls. In all of the patterns outlined below, NGINX Controller is used for API management functions, such as publishing the APIs and setting up authentication and authorization, and NGINX API Gateway forms the data path. Security controls are addressed based on the security requirements of the data and the API delivery platform.

1. APIs for highly regulated business

Business APIs that involve the exchange of sensitive or regulated information may require additional security controls to comply with local regulations or industry mandates. Some examples are apps that deliver protected health information or sensitive financial information. Deep payload inspection at scale and custom WAF rules become important mechanisms for protecting this type of API. F5 Advanced WAF is recommended for providing security in this scenario.

2. Multi-cloud distributed API

Mobile app users who are dispersed around the world need responses from the API backend with low latency. This requires that the API endpoints be delivered from multiple geographies to optimize response time. F5 DNS Load Balancer Cloud Service (global server load balancing) is used to connect API clients to the endpoints closest to them. In this case, F5 Cloud Services Essential App Protect is recommended for baseline security, and NGINX App Protect, deployed closer to the API workload, should be used for granular security controls. Best practices for this pattern are described here.
3. API workload in Kubernetes

F5 service mesh technology helps API delivery teams deal with the challenges of visibility and security when API endpoints are deployed in a Kubernetes environment. NGINX Ingress Controller, running NGINX App Protect, offers seamless north-south connectivity for API calls. F5 Aspen Mesh is used to provide east-west visibility and mTLS-based security for workloads. The Kubernetes cluster can be on premises or deployed in any of the major cloud provider infrastructures, including Google's GKE, Amazon's EKS/Fargate, and Microsoft's AKS. An example implementing this pattern with an NGINX per-pod proxy is described here, and more examples are forthcoming in the API Security series.

4. API as Serverless Functions

F5 Cloud Services Essential App Protect, offering SaaS-based security, or NGINX App Protect deployed in AWS Fargate can be used to inject protection in front of serverless API endpoints.

Summary

F5 solutions can be leveraged regardless of the architecture used to deliver APIs or the infrastructure used to host them. In all patterns described above, metrics and logs are sent to one or many of the following: (1) F5 Beacon, (2) the SIEM of your choice, (3) an ELK stack. Best practices for customizing API-related views via any of these visibility solutions will be published in the following DevCentral series. DevOps can automate F5 products for integration into the API CI/CD pipeline. As a result, security is no longer a roadblock to delivering APIs at the speed of business. F5 solutions are future-proof, enabling development teams to confidently pivot from one architecture to another. To complement and extend the security of the above solutions, organizations can leverage the power of F5 Silverline Managed Services to protect their infrastructure against volumetric, DNS, and higher-level denial-of-service attacks. The Shape bot protection solutions can also be coupled in to detect and thwart bots, including securing mobile access with its mobile SDK.

Integrating NGINX Controller API Management with PingFederate to secure financial services API transactions
Introduction

The previous article in the "Securing financial services APIs" series, "Using NGINX Controller API Management Module and NGINX App Protect to secure financial services API transactions", described a setup where NGINX Controller APIm, acting as an OAuth Resource Server, used F5's APM as an OIDC IdP / OAuth Authorization Server in an OAuth/OIDC authentication flow. The current article explores the integration of NGINX Controller APIm with PingFederate, one of the market-leading identity management solutions, in a similar setup. Ping Identity has partnered with OBIE (Open Banking Implementation Entity), the body responsible for the UK Open Banking implementation as a response to the EU's PSD2 directive, and as such it acquired a front seat in the development of the Open Banking initiative, one of the most mature examples of a financial services API. Ping Identity's technology is also Financial-grade API (FAPI) compliant, supporting the features critical to ensuring higher security for financial API transactions while maintaining a seamless user experience and ease of configuration. Ping Identity's PSD2 & Open Banking technical solution guide can be found here, while this article focuses primarily on the ease of configuring NGINX Controller APIm to interact with the PingFederate solution in a basic financial services API scenario.

Setup

For demo purposes, as a backend banking application we used a server stub generated from UK Open Banking's OpenAPI spec, deployed in a Kubernetes environment with NGINX App Protect on the Kubernetes Ingress Controller as an API WAF. The API gateway and API management functions are implemented by NGINX API Gateway and NGINX Controller APIm, placed in front of the Kubernetes environment. The configuration of all the above (backend server, NAP/KIC, and NGINX APIm) is managed through a CI/CD pipeline configured in GitLab, simulating a modern application development environment.

Authentication and API flow

This demo implements the Authorization Code flow to enable a "domestic payment" transaction. Summarizing the steps of the authentication and API flow (refer to the setup diagram above):

1. The user logs into the Third Party Provider application ("client") and creates a new funds transfer
2. The TPP application redirects the user to the OAuth Authorization Server / OIDC IdP - PingFederate
3. The user provides their credentials to PingFederate and gets access to the consent management screen, where the required "payments" scope is listed
4. If the user agrees to give the TPP client consent to make payments out of his/her account, PingFederate generates an authorization code (and an ID Token) and redirects the user back to the TPP client
5. The TPP client exchanges the authorization code for an access token and attaches it as a bearer token to the /domestic-payments call sent to the API gateway
6. The API gateway authenticates the access token by downloading the JSON Web Keys from PingFederate and grants conditional access to the backend application
7. The Kubernetes Ingress receives the API call and performs WAF security checks via NGINX App Protect
8. The API call is forwarded to the backend server pod

NGINX APIm configuration

In this scenario, NGINX APIm performs the OAuth Resource Server role: it downloads the JWKs from the OAuth Authorization Server / OIDC IdP (PingFederate) and checks the authenticity of the access token presented in the API call. A sketch of such a call follows below.
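For illustration, the bearer-protected payment call of step 5 looks roughly like the sketch below; the host, Open Banking path, and payload file are placeholders defined by the OBIE specification and your deployment, not values from this lab:

curl -X POST "https://api.bank.example/open-banking/v3.1/pisp/domestic-payments" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d @domestic-payment.json
# The gateway validates the token signature against PingFederate's JWKs and
# rejects the call unless the "payments" scope is present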
Additionally, it may apply further checks to conditionally grant access to the application; in this demo it checks for the presence of the "payments" scope. The NGINX APIm configuration is straightforward and consists of two steps:

1. Configuring the IdP

Go to Services => Identity Providers and click Create Identity Provider. Fill in the mandatory parameters Name, Environment, and Type (JWT). Enter the JWKs URL location and the caching duration.

2. Configuring the OAuth authorization and conditional access criteria

Go to Services => APIs, select an API definition, and edit the associated published API. Navigate to Routing and edit the component to be protected, navigating to Security/Authentication. Select the previously created Identity Provider and optionally enable conditional access. As an example, access is granted if "payments" is one of the scopes found in the access token.

Conclusion

In this article we showed how NGINX APIm offers a very simple yet granular way of configuring NGINX API Gateway as an OAuth Resource Server, and how it integrates with an industry-leading IAM solution, PingFederate, to protect financial services API transactions.

Links

UDF lab environment link.

Integrating NGINX Controller to CICD Pipeline
Introduction

Assume an application development team delivers an application. The application consists of two components, a frontend and an API, and both have to be exposed to the world. While the frontend is fairly static and has just a few URIs, the API URI layout changes often. NGINX Controller provides a rich tool set to simplify application delivery, whether it is an API or a generic app. This article covers a way to integrate NGINX Controller into a CI/CD pipeline to speed up application publishing even further.

Architecture and Prerequisites

The architecture is pretty typical for a Controller deployment. NGINX Controller manages two NGINX Plus gateways, which in turn publish an application to the world. As a demo application I use Httpbin. This app consists of two components: frontend and API. The URI structure is flat. A one-page frontend is available right behind "/". The API endpoints' base path is "/" as well, so the endpoints are "/ip", "/uuid", "/get", and so on. The way this app is deployed is not important for this article; it is deployed and only reachable from the gateways. Everything you see in the picture above is a prerequisite: NGINX Controller, NGINX Plus, and the application are deployed; the gateways are registered on the Controller; and the Controller user for GitLab has admin permissions.

NGINX Controller Configuration Abstractions

Controller represents the gateway configuration as a set of abstractions. The picture below displays the abstraction dependencies.

An environment is a logical group of configuration entities that publish an app together. Entities from different environments cannot be mixed. An application is a logical representation of a real application. Each application can include multiple components; e.g. the Httpbin application consists of frontend and API components. A component contains lists of URIs, a list of backend servers (workload groups), and routes defining where to forward a request for a particular URI. A gateway represents a virtual server that matches requests with certain hostnames. A published application ties together a gateway to listen for requests and one or more components to define routes to backend servers.

Controller Integration to CICD Pipeline

NGINX Controller provides a REST API that allows it to be integrated with any kind of CI/CD platform. The integration aims to automatically publish both the frontend and API application components. The following pipeline implements application publishing in three stages. The first stage, "create-env", creates common configuration abstractions for both components. The other two, "publish-frontend" and "publish-api", publish the frontend and API components respectively.

Note: each stage has an "only" directive. It reduces pipeline execution time by executing only the stages whose configuration has changed.

A script for each stage has only one command: it executes an Ansible playbook that contains all the steps to reach the desired state.

Note: the playbooks use the official Ansible collection to communicate with the Controller. This approach produces much clearer code than raw curl use.
image: alpine:3.12.1

variables:
  CONTROLLER_FQDN: ctr.f5-demo.com

stages:
  - create-env
  - deploy-frontend
  - deploy-api

default:
  before_script:
    - apk add ansible~=2.9.14
    - ansible-galaxy collection install nginxinc.nginx_controller

create-env:
  stage: create-env
  script:
    - ansible-playbook playbooks/common.yml
  only:
    changes:
      - playbooks/common.yml
      - .gitlab-ci.yml

deploy-frontend:
  stage: deploy-frontend
  script:
    - ansible-playbook playbooks/frontend.yml
  only:
    changes:
      - playbooks/frontend.yml
      - .gitlab-ci.yml

deploy-api:
  stage: deploy-api
  script:
    - ansible-playbook playbooks/api.yml
  only:
    changes:
      - playbooks/api.yml
      - .gitlab-ci.yml

The playbook from the "create-env" stage creates the common configuration entities (see the "NGINX Controller Configuration Abstractions" section): an environment, an application, and a gateway. The playbook code is self-explanatory: the "nginx_controller_generate_token" role obtains login credentials from the Controller, and the other three roles create the actual configuration entities.

- hosts: localhost
  gather_facts: no
  collections:
    - nginxinc.nginx_controller
  vars:
    env_name: prod
    app_name: httpbin
  roles:
    - role: nginx_controller_generate_token
      vars:
        nginx_controller_user_email: "{{ lookup('env', 'CONTROLLER_USER') }}"
        nginx_controller_user_password: "{{ lookup('env', 'CONTROLLER_PASSWORD') }}"
        nginx_controller_fqdn: "{{ lookup('env', 'CONTROLLER_FQDN') }}"
        nginx_controller_validate_certs: false
    - role: nginx_controller_environment
      vars:
        nginx_controller_environment:
          metadata:
            name: "{{ env_name }}"
    - role: nginx_controller_gateway
      vars:
        nginx_controller_environmentName: "{{ env_name }}"
        nginx_controller_gateway:
          metadata:
            name: "{{ app_name }}"
          desiredState:
            ingress:
              uris:
                "http://nplus.httpbin.f5-demo.com": {}
              placement:
                instanceRefs:
                  - ref: "/infrastructure/locations/unspecified/instances/ip-10-4-96-225.us-west-2.compute.internal"
                  - ref: "/infrastructure/locations/unspecified/instances/ip-10-4-96-90.us-west-2.compute.internal"
    - role: nginx_controller_application
      vars:
        nginx_controller_environmentName: "{{ env_name }}"
        nginx_controller_app:
          metadata:
            name: "{{ app_name }}"

The playbook from the "publish-frontend" stage publishes the frontend component. Its structure is similar to the previous one: the first role logs in to the Controller, and the "nginx_controller_component" role creates an application component that represents the gateway configuration for publishing the application frontend.

- hosts: localhost
  gather_facts: no
  collections:
    - nginxinc.nginx_controller
  vars:
    env_name: prod
    app_name: httpbin
  roles:
    - role: nginx_controller_generate_token
      vars:
        nginx_controller_user_email: "{{ lookup('env', 'CONTROLLER_USER') }}"
        nginx_controller_user_password: "{{ lookup('env', 'CONTROLLER_PASSWORD') }}"
        nginx_controller_fqdn: "{{ lookup('env', 'CONTROLLER_FQDN') }}"
        nginx_controller_validate_certs: false
    - role: nginx_controller_component
      vars:
        nginx_controller_environmentName: "{{ env_name }}"
        nginx_controller_appName: "{{ app_name }}"
        nginx_controller_component:
          metadata:
            name: frontend
            displayName: "Frontend"
            description: "Frontend for {{ app_name }} API"
          desiredState:
            ingress:
              uris:
                "/": {}
              gatewayRefs:
                - ref: "/services/environments/{{ env_name }}/gateways/{{ app_name }}"
            backend:
              workloadGroups:
                group1:
                  uris:
                    "http://10.4.113.213:30445": {}
            monitoring:
              response:
                status:
                  range:
                    startCode: 200
                    endCode: 201
                  match: true

The playbook from the "publish-api" stage publishes the API application component. Unlike the frontend, which has only one static URI to publish, the API component has to handle a number of API URIs ("endpoints"). To avoid manual input, and for the sake of a single source of truth, the Controller can import all URIs from an OpenAPI file. The "nginx_controller_api_definition_import" role reads the OpenAPI file and imports all endpoints as a new API version (v1). The following roles create a published API out of that version and reference it from a component.

- hosts: localhost
  gather_facts: no
  collections:
    - nginxinc.nginx_controller
  vars:
    env_name: prod
    app_name: httpbin
  roles:
    - role: nginx_controller_generate_token
      vars:
        nginx_controller_user_email: "{{ lookup('env', 'CONTROLLER_USER') }}"
        nginx_controller_user_password: "{{ lookup('env', 'CONTROLLER_PASSWORD') }}"
        nginx_controller_fqdn: "{{ lookup('env', 'CONTROLLER_FQDN') }}"
        nginx_controller_validate_certs: false
    - role: nginx_controller_api_definition_import
      vars:
        nginx_controller_api_definition_version: v1
        nginx_controller_api_definition_name: "{{ app_name }}"
        nginx_controller_api_definition: "{{ lookup('file', '../httpbin.openapi.json') }}"
    - role: nginx_controller_publish_api
      vars:
        nginx_controller_environment: "{{ env_name }}"
        nginx_controller_application: "{{ app_name }}"
        nginx_controller_publish_api:
          metadata:
            name: "v1"
            displayName: "v1"
          desiredState:
            basePath: "/api"
            stripWorkloadBasePath: true
            apiDefinitionVersionRef:
              ref: "/services/api-definitions/httpbin/versions/v1"
            gatewayRefs:
              - ref: "/services/environments/{{ env_name }}/gateways/{{ app_name }}"
    - role: nginx_controller_component
      vars:
        nginx_controller_environmentName: "{{ env_name }}"
        nginx_controller_appName: "{{ app_name }}"
        nginx_controller_component:
          metadata:
            name: api
            displayName: "API"
            description: "{{ app_name }} API"
          desiredState:
            ingress:
              uris:
                "/ip":
                  matchMethod: EXACT
              gatewayRefs:
                - ref: "/services/environments/{{ env_name }}/gateways/{{ app_name }}"
            backend:
              workloadGroups:
                group1:
                  uris:
                    "http://10.4.113.213:30445": {}
            monitoring:
              response:
                status:
                  range:
                    startCode: 200
                    endCode: 201
                  match: true
            publishedApiRefs:
              - ref: "/services/environments/{{ env_name }}/apps/{{ app_name }}/published-apis/v1"

Once all stages finish, the Controller transforms all directives into NGINX Plus configuration and pushes it down to the gateways, publishing both the frontend and API application components to the world.
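At that point the published data path can be smoke-tested from any client that resolves the gateway hostname; the paths follow the playbooks above (frontend at "/", API under the "/api" base path):

# Frontend component, published at "/"
curl -s http://nplus.httpbin.f5-demo.com/ | head

# API component, published under the /api base path with a single /ip endpoint
curl -s http://nplus.httpbin.f5-demo.com/api/ip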
To avoid manual input and in sake of single source of truth controller can import all URIs from an OpenAPI file. Role "nginx_controller_api_definition_import" reads openAPI file and imports all endpoints as a new API version (v1). Following roles create a published API out of a version and reference it form a component. - hosts: localhost gather_facts: no collections: - nginxinc.nginx_controller env_name: prod app_name: httpbin roles: - role: nginx_controller_generate_token vars: nginx_controller_user_email: "{{ lookup('env', 'CONTROLLER_USER') }}" nginx_controller_user_password: "{{ lookup('env', 'CONTROLLER_PASSWORD') }}" nginx_controller_fqdn: "{{ lookup('env', 'CONTROLLER_FQDN') }}" nginx_controller_validate_certs: false - role: nginx_controller_api_definition_import vars: nginx_controller_api_definition_version: v1 nginx_controller_api_definition_name: "{{ app_name }}" nginx_controller_api_definition: "{{ lookup('file', '../httpbin.openapi.json') }}" - role: nginx_controller_publish_api vars: nginx_controller_environment: "{{ env_name }}" nginx_controller_application: "{{ app_name }}" nginx_controller_publish_api: metadata: name: "v1" displayName: "v1" desiredState: basePath: "/api" stripWorkloadBasePath: true apiDefinitionVersionRef: ref: "/services/api-definitions/httpbin/versions/v1" gatewayRefs: - ref: "/services/environments/{{ env_name }}/gateways/{{ app_name }}" - role: nginx_controller_component vars: nginx_controller_environmentName: "{{ env_name }}" nginx_controller_appName: "{{ app_name }}" nginx_controller_component: metadata: name: api displayName: "API" description: "{{ app_name }} API" desiredState: ingress: uris: "/ip": matchMethod: EXACT gatewayRefs: - ref: "/services/environments/{{ env_name}}/gateways/{{ app_name}}" backend: workloadGroups: group1: uris: "http://10.4.113.213:30445": {} monitoring: response: status: range: startCode: 200 endCode: 201 match: true publishedApiRefs: - ref: "/services/environments/{{ env_name }}/apps/{{ app_name}}/published-apis/v1" Once all stages end controller transforms all directives to NGINX Plus configuration and pushes it down to gateways. Therefore publishing both frontend and API application components to the world. Hope it is helpful. Feel free to reach me with questions and concerns. Repository: https://gitlab.com/464d41/deploy-httpbin-via-nginx-controller754Views1like0CommentsNGINX(ingress controller)-F5 integration
Hello Team,

I am working on integrating F5 and NGINX (ingress controller) as per the article below:
https://devcentral.f5.com/s/articles/Better-together-F5-Container-Ingress-Services-and-Nginx-Ingress-Controller-Integration

I have created F5 Container Ingress Services as per the link and have a couple of questions:

- --bigip-username=$(root) --> is this the GUI or CLI username, and does it have to be in brackets, as shown?
- --bigip-password=$(shivshiv) --> is this the GUI or CLI password, and does it have to be in brackets, as shown?
- --bigip-url=https://192.168.178.44:8443 --> does port 8443 need to be mentioned, or can I just put https://192.168.178.44?

Also, I added:
envFrom:
- configMapRef:
    name: as3-template
--> do we need to call the config map here? It is not part of the YAML file in the above link.
- secretRef:
    name: bigip-login
--> I have created this using the imperative model; is it OK as Opaque, or do we need to refer to the kubernetes.io/service-account.name: bigip-ctlr created for the CIS controller?

--insecure=true (I understand this will allow the session to be established without exchanging certificates - or is this a requirement?)

Once the CIS controller has been created and AS3 has been defined, I understand I will be able to connect with F5 and the initial config can be done as specified in AS3. Is my understanding correct?

Also, I have installed the following package; is this a requirement?
f5-appsvcs-3.17.1-1.noarch

Most importantly: is this integration supported with open-source NGINX, or do we need NGINX+ as the ingress controller?

Looking forward to the response. Thanks a lot in advance.

Kind Regards,
Kunal