NGINX Controller
F5 powered API security and management
Editor's Note: The F5 Beacon capabilities referenced in this article, hosted on F5 Cloud Services, are planning a migration to a new SaaS platform - check out the latest here.

Introduction

Application Programming Interfaces (APIs) enable application delivery systems to communicate with each other. According to a survey conducted by IDC, security is the main impediment to delivery of API-based services. Research conducted by F5 Labs shows that APIs are highly susceptible to cyber-attacks. Access or injection attacks against the authentication surface of the API are launched first, followed by exploitation of excessive permissions to steal or alter data that is reachable via the API. Agile development practices, highly modular application architectures, and business pressure for rapid development all contribute to security holes, both in APIs exposed to the public and in those used internally.

API delivery programs must include the following elements: (1) automated publishing of APIs using Swagger or OpenAPI files, (2) authentication and authorization of API calls, (3) routing and rate limiting of API calls, (4) security of API calls and, finally, (5) metric collection and visualization of API calls. The reference architecture shown below offers a streamlined way of achieving each element of an API delivery program. The F5 solution works with modern automation and orchestration tools, equipping developers with the ability to implement and verify security at strategic points within the API development pipeline. Security gets inserted into the CI/CD pipeline, where it can be tested and attached to the runtime build, helping to reduce the attack surface of vulnerable APIs.

Common Patterns

Enterprises need to maintain and evolve their traditional APIs while simultaneously developing new ones using modern architectures. These can be delivered with on-premises servers, from the cloud, or in hybrid environments. APIs are difficult to categorize, as they are used to deliver a variety of user experiences, each one potentially requiring a different set of security and compliance controls. In all of the patterns outlined below, NGINX Controller is used for API Management functions such as publishing the APIs and setting up authentication and authorization, while NGINX API Gateway forms the data path. Security controls are applied based on the security requirements of the data and the API delivery platform.

1. APIs for highly regulated business

Business APIs that involve the exchange of sensitive or regulated information may require additional security controls to comply with local regulations or industry mandates. Some examples are apps that deliver protected health information or sensitive financial information. Deep payload inspection at scale and custom WAF rules become important mechanisms for protecting this type of API. F5 Advanced WAF is recommended for providing security in this scenario.

2. Multi-cloud distributed API

Mobile app users who are dispersed around the world need to get a response from the API backend with low latency. This requires that the API endpoints be delivered from multiple geographies to optimize response time. F5 DNS Load Balancer Cloud Service (global server load balancing) is used to connect API clients to the endpoints closest to them. In this case, F5 Cloud Services Essential App Protect is recommended to offer baseline security, and NGINX App Protect, deployed closer to the API workload, should be used for granular security controls. Best practices for this pattern are described here.
3. API workload in Kubernetes

F5 service mesh technology helps API delivery teams deal with the challenges of visibility and security when API endpoints are deployed in a Kubernetes environment. NGINX Ingress Controller, running NGINX App Protect, offers seamless north-south connectivity for API calls. F5 Aspen Mesh is used to provide east-west visibility and mTLS-based security for workloads. The Kubernetes cluster can be on-premises or deployed in any of the major cloud provider infrastructures, including Google's GKE, Amazon's EKS/Fargate, and Microsoft's AKS. An example of implementing this pattern with an NGINX per-pod proxy is described here, and more examples are forthcoming in the API Security series.

4. API as serverless functions

F5 Cloud Services Essential App Protect, offering SaaS-based security, or NGINX App Protect deployed in AWS Fargate can be used to inject protection in front of serverless API endpoints.

Summary

F5 solutions can be leveraged regardless of the architecture used to deliver APIs or the infrastructure used to host them. In all patterns described above, metrics and logs are sent to one or more of the following: (1) F5 Beacon, (2) a SIEM of choice, (3) an ELK stack. Best practices for customizing API-related views via any of these visibility solutions will be published later in this DevCentral series. DevOps teams can automate F5 products for integration into the API CI/CD pipeline. As a result, security is no longer a roadblock to delivering APIs at the speed of business. F5 solutions are future-proof, enabling development teams to confidently pivot from one architecture to another.

To complement and extend the security of the above solutions, organizations can leverage the power of F5 Silverline Managed Services to protect their infrastructure against volumetric, DNS, and higher-level denial-of-service attacks. Shape bot protection solutions can also be coupled in to detect and thwart bots, including securing mobile access with the Shape mobile SDK.
NGINX (ingress controller)-F5 integration

Hello Team,

I am working on integrating F5 and NGINX (ingress controller) as per the article below:
https://devcentral.f5.com/s/articles/Better-together-F5-Container-Ingress-Services-and-Nginx-Ingress-Controller-Integration

I have created F5 Container Ingress Services as per the link and have a couple of questions:

- --bigip-username=$(root) --> is this the GUI or CLI username, and does it have to be in brackets, as shown?
- --bigip-password=$(shivshiv) --> is this the GUI or CLI password, and does it have to be in brackets, as shown?
- --bigip-url=https://192.168.178.44:8443 --> does port 8443 need to be specified, or can I just put https://192.168.178.44?

I also added:

envFrom:
- configMapRef:
    name: as3-template

--> do we need to reference the ConfigMap here? It's not part of the YAML file in the above link.

- secretRef:
    name: bigip-login

--> I created this using the imperative model; is it OK as Opaque, or do we need to refer to the kubernetes.io/service-account.name: bigip-ctlr created for the CIS controller?

- --insecure=true (I understand this will allow the session to be established without exchanging certificates - or is this a requirement?)

Once the CIS controller has been created and AS3 defined, I understand I will be able to connect to the F5 and the initial config can be done as specified in AS3. Is my understanding correct?

Also, I have installed the following package - is this a requirement? f5-appsvcs-3.17.1-1.noarch

Most importantly: is this integration supported with open source NGINX, or do we need NGINX Plus as the ingress controller?

Looking forward to the response. Thanks a lot in advance.

Kind Regards,
Kunal
The Power of F5 and NGINX

NGINX Controller 3.0 released

Since the NGINX acquisition, F5 and NGINX have been integrating teams, listening to customers, and planning our first release as a unified company. Now, we have introduced NGINX Controller 3.0, which allows you to manage apps and services across a variety of deployment models, including multi-cloud scenarios. NGINX Controller 3.0 shifts from an infrastructure-centric to an application-centric design, improving developer productivity and accelerating time-to-market for new applications. In this article, learn about core NGINX concepts and explore new NGINX documentation on AskF5.

Core NGINX concepts

Putting your Apps First (5 mins) - Learn how an app-centric delivery platform can increase collaboration, decrease risk, and help you move with speed.
Load Balancing in a Multi-Cloud World (4 mins) - Explore considerations for deploying your applications to multiple clouds.
Managing a Real Time API (4 mins) - Learn the benefits of a lightweight API management platform.
Simplifying the Move to Microservices (5 mins) - Learn about options to successfully deploy microservices and see our six-point checklist to help you determine if you're ready for a service mesh.

New NGINX documentation on AskF5

As the F5 and NGINX engineering teams are releasing products together, engineers from both Support teams and AskF5 are combining forces to produce new documentation. For example, if you want to deploy your BIG-IP LTM system with HTTP load balancing to two NGINX proxies in AWS, see Quick deployment: BIG-IP LTM system with HTTP load balancing to two NGINX Plus web servers in AWS.

More NGINX articles on AskF5:

K74544015: Removing nginx/<version> from HTTP response headers
K82655201: Host OS swap space must be disabled in NGINX Controller 2.8.0 and later
K24214052: NGINX Controller 2.0.0 installation fails when the host OS locale is not UTF-8
K64001240: Enabling NGINX Controller Agent debug logging
K06962163: Resetting the Admin account password on the NGINX Controller system
K30389284: Backing up and restoring the NGINX Controller system
K10640269: Setting nginx-controller as the default Kubernetes namespace
K51798430: Using the proxy_headers_hash_max_size and proxy_headers_hash_bucket_size directives
K03453121: Basic Authentication on the health check request
K21528053: [crit] message in error.log says '24: Too many open files'
K43542013: NGINX returns status '400 Request Header Or Cookie Too Large' or '414 Request-URI Too Large'
K48373902: [warn] message in error log: an upstream response is buffered to a temporary file while reading upstream
K84508595: Different SSL protocols for different servers
K18050039: Enabling client certificate authentication for NGINX
K95305552: How to download or update the GeoIP2 database
K68914062: Displaying a custom 502 response page
K13912623: Configuring a default 'catchall' server
K04600350: Using a common set of directives in the NGINX Plus configuration
K46613025: High Availability solutions available for NGINX Plus in Azure
K42497190: NGINX versions that support Lightweight M2M protocol
K53631303: Capturing HTTP headers of a request in a log file
K95324441: modsec_audit.log dramatically increasing

What new NGINX topics would you like to see on AskF5? Leave your suggestions in the comments.
Cost of API usage

For the F5 API family below, is there any cost for using the API - per API usage, by tiers, by subscription to a program, etc. - outside of the purchase of the products?

https://clouddocs.f5.com/products/big-iq/mgmt-api/v0.0/ApiReferences/bigiq_public_api_ref/r_public_api_references.html
Integrating NGINX Controller API Management with PingFederate to secure financial services API transactions

Introduction

The previous article in the "Securing financial services APIs" series, "Using NGINX Controller API Management Module and NGINX App Protect to secure financial services API transactions", described a setup where NGINX Controller APIm, acting as an OAuth Resource Server, was using F5's APM as an OIDC IdP / OAuth Authorization Server in an OAuth/OIDC authentication flow. The current article explores the integration of NGINX Controller APIm with PingFederate, one of the market-leading identity management solutions, in a similar setup.

Ping Identity has partnered with OBIE (Open Banking Implementation Entity), the body responsible for the UK Open Banking implementation created in response to the EU's PSD2 directive, and as such it acquired a front seat in the development of the Open Banking initiative, one of the most mature examples of a financial services API. Ping Identity technology is also Financial-grade API (FAPI) compliant, supporting the features critical to ensuring higher security for financial API transactions while maintaining a seamless user experience and ease of configuration. Ping Identity's PSD2 & Open Banking technical solution guide can be found here, while this article focuses primarily on the ease of configuring NGINX Controller APIm to interact with the PingFederate solution in a basic financial services API scenario.

Setup

For demo purposes, as a backend banking application we used a server stub generated from UK Open Banking's OpenAPI spec, deployed in a Kubernetes environment, with NGINX App Protect deployed on the Kubernetes Ingress Controller as an API WAF. The API Gateway and API Management functions are implemented by NGINX API Gateway and NGINX Controller APIm, placed in front of the Kubernetes environment. The configuration of all of the above (backend server, NAP/KIC and NGINX APIm) is managed through a CI/CD pipeline configured in GitLab, simulating a modern application development environment.

Authentication and API flow

This demo implements the Authorization Code flow to enable a "domestic payment" transaction. Summarising the steps of the authentication and API flow (refer to the setup diagram above):

1. The user logs into the Third Party Provider application ("client") and creates a new funds transfer
2. The TPP application redirects the user to the OAuth Authorization Server / OIDC IdP - PingFederate
3. The user provides their credentials to PingFederate and gets access to the consent management screen, where the required "payments" scope will be listed
4. If the user agrees to give consent to the TPP client to make payments out of his/her account, PingFederate will generate an authorization code (and an ID Token) and redirect the user to the TPP client
5. The TPP client exchanges the authorization code for an access token and attaches it as a bearer token to the /domestic-payments call sent to the API Gateway
6. The API Gateway authenticates the access token by downloading the JSON Web Keys from PingFederate and grants conditional access to the backend application
7. The Kubernetes Ingress receives the API call and performs WAF security checks via NGINX App Protect
8. The API call is forwarded to the backend server pod

NGINX APIm configuration

In this scenario, NGINX APIm performs the OAuth Resource Server role, where it downloads the JWKs from the OAuth Authorization Server / OIDC IdP (PingFederate) and checks the authenticity of the access token presented in the API call, as illustrated by the sketch below.
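To make the Resource Server behaviour concrete, the following Python sketch shows the kind of check the gateway performs on behalf of the application: fetching the signing key from a JWKS endpoint and validating the bearer token's signature and claims. This is purely illustrative - NGINX performs this natively - and the PingFederate JWKS URL shown is a placeholder, not a value from this lab.

# Illustrative only: validate a JWT access token against a JWKS endpoint,
# approximating what NGINX APIm does when authenticating API calls.
# Requires: pip install pyjwt[crypto]
import jwt
from jwt import PyJWKClient

JWKS_URL = "https://pingfederate.example.com/pf/JWKS"  # placeholder JWKS endpoint

def validate_access_token(token: str) -> dict:
    # Download the JWKs and select the key matching the token's "kid" header
    jwk_client = PyJWKClient(JWKS_URL)
    signing_key = jwk_client.get_signing_key_from_jwt(token)

    # Verify the signature and standard claims; rejects expired or forged tokens
    claims = jwt.decode(token, signing_key.key, algorithms=["RS256"],
                        options={"verify_aud": False})

    # Conditional access: require the "payments" scope, as in the demo policy
    if "payments" not in claims.get("scope", "").split():
        raise PermissionError("Access token lacks the 'payments' scope")
    return claims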
Additionally, it may apply further checks to conditionally grant access to the application - in this demo it checks for the presence of the "payments" scope. The NGINX APIm configuration is straightforward and consists of two steps:

1. Configuring the IdP

Go to Services => Identity Providers and click on Create Identity Provider. Fill in the mandatory parameters Name, Environment and Type (JWT). Enter the JWKs URL location and the caching duration.

2. Configuring the OAuth authorization and conditional access criteria

Go to Services => APIs, select an API Definition and edit the associated Published API. Navigate to Routing and edit the Component to be protected, navigating to Security/Authentication. Select the previously created Identity Provider and optionally enable conditional access. As an example, access is granted if "payments" is one of the scopes found in the access token.

Conclusion

NGINX APIm offers a very simple yet granular way of configuring NGINX API Gateway as an OAuth Resource Server and allows integration with an industry-leading IAM solution, PingFederate, to protect financial services API transactions.

Links

UDF lab environment link.
Using NGINX Controller API Management Module and NGINX App Protect to secure financial services API transactions

As financial services APIs (such as Open Banking) are concerned primarily with managing access to exposed banking APIs, the security aspect has always been of paramount importance. Securing financial services APIs is a vast topic, as security controls are distributed among different functions, such as user authentication at the Identity Provider level, user authorization and basic API security at the API Gateway level, and advanced API security at the WAF level. In this article we will explore how two NGINX products, Controller API Management Module and App Protect, can be deployed to secure the OAuth Authorization Code flow, which is a building block of the access controls used to secure many financial services APIs.

Physical setup

The setup used to support this article comprises NGINX Controller API Management Module, providing API Management functions through an instance of NGINX API Gateway, and NGINX App Protect deployed on a Kubernetes Ingress Controller, providing advanced security for the Kubernetes-deployed demo application, Arcadia Finance. These elements are deployed and configured in an automated fashion using a GitLab CI/CD pipeline. The visualization for NGINX App Protect is provided by NAP dashboards deployed in ELK.

Note: For the purpose of supporting this lab, APM was configured as an OAuth Authorization Server supporting OpenID Connect. Its configuration, along with the implementation details of the third-party banking application (AISP/PISP), acting as an OAuth Client, is beyond the scope of this article.

In an OAuth Authorization Code flow, the PSU (end user) initiates an API request through the Account or Payment Information Services Provider (AISP/PISP application), which first redirects the end user to the Authorization Server. Strong Customer Authentication is performed between the end user and the Authorization Server which, if successful, will issue an authorization code and redirect the user back to the AISP/PISP application. The AISP/PISP application will exchange the authorization code for an ID Token and a JWT access token; the latter will be attached as a bearer token to the initial end-user API request, which will then be forwarded to the API Gateway.

The API Gateway will authenticate the signature of the JWT access token by downloading the JSON Web Key (JWK) from the Authorization Server and may apply further security controls by authorising the API call based on JWT claims and/or applying rate limits. Worth noting here is the security function of the API Gateway, which provides positive security by allowing only calls conforming to published APIs, in addition to authentication and authorization functions. The Web Application Firewall function, represented here by NGINX App Protect deployed on the Kubernetes Ingress Controller (KIC), adds negative security protection, by checking requests against a database of attack signatures, and advanced API security, by validating the API request against the OpenAPI manifest and providing bot detection capabilities. (A generic sketch of the client's code-for-token exchange is included below for reference.)
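The code-for-token exchange performed by the AISP/PISP client is a standard OAuth 2.0 token request. The sketch below shows its general shape in Python; the endpoint URL, client credentials and redirect URI are placeholders, not values from this lab.

# Illustrative OAuth 2.0 Authorization Code exchange (RFC 6749, section 4.1.3).
# All URLs and credentials below are placeholders.
import requests

TOKEN_URL = "https://bank.example.com/as/token.oauth2"

def exchange_code_for_tokens(auth_code: str) -> dict:
    response = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "authorization_code",
            "code": auth_code,
            "redirect_uri": "https://tpp.example.com/callback",
        },
        auth=("tpp-client-id", "tpp-client-secret"),  # client authentication
        timeout=10,
    )
    response.raise_for_status()
    # Expect an access_token (attached as a bearer token to API calls)
    # and, for OIDC, an id_token identifying the end user.
    return response.json()

# The access token is then sent with the API request, e.g.:
# headers={"Authorization": f"Bearer {tokens['access_token']}"}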
Configuration

To configure the NGINX Controller API Management Module, first create an Application by sending a POST request to 'https://{{ my_controller }}/api/v1/services/environments/env_prod/apps' having the following body:

{
  "metadata": {
    "name": "app_api",
    "displayName": "API Application Arcadia",
    "description": "",
    "tags": []
  },
  "desiredState": {}
}

Then create an Identity Provider, pointed at the Authorization Server's JWK endpoint, by sending a PUT request to 'https://{{ my_controller }}/api/v1/security/identity-providers/bank_idp' having the following body:

{
  "metadata": {
    "name": "bank_idp",
    "tags": []
  },
  "desiredState": {
    "environmentRefs": [
      { "ref": "/services/environments/env_prod" }
    ],
    "identityProvider": {
      "type": "JWT",
      "jwkFile": {
        "type": "REMOTE_FILE",
        "uri": "https://bank.f5lab/f5-oauth2/v1/jwks",
        "cacheExpire": "12h"
      }
    }
  }
}

Create an API definition by sending a PUT request to 'https://{{ my_controller }}/api/v1/services/api-definitions/arcadia-api-def/versions/v1' with the following body:

{
  "metadata": {
    "name": "v1",
    "displayName": "arcadia-api-def"
  },
  "desiredState": {
    "specs": {
      "REST": {
        "openapi": "3.0.0",
        "info": {
          "version": "v1",
          "title": "arcadia-api-def"
        },
        "paths": {}
      }
    }
  }
}

Then import the OpenAPI definition by sending a PUT request to 'https://{{ my_controller }}/api/v1/services/api-definitions/arcadia-api-def/versions/v1/import' with the OpenAPI JSON as the request body.

Publish the API definition by sending a PUT request to 'https://{{ my_controller }}/api/v1/services/environments/env_prod/apps/app_api/published-apis/prod-api' with the following body:

{
  "metadata": {
    "name": "prod-api",
    "displayName": "prod-api",
    "tags": []
  },
  "desiredState": {
    "apiDefinitionVersionRef": {
      "ref": "/services/api-definitions/arcadia-api-def/versions/v1"
    },
    "gatewayRefs": [
      { "ref": "/services/environments/env_prod/gateways/gw_api" }
    ]
  }
}

Declare the necessary back-end components (in this example the webapi-kic.nginx-udf.internal Kubernetes workload) by sending a PUT to 'https://{{ my_controller }}/api/v1/services/environments/env_prod/apps/app_api/components/cp_moneytransfer_api' with the following body:

{
  "metadata": {
    "name": "cp_moneytransfer_api",
    "displayName": "cp_moneytransfer_api",
    "tags": []
  },
  "desiredState": {
    "ingress": {
      "uris": {
        "/api/rest/execute_money_transfer.php": {
          "php": {
            "get": {
              "description": "Send money to a friend",
              "parameters": [
                {
                  "in": "body",
                  "name": "body",
                  "required": true,
                  "schema": { "type": "object" }
                }
              ],
              "responses": {
                "200": { "description": "200 response" }
              }
            },
            "matchMethod": "EXACT"
          }
        }
      },
      "gatewayRefs": [
        { "ref": "/services/environments/env_prod/gateways/gw_api" }
      ]
    },
    "backend": {
      "ntlmAuthentication": "DISABLED",
      "preserveHostHeader": "DISABLED",
      "workloadGroups": {
        "wl_mainapp_api": {
          "loadBalancingMethod": { "type": "ROUND_ROBIN" },
          "uris": {
            "http://webapi-kic.nginx-udf.internal:30276": {
              "isBackup": false,
              "isDown": false,
              "isDrain": false
            }
          }
        }
      }
    },
    "programmability": {
      "requestHeaderModifications": [
        {
          "action": "DELETE",
          "applicableURIs": [],
          "headerName": "Host"
        },
        {
          "action": "ADD",
          "applicableURIs": [],
          "headerName": "Host",
          "headerValue": "k8s.arcadia-finance.io"
        }
      ]
    },
    "logging": {
      "errorLog": "DISABLED",
      "accessLog": { "state": "DISABLED" }
    },
    "security": {
      "rateLimits": {
        "policy_1": {
          "rate": "5000r/m",
          "burstBeforeReject": 0,
          "statusCode": 429,
          "key": "$binary_remote_addr"
        }
      },
      "conditionalAuthPolicies": {
        "policy_1": {
          "action": "ALLOW",
          "comparisonType": "CONTAINS",
          "comparisonValues": [ "Payment" ],
"sourceType": "JWT_CLAIM", "sourceKey": "scope", "denyStatusCode": 403 } }, "identityProviderRefs": [ { "ref": "/security/identity-providers/bank_idp" } ], "jwtClientAuth": { "keyLocation": "BEARER" } }, "publishedApiRefs": [ { "ref": "/services/environments/env_prod/apps/app_api/published-apis/prod-api" } ] } } Note the 'security' block, specifying the JWT authentication, the Identity Provider from where to download the JWK, the authorization check applied on each request and the rate limit policy. The configuration used to deploy NGINX App Protect on the Kubernetes Ingress Controller can be consulted here. Summary In this article we showed how NGINX Controller API Management Module and NGINX App Protect can be deployed to protect API calls as part of the OAuth Authorization Code flow which is a basic flow used to control the access to many financial services APIs. Links UDF lab environment link.1.6KViews1like0CommentsAutomating NGINX Controller Installation on Azure with Pulumi
Automating NGINX Controller Installation on Azure with Pulumi

Introduction

Cloud infrastructure definition and application installation, when treated as code, provide awesome benefits. They allow application setup to be stored as a versionable artifact, provide documentation of how it was built, and are repeatable. Moreover, when such a definition is truly code and not just a set of configuration files, it can provide flexible integrations between the application layer and the infrastructure layer, or between a particular cloud and external services. In this article, we will use Pulumi with Azure to automate the install of NGINX Controller to illustrate these principles. Pulumi is an infrastructure automation tool like Terraform, but differs in that it allows infrastructure to be defined as code rather than configuration files. Infrastructure can be defined in JavaScript, TypeScript, Python, Go, or any .NET language. Custom functions, scripting, external references or remote invocations are all possible and can cleanly integrate into the infrastructure definition. Conveniently, Pulumi offers a conversion path that allows many Terraform providers to be consumed in Pulumi, and all the major cloud providers are supported.

Controller Install Overview

A few resources must be present before installing NGINX Controller on the cloud:

- TLS certificates for the domain name to be associated with Controller
- Public IP address
- Virtual network and subnet interfaces
- Firewall rules (opening ports 22, 80, 443, 8443 to the Controller VM)
- SMTP server (installation will work with bogus settings, but email will not work)
- Data disk for storing OLAP data and optionally configuration data
- Optional external PostgreSQL configuration database
- VM instance setup: secrets, required libraries, data disk partition

Using Pulumi we can automate the setup of all of the above resources apart from the SMTP server (which can be configured easily through the Azure portal). The resulting infrastructure, when visualized, is relatively simple. The Controller VM connects to the (optional) Azure PostgreSQL database via a private network link. Users of Controller connect directly to the Controller VM using the Azure-provided DNS, secured by Let's Encrypt TLS certificates.

Deployment Environment

DNS

Controller will be configured to automatically use the DNS name controller-<installation_id>.<azure region>.cloudapp.azure.com. This DNS entry is assigned to the Azure VM instance by default upon provisioning. The installation_id portion of the DNS entry is specified as part of your configuration. It is a unique id that is used throughout the installation. After the instance is created, it will automatically be assigned a TLS certificate using Let's Encrypt, so your Controller instance will be ready to go with a valid certificate from the start. If you need a different DNS entry assigned to Controller, you will need to modify the installation script to work for your particular environment.

PostgreSQL Configuration Database

The configuration database for Controller can be installed on the same instance as Controller, or alternatively it can be installed in an external database. When installed locally, the total resource requirements of the Controller instance will increase, but this may be acceptable for trials or deployments with low utilization. Alternatively, the configuration database can be configured to use the Azure PostgreSQL Database service.
This is a service offering from Azure that provides automated management of your database, allowing for ease in scaling, backups, and performance tuning.

SMTP Configuration

The installation automation provided in this project does not install an email server. In order to receive emails from Controller about password resets or alerts, you will need to have an SMTP server set up that can be reached from Controller. An easy way to set this up is to use the SendGrid service on Azure. Using the Azure GUI, you can quickly have an SMTP server online that will work with Controller.

Getting Started

Download NGINX Controller

Go to MyF5 and log in or sign up for a new account. Follow the prompts to start an NGINX Controller trial if you are trialing. Then download the latest Controller installer and copy it into the installer-archives directory under the azure-pulumi directory. Also, be sure to note the association token provided in the MyF5 portal, because you will need it when you use Controller for the first time.

Run and configure Pulumi

Choose the getting started method that works best for you from the configuration and run directions. If you want to get going quickly and are running Linux or macOS, you may find the quick start script or the Docker container approaches to be the easiest. A small illustration of the infrastructure-as-code approach follows below.
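To illustrate the "infrastructure as real code" point, here is a minimal Pulumi Python sketch that creates the firewall rules listed earlier with an ordinary loop, something configuration-file formats cannot express as cleanly. The resource names and the choice of the pulumi-azure-native provider are our assumptions for illustration, not the exact code of this project.

# Hedged sketch: Pulumi (Python) opening the Controller ports with a loop.
# Module and resource names assume the pulumi-azure-native package; the
# project itself may structure this differently.
import pulumi_azure_native as azure

rg = azure.resources.ResourceGroup("controller-rg")

# Ports 22, 80, 443 and 8443 must be reachable on the Controller VM;
# generating the rules in code keeps the list in one place.
rules = [
    azure.network.SecurityRuleArgs(
        name=f"allow-{port}",
        priority=100 + i,          # priorities must be unique
        direction="Inbound",
        access="Allow",
        protocol="Tcp",
        source_port_range="*",
        destination_port_range=str(port),
        source_address_prefix="*",
        destination_address_prefix="*",
    )
    for i, port in enumerate([22, 80, 443, 8443])
]

nsg = azure.network.NetworkSecurityGroup(
    "controller-nsg",
    resource_group_name=rg.name,
    security_rules=rules,
)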
Integrating NGINX Controller to CICD Pipeline

Introduction

Assume an application development team delivers an application. The application consists of two components: a frontend and an API. Both of them have to be exposed to the world. While the frontend part is fairly static and has just a few URIs, the API URI layout changes often. NGINX Controller provides a rich tool set to simplify application delivery, whether it is an API or a generic app. This article covers a way to integrate NGINX Controller into a CICD pipeline to speed up application publishing even further.

Architecture and Prerequisites

The architecture is pretty typical for a Controller deployment. NGINX Controller manages two NGINX Plus gateways. They in turn publish an application to the world. As a demo application I use Httpbin. This app consists of two components: a frontend and an API. The URI structure is flat. The one-pager frontend is available right behind "/". The API endpoints' base path is "/" as well, so endpoints are ("/ip", "/uuid", "/get", and so on). The way this app is deployed is not important for this article. It is deployed and only reachable from the gateways. Everything you see in the picture above is a prerequisite: NGINX Controller, NGINX Plus and the application are deployed, gateways are registered on the Controller, and the Controller user for GitLab has admin permissions.

NGINX Controller Configuration Abstractions

Controller represents gateway configuration as a set of abstractions. The picture below displays the abstraction dependencies.

- Environment: a logical group of configuration entities which publish an app together. Entities from different environments can not be mixed.
- Application: a logical representation of a real application. Each application can include multiple components. E.g., the Httpbin application consists of frontend and API components.
- Component: contains lists of URIs, a list of backend servers (workload groups) and routes defining where to forward a request for a particular URI.
- Gateway: represents a virtual server which matches requests with certain hostnames.

A published application ties together a gateway to listen for requests and one or more components to define routes to backend servers.

Controller Integration to CICD Pipeline

NGINX Controller provides a REST API which allows it to be integrated with any kind of CICD platform. The integration aims to automatically publish both the frontend and API application components. The following pipeline implements application publishing in three stages. The first stage, "create-env", creates common configuration abstractions for both components. The other two, "deploy-frontend" and "deploy-api", publish the frontend and API components respectively.

Note: Each stage has an "only" directive. It reduces pipeline execution time by executing only stages whose configuration has changed.

A script for each stage has only one command. It executes an Ansible playbook which contains all the steps to reach the desired state.

Note: Playbooks use the official Ansible collection to communicate with Controller. This approach provides much cleaner code than raw curl use.
image: alpine:3.12.1

variables:
  CONTROLLER_FQDN: ctr.f5-demo.com

stages:
  - create-env
  - deploy-frontend
  - deploy-api

default:
  before_script:
    - apk add ansible~=2.9.14
    - ansible-galaxy collection install nginxinc.nginx_controller

create-env:
  stage: create-env
  script:
    - ansible-playbook playbooks/common.yml
  only:
    changes:
      - playbooks/common.yml
      - .gitlab-ci.yml

deploy-frontend:
  stage: deploy-frontend
  script:
    - ansible-playbook playbooks/frontend.yml
  only:
    changes:
      - playbooks/frontend.yml
      - .gitlab-ci.yml

deploy-api:
  stage: deploy-api
  script:
    - ansible-playbook playbooks/api.yml
  only:
    changes:
      - playbooks/api.yml
      - .gitlab-ci.yml

The playbook from the "create-env" stage creates the common configuration entities (see the "NGINX Controller Configuration Abstractions" section): environment, application and gateway. The playbook code is self-explanatory. The "nginx_controller_generate_token" role obtains login credentials from Controller. The other three roles create the actual configuration entities.

- hosts: localhost
  gather_facts: no
  collections:
    - nginxinc.nginx_controller
  vars:
    env_name: prod
    app_name: httpbin
  roles:
    - role: nginx_controller_generate_token
      vars:
        nginx_controller_user_email: "{{ lookup('env', 'CONTROLLER_USER') }}"
        nginx_controller_user_password: "{{ lookup('env', 'CONTROLLER_PASSWORD') }}"
        nginx_controller_fqdn: "{{ lookup('env', 'CONTROLLER_FQDN') }}"
        nginx_controller_validate_certs: false
    - role: nginx_controller_environment
      vars:
        nginx_controller_environment:
          metadata:
            name: "{{ env_name }}"
    - role: nginx_controller_gateway
      vars:
        nginx_controller_environmentName: "{{ env_name }}"
        nginx_controller_gateway:
          metadata:
            name: "{{ app_name }}"
          desiredState:
            ingress:
              uris:
                "http://nplus.httpbin.f5-demo.com": {}
              placement:
                instanceRefs:
                  - ref: "/infrastructure/locations/unspecified/instances/ip-10-4-96-225.us-west-2.compute.internal"
                  - ref: "/infrastructure/locations/unspecified/instances/ip-10-4-96-90.us-west-2.compute.internal"
    - role: nginx_controller_application
      vars:
        nginx_controller_environmentName: "{{ env_name }}"
        nginx_controller_app:
          metadata:
            name: "{{ app_name }}"

The playbook from the "deploy-frontend" stage publishes the frontend component. The structure is similar to the previous one. The first role logs in to Controller. The "nginx_controller_component" role creates an application component which represents the gateway configuration to publish the application frontend.

- hosts: localhost
  gather_facts: no
  collections:
    - nginxinc.nginx_controller
  vars:
    env_name: prod
    app_name: httpbin
  roles:
    - role: nginx_controller_generate_token
      vars:
        nginx_controller_user_email: "{{ lookup('env', 'CONTROLLER_USER') }}"
        nginx_controller_user_password: "{{ lookup('env', 'CONTROLLER_PASSWORD') }}"
        nginx_controller_fqdn: "{{ lookup('env', 'CONTROLLER_FQDN') }}"
        nginx_controller_validate_certs: false
    - role: nginx_controller_component
      vars:
        nginx_controller_environmentName: "{{ env_name }}"
        nginx_controller_appName: "{{ app_name }}"
        nginx_controller_component:
          metadata:
            name: frontend
            displayName: "Frontend"
            description: "Frontend for {{ app_name }} API"
          desiredState:
            ingress:
              uris:
                "/": {}
              gatewayRefs:
                - ref: "/services/environments/{{ env_name }}/gateways/{{ app_name }}"
            backend:
              workloadGroups:
                group1:
                  uris:
                    "http://10.4.113.213:30445": {}
              monitoring:
                response:
                  status:
                    range:
                      startCode: 200
                      endCode: 201
                    match: true

The playbook from the "deploy-api" stage publishes the API application component. Unlike the frontend, which has only one static URI to publish (forward), the API component must handle a number of API URI "endpoints".
To avoid manual input, and in the interest of a single source of truth, Controller can import all URIs from an OpenAPI file. The "nginx_controller_api_definition_import" role reads the OpenAPI file and imports all endpoints as a new API version (v1). The following roles create a published API out of the version and reference it from a component.

- hosts: localhost
  gather_facts: no
  collections:
    - nginxinc.nginx_controller
  vars:
    env_name: prod
    app_name: httpbin
  roles:
    - role: nginx_controller_generate_token
      vars:
        nginx_controller_user_email: "{{ lookup('env', 'CONTROLLER_USER') }}"
        nginx_controller_user_password: "{{ lookup('env', 'CONTROLLER_PASSWORD') }}"
        nginx_controller_fqdn: "{{ lookup('env', 'CONTROLLER_FQDN') }}"
        nginx_controller_validate_certs: false
    - role: nginx_controller_api_definition_import
      vars:
        nginx_controller_api_definition_version: v1
        nginx_controller_api_definition_name: "{{ app_name }}"
        nginx_controller_api_definition: "{{ lookup('file', '../httpbin.openapi.json') }}"
    - role: nginx_controller_publish_api
      vars:
        nginx_controller_environment: "{{ env_name }}"
        nginx_controller_application: "{{ app_name }}"
        nginx_controller_publish_api:
          metadata:
            name: "v1"
            displayName: "v1"
          desiredState:
            basePath: "/api"
            stripWorkloadBasePath: true
            apiDefinitionVersionRef:
              ref: "/services/api-definitions/httpbin/versions/v1"
            gatewayRefs:
              - ref: "/services/environments/{{ env_name }}/gateways/{{ app_name }}"
    - role: nginx_controller_component
      vars:
        nginx_controller_environmentName: "{{ env_name }}"
        nginx_controller_appName: "{{ app_name }}"
        nginx_controller_component:
          metadata:
            name: api
            displayName: "API"
            description: "{{ app_name }} API"
          desiredState:
            ingress:
              uris:
                "/ip":
                  matchMethod: EXACT
              gatewayRefs:
                - ref: "/services/environments/{{ env_name }}/gateways/{{ app_name }}"
            backend:
              workloadGroups:
                group1:
                  uris:
                    "http://10.4.113.213:30445": {}
              monitoring:
                response:
                  status:
                    range:
                      startCode: 200
                      endCode: 201
                    match: true
            publishedApiRefs:
              - ref: "/services/environments/{{ env_name }}/apps/{{ app_name }}/published-apis/v1"

Once all stages end, Controller transforms all directives into NGINX Plus configuration and pushes it down to the gateways, thereby publishing both the frontend and API application components to the world.

Hope this is helpful. Feel free to reach me with questions and concerns.

Repository: https://gitlab.com/464d41/deploy-httpbin-via-nginx-controller
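As an optional final pipeline stage (not part of the repository above - purely a hedged suggestion), a short script can verify that the gateways actually serve the published components before the pipeline is declared green. The hostname mirrors the gateway definition shown earlier, and the /api/ip path follows from the published API's basePath plus the component URI.

# Hypothetical post-deploy smoke test for the published Httpbin components.
# Add this as an extra pipeline stage only if it fits your workflow.
import sys
import requests

BASE = "http://nplus.httpbin.f5-demo.com"

checks = {
    "frontend": f"{BASE}/",        # frontend component behind "/"
    "api":      f"{BASE}/api/ip",  # API component behind basePath "/api"
}

failed = False
for name, url in checks.items():
    try:
        status = requests.get(url, timeout=5).status_code
    except requests.RequestException as exc:
        print(f"{name}: request failed ({exc})")
        failed = True
        continue
    print(f"{name}: {url} -> {status}")
    failed = failed or status != 200

sys.exit(1 if failed else 0)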
NGINX Controller Image Building Automation

Proper application deployment assumes that all stages are automated, to eliminate the human factor and make all operations as quick and smooth as possible. NGINX Controller is an application - a mission-critical application - which requires 100% automation. This should start from the very bottom, at Controller image creation, and cover all other NGINX Controller operations. This article presents a small aspect of the NGINX Controller automation process: automated creation of an NGINX Controller image for use in a public cloud.

Production application deployment assumes that an administrator always knows exactly what code is currently running. In the case of commercial software, it is often impossible to take a specific code version directly from a repo and deploy it on a server, because software is usually distributed as a package. However, a package may become the immutable asset to build an image on top of. The concept behind this project is to take the NGINX Controller package and use a CICD pipeline to describe the automation steps as code. Such an approach allows all procedures and parameters for NGINX Controller installation and image creation to be statically defined.

The diagram below explains the building workflow components and how they interact with each other. As you can see, everything starts from the CICD platform, which stores the image automation pipeline code. When the pipeline gets triggered by a code commit, it uses the HashiCorp Packer tool to create a temporary AWS EC2 instance, installs the NGINX Controller software there, and then converts it to an AMI image.

Writing a pipeline is always challenging. However, there is a GitLab project which already has the pipeline defined (link). In case you need to build your own image, just clone the template, set the variables, and let the pipeline do the rest. The example below gives step-by-step instructions.

Step 1. Import the template repo to a brand new GitLab project.
Step 2. Set variables. Secrets go in as CICD environment variables; all others go into ".env", directly in the code.
Step 3. Once changes are submitted, the pipeline starts and builds an image using the HashiCorp Packer tool. If the pipeline succeeds, you can find the NGINX Controller image in the list of AWS AMIs.

Such an approach formalizes NGINX Controller image creation, making each image an immutable piece of infrastructure and eliminating inconsistencies. Each build gets tagged with a commit hash, so you know exactly how the Controller installation is set up (see the lookup sketch below). This project doesn't stop at the AWS implementation - other public clouds are on the roadmap. If you are interested in expanding the project to other clouds, please contact me (m.fedorov@f5.com) and join development. Good luck!
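For example, if each AMI is tagged with the commit hash that produced it, tooling further down the chain can resolve the exact image for a given build. The tag key used below is an illustrative assumption, not taken from the project; adjust it to whatever the pipeline actually sets.

# Hypothetical lookup of the AMI built from a specific commit.
# Assumes the Packer build tags images with a "commit" tag (an assumption).
import boto3

def find_controller_ami(commit_sha: str, region: str = "us-west-2") -> str:
    ec2 = boto3.client("ec2", region_name=region)
    images = ec2.describe_images(
        Owners=["self"],
        Filters=[{"Name": "tag:commit", "Values": [commit_sha]}],
    )["Images"]
    if not images:
        raise LookupError(f"No NGINX Controller AMI tagged with commit {commit_sha}")
    # Pick the most recent image if several match
    latest = max(images, key=lambda img: img["CreationDate"])
    return latest["ImageId"]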
Delivery and Protection of Geographically Distributed API

Introduction

Public-facing applications have globally distributed users. In order to deliver low latency, the APIs that the application front ends consume need to be globally distributed as well. CDNs are used to deliver static content to globally distributed users, but they have limited efficacy in providing access to API endpoints that offer dynamic responses as requests come in. This article shows how to achieve low-latency, secure delivery of API endpoints using F5 Global Server Load Balancing and Essential App Protect from F5 Cloud Services, and how to publish, authenticate and authorize API calls using NGINX Controller and NGINX Plus instances. The article shows how to achieve central management for APIs using NGINX Controller, and easy security management of the API via F5 Cloud Services.

The picture below shows an architecture that I set up as an example for delivery of a distributed API. This architecture consists of three main layers: 1) Global Server Load Balancer (GSLB) Cloud Service for source-aware traffic steering, 2) Essential App Protect (EAP) for application protection, and 3) NGINX Controller for centralized API delivery.

The traffic journey starts with the user making a DNS query for app.company.test. GSLB, as the authoritative DNS server, returns the nearest EAP endpoint location, which in turn forwards traffic to the application instance in the same region to keep latency low. Lastly, an NGINX Plus gateway, centrally managed by NGINX Controller, load balances traffic between local application servers. As any production-ready architecture should be, this architecture is fully redundant and doesn't contain any single point of failure. If any hop in the chain goes down, traffic gets redirected to the secondary available path. This is true for both the cloud services and the NGINX instances.

GSLB

What is GSLB and why is it here? The GSLB solution is offered in the cloud as a SaaS. It provides automatic failover, load balancing across multiple locations, increased reliability by avoiding a single point of failure, and increased performance by directing traffic to the nearest application instance. It does this by monitoring application availability and performance, and then responding dynamically to client DNS queries in order to direct traffic based on rules that you define. The architecture uses GSLB to direct users to the nearest Essential App Protect endpoint, which offers security for the traffic. When a DNS query arrives at GSLB, it reads the user's source IP, identifies the region this source belongs to, and returns a response from a pool associated with that particular source region. Therefore, regardless of the user's geographical location, GSLB sends user traffic to the nearest EAP endpoint to keep latency low.

The picture below shows the way GSLB routes traffic to the appropriate EAP instance. At step 1, the user makes a DNS query for api.company.test. GSLB, as the authoritative DNS server, returns a CNAME-type record with the domain name of the EAP endpoint. At step 2, the user resolves the domain name of the Essential App Protect endpoint and gets the Essential App Protect gateway IP addresses as the destination for data traffic. Essential App Protect accepts connections from all users on the same gateway IPs and differentiates traffic based on the HTTP Host header in the request. In our example, at this stage HTTP traffic with host header api.company.test is further examined by Essential App Protect. The sketch below illustrates this two-step resolution from a client's point of view.
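As an aside, the two-step resolution can be observed from any client with a resolver library. The sketch below uses dnspython; the domain names are the illustrative ones from this article and won't resolve outside the demo environment.

# Illustrative check of the GSLB resolution chain using dnspython
# (pip install dnspython). Domain names are demo placeholders.
import dns.resolver

QNAME = "api.company.test"

# Step 1: GSLB answers with a CNAME pointing at the nearest EAP endpoint
cname_answer = dns.resolver.resolve(QNAME, "CNAME")
eap_endpoint = str(cname_answer[0].target)
print(f"{QNAME} -> CNAME {eap_endpoint}")

# Step 2: the EAP endpoint name resolves to the gateway IP addresses
for rr in dns.resolver.resolve(eap_endpoint, "A"):
    print(f"{eap_endpoint} -> A {rr.address}")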
EAP

What is EAP and why is it here? Essential App Protect (EAP) is the first line of defense for API endpoints. It intercepts all traffic destined for the API endpoint, performs full-featured WAF inspection, and forwards only clean traffic to the backend. EAP, as a mature cloud service, is available in multiple regions around the world. Regardless of where a user is located, EAP can accept user traffic with minimal latency and then forward it towards the nearest backend. Obviously, in order to keep latency low for all end users, the application owner should have an EAP endpoint and an API instance deployed in every target region.

NGINX Controller

What is NGINX Controller and why is it here? NGINX Controller is the cloud-agnostic application delivery platform which reduces deployment time, supports seamless integration with CI/CD tools, and provides clear insights into performance metrics. NGINX Controller performs central management of the NGINX Plus gateways that are geographically distributed and sit in front of every API instance. Just like every other element in the architecture, NGINX Controller is also deployed in HA mode. NGINX Plus instances talk to the Controllers via load balancers which forward traffic to the active Controller. Data plane reliability is ensured by NGINX Plus HA in every region. It is important to note that doubling the NGINX Plus instances doesn't cause an additional management burden for the application development team: NGINX Controller pushes exactly the same configuration to all gateway instances.