nginx open source
2022 DevCentral MVP Announcement
Congratulations to the 2022 DevCentral MVPs! Without users who take time from their busy days to share their experience and knowledge with others, DevCentral would be more of a corporate news site than an actual user community. To that end, the DevCentral MVP Award is given annually to an outstanding group of individuals: the experts in the technical F5 user community who go out of their way to engage with fellow users. The award is our way of recognizing their significant contributions, because while all of our users collectively make DevCentral one of the top community sites around and a valuable resource for everyone, MVPs regularly go above and beyond in assisting fellow F5 users.

We understand that 2021 was difficult for everyone, and we are extra grateful to this year's MVPs for going out of their way to help others. MVPs get badges on their DevCentral profiles so everyone can see that they are recognized experts. This year's MVPs will receive a glass award, a certificate, exclusive thank-you gifts, and invitations to exclusive webinars and behind-the-scenes looks at things like roadmaps, new product sneak previews, and innovative concepts in development.

The 2022 DevCentral MVPs are:

Aditya K Vlogs, AlexBCT, Amine_Kadimi, Austin_Geraci, Boneyard, Daniel_Wolf, Dario_Garrido, David.burgoyne, Donamato 01, Enes_Afsin_Al, FrancisD, iaine, jaikumar_f5, Jim_Schwartzme1, JoshBecigneul, JTLampe, Kai Wilke, Kees van den Bos, Kevin_Davies, Lionel Deval (Lidev), LouisK, Mayur_Sutare, Neeeewbie, Niels_van_Sluis, Nikoolayy1, P K, Patrik_Jonsson, Philip Jönsson, Rob_Carr, Rodolfo_Nützmann, Rodrigo_Albuquerque, Samstep, SanjayP, ScottE, Sebastian Maniak, Stefan_Klotz, StephanManthey, Tyler.Hatton

Lightboard Lessons: HTTP Cookie SameSite Attribute
In this episode of Lightboard Lessons, Jason covers the SameSite attribute on HTTP cookies and the implications for site developers and end users when Chrome begins enforcing a default behavior of "lax" later this month in a limited rollout for Chrome v80 stable users. This should be addressed in the applications themselves, but BIG-IP can help via iRules and local traffic policies as briefly described in the video, as well as through ASM module settings and NGINX directives.

Resources

Start Here:
Article: Handling Incompatible Clients
AskF5 Knowledge Article on SameSite enforcement: K03346798
Article: Increased Security with First Party Cookies

Additional iRule options:
Codeshare: Setting SameSite on LTM Persistence Cookies
Codeshare: Setting SameSite on All Web App & BIG-IP Cookies
Codeshare: Add SameSite Attribute to APM Cookies
Article: Detecting SameSite=None Incompatible Browsers

ASM & NGINX Configuration Options:
ASM Manual info on SameSite
NGINX proxy_cookie_path (Example: proxy_cookie_path / "/; secure; HttpOnly; SameSite=none";)
NGINX sticky cookie (Example: sticky cookie srv_id expires=1h httponly secure "path=/; SameSite";)

Industry Insight:
Chromium Updates on SameSite (offsite)
CSRF is (really) dead (offsite)
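For readers who want to see the proxy_cookie_path option from the resource list above in context, here is a minimal, hedged sketch of a reverse-proxy configuration; the server name, upstream name, and certificate paths are assumptions added purely for illustration and are not part of the original lesson:

# Hypothetical file under /etc/nginx/conf.d/, included in the http context
server {
    listen 443 ssl;
    server_name www.example.com;                        # hypothetical site
    ssl_certificate     /etc/nginx/certs/example.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/certs/example.key;

    location / {
        proxy_pass http://backend_app;                  # hypothetical upstream

        # Rewrite the cookies set by the backend so they carry the Secure,
        # HttpOnly, and explicit SameSite=None attributes.
        proxy_cookie_path / "/; secure; HttpOnly; SameSite=none";
    }
}

Note that the Chrome rollout described above rejects SameSite=None cookies that are not also marked Secure, which is why the two attributes appear together here.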
What is NGINX?

Introduction

NGINX started out as a high-performance web server and quickly expanded, adding more functionality in an integrated manner. Put simply, NGINX is an open source web server, reverse proxy server, cache server, load balancer, media server and much more. The enterprise version of NGINX adds exclusive production-ready features on top of what's available, including status monitoring, active health checks, a configuration API, and a live dashboard for metrics.

Think of this article as a quick introduction to each product but, more importantly, as our placeholder for NGINX articles on DevCentral. If you're interested in NGINX, you can use this article as the place to find DevCentral articles broken down by functionality in the near future. By the way, this article also links to a number of interesting articles published on AskF5 and some introductory NGINX videos.

NGINX as a Webserver

This is the most basic use case of NGINX. It can handle hundreds of thousands of requests simultaneously by using an event-driven architecture (as opposed to a process-driven one) to handle multiple requests within one thread.

NGINX as a Reverse Proxy and Load Balancer

Both NGINX and NGINX+ provide load balancing functionality and work as a reverse proxy by sitting in front of back-end servers. Similar to F5, traffic comes in and NGINX load balances the requests across different back-end servers. The NGINX Plus version can even do session persistence and health check monitoring.

Published Content:
Server monitoring - some differences between BIG-IP and NGINX

NGINX as Caching Server

NGINX content caching improves efficiency, availability and capacity of back-end servers. When caching is on, NGINX checks if content exists in its cache and, if that's the case, the content is served to the client without the need to contact the back-end server. Otherwise, NGINX reaches out to the back-end server to retrieve the content. A content cache sits between a client and back-end server and saves copies of pre-defined cacheable content. Caching improves performance because, strategically, the content cache is supposed to be closer to the client. It also has the benefit of offloading requests from back-end servers.

NGINX Controller

NGINX Controller is a piece of software that centralises and simplifies configuration, deployment and monitoring of NGINX Plus instances acting as load balancers, API gateways and even web servers. By the way, NGINX Controller 3.0 has just been released.

Published Content:
Introducing NGINX Controller 3.0
Setting up NGINX Controller
Use of NGINX Controller to Authenticate API Calls
Publishing an API using NGINX Controller

NGINX as Kubernetes Ingress Controller

The NGINX Kubernetes Ingress Controller is software that manages all Kubernetes ingress resources within a Kubernetes cluster. It monitors and retrieves all ingress resources running in a cluster and configures the corresponding L7 proxy accordingly. There are two versions of the NGINX Ingress Controller: one is maintained by the community and the other by NGINX itself.

Published Content:
Lightboard Lesson: NGINX Kubernetes Ingress Controller Overview

NGINX as API Gateway

An API gateway is a way of abstracting application service interactions from the client by providing a single entry point into the system. Clients may issue a simple request to the application, for example, requesting some information about a specific product. In the background, the API gateway may contact several different services to bundle up the information requested and fulfil the client's request.
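As a rough, hypothetical illustration of that single entry-point idea (the upstream names, addresses and URI prefixes below are assumptions, not part of the original article), an NGINX configuration can route different parts of one public API to separate back-end services:

# Hypothetical file under /etc/nginx/conf.d/, included in the http context
upstream product_service   { server 10.0.0.11:8080; }   # hypothetical service
upstream inventory_service { server 10.0.0.12:8080; }   # hypothetical service

server {
    listen 80;
    server_name api.example.com;

    # Single public entry point; each URI prefix is proxied to its own service.
    location /products/ {
        proxy_pass http://product_service;
    }
    location /inventory/ {
        proxy_pass http://inventory_service;
    }
}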
The NGINX API management module for NGINX Controller can handle request routing and composition, apply rate limiting to prevent overloading, offload TLS traffic to improve performance, and provide authentication as well as real-time monitoring and alerting.

NGINX as Application Server (Unit)

NGINX Unit provides all sorts of functionality to integrate applications and even to migrate and split services out of older monolithic applications. A key feature of Unit is that we don't need to reload processes once they're reconfigured; Unit only changes the part of memory associated with the changes we made. In later versions, NGINX Unit can also serve as an intermediate node within a web framework, accepting all kinds of traffic, maintaining dynamic configuration and acting as a reverse proxy for back-end servers.

NGINX as WAF

NGINX uses the ModSecurity module to protect applications from L7 attacks (a minimal configuration sketch follows after the next section).

NGINX as Sidecar Proxy Container

We can also use NGINX as a sidecar proxy container in a service mesh architecture deployment (e.g. using Istio with NGINX as the sidecar proxy container). A service mesh is an infrastructure layer that is supposed to be configurable and fast for the purposes of network-based interprocess communication using APIs. NGINX can be configured as a sidecar proxy to handle inter-service communication, monitoring and security-related features. This is a way of ensuring developers only handle development, support and maintenance while platform engineers (the ops team) handle the service mesh maintenance.
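As referenced in the WAF section above, here is a hedged sketch of what enabling the ModSecurity module can look like; the module path, rule file path and backend address are assumptions that depend on how the ModSecurity-nginx connector was built and where the ruleset lives:

# Loaded in the main context of nginx.conf (path is an assumption)
load_module modules/ngx_http_modsecurity_module.so;

# Inside the http context, e.g. a file under /etc/nginx/conf.d/
server {
    listen 80;

    location / {
        # Turn on the ModSecurity engine and load a ruleset, for example
        # a main.conf that includes the OWASP Core Rule Set.
        modsecurity on;
        modsecurity_rules_file /etc/nginx/modsec/main.conf;

        proxy_pass http://127.0.0.1:8080;   # hypothetical backend application
    }
}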
NGINX as an HTTP Load Balancer

Quick Intro

You've probably used NGINX as a web server, and you might be wondering how it works as a load balancer. We don't need a different NGINX file or a different package to make it a load balancer; NGINX works as a load balancer out of the box as long as we specify that we want it to. In this article, we're going to introduce the NGINX HTTP load balancer. If you need more details, please refer to the official NGINX documentation. NGINX also supports TCP and UDP load balancing and health monitors, which are not covered here.

NGINX as a WebServer by default

Once NGINX is installed, it works as a web server by default, answering requests with its default page. That response comes from our client-facing NGINX load-balancer-to-be, which is currently just a web server. Let's make it an HTTP load balancer, shall we?

NGINX as a Load Balancer

Disable WebServer

Let's comment out the default file in /etc/nginx/sites-enabled to disable our local web server, just in case, and then reload the config. If we try to connect now, it won't work, because we disabled the default page. Now we're ready to create the load balancer's file!

Creating Load Balancer's file

Essentially, the default NGINX config file (/etc/nginx/nginx.conf) already has an http block which references the /etc/nginx/conf.d directory. With that in mind, we can create our file in /etc/nginx/conf.d/ and whatever is in there will be in the http context. Within the upstream directive, we add our back-end servers along with the port they're listening on; these are the servers the NGINX load balancer will forward requests to. Lastly, we create a server config for our listener with proxy_pass pointing to the upstream name (backends). We then reload NGINX again. (A sketch of such a file appears after the load balancing methods below.)

Lab tests

NGINX has a couple of different load balancing methods, and round robin (potentially weighted) is the default one.

Round robin test

Issuing four requests in a row, the first three each land on a different back-end server, and the fourth goes back to server 1. That's because the default load balancing method for NGINX is round robin.

Weight test

Now I've added weight=2 to server 2, and I expect that server 2 will proportionally take 2x more requests than the rest of the servers. Once again, I reload the configuration after the changes. This time the first request goes to server 3, the next to server 2, then again to server 2, and lastly to server 1.

Administratively shutting down a server

We can also administratively shut down server 2, for example for maintenance, by adding the down keyword to its entry. When we issue requests now, they only go to server 1 or server 3.

Other Load Balancing methods

Least connections

Least connections sends requests to the server with the least number of active connections, taking into consideration any optionally configured weight. If there's a tie, round robin is used, again taking into account the optionally configured weights. For more information on the least connections method, please click here.

IP Hash

The request is sent to a server selected by a hash calculated from the first 3 octets of the client IPv4 address, or the whole IPv6 address. This method makes sure requests from the same client are always sent to the same server, unless it's unavailable. Note: the hash for a server that is marked down is preserved for when it comes back up. For more information on the IP hash method, please click here.

Custom or Generic Hash

We can also define the key for the hash ourselves. For example, the key can be the variable $request_uri, which represents the URI present in the HTTP request sent by the client (see the hash line in the sketch below).
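The configuration screenshots from the original article aren't reproduced here, so the following is a hedged sketch of the kind of load-balancer file described above; the file name and server addresses are assumptions:

# /etc/nginx/conf.d/load-balancer.conf (hypothetical file name; anything in
# conf.d is included in the http context by the default nginx.conf)
upstream backends {
    # Round robin is the default method; uncomment one line to change it.
    # least_conn;          # least connections
    # ip_hash;             # persistence based on the client IP address
    # hash $request_uri;   # custom/generic hash keyed on the request URI

    server 10.1.1.11:80 weight=2;   # takes roughly twice as many requests
    server 10.1.1.12:80;
    server 10.1.1.13:80 down;       # administratively removed from rotation
}

server {
    listen 80;

    location / {
        proxy_pass http://backends;
    }
}

After each change to this file, the configuration is reloaded, as the article does between each test.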
Least Time (NGINX Plus only)

This method is not supported in the free NGINX version. In this case, NGINX Plus picks the server with the lowest average latency plus the lowest number of active connections. The lowest average latency is calculated based on an argument passed to least_time with one of the following values:

header: the time it took to receive the first byte from the server
last_byte: the time it took to receive the full response from the server
last_byte inflight: the time it took to receive the full response from the server, taking into account incomplete requests

With last_byte inflight, NGINX Plus makes its load balancing decision based on the time it took to receive the full HTTP response from our server, and it also includes incomplete requests in the calculation.

Note that /etc/nginx/conf.d/ is the default directory for NGINX config files. In this case, as I installed NGINX directly from the Debian APT repository, it also added sites-enabled; some Linux distributions add it to make things easier for those who are used to Apache.

For more information on the least time method, please click here.

Random

In this method, requests are passed to randomly selected servers, but it's really only ideal for environments where multiple load balancers are passing requests to the same set of back-end servers. Note that the least_time parameter for this method is only available in NGINX Plus. For more details about this method, please click here.
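As a brief, hedged sketch of the two methods just described (server addresses are placeholders, and the least_time forms require NGINX Plus):

upstream backends {
    # NGINX Plus only: pick the server with the lowest average time to
    # receive the full response, counting in-flight (incomplete) requests.
    least_time last_byte inflight;

    # Alternative from the Random section: pick two servers at random and
    # route to the better of the two (the least_time= comparison is Plus only).
    # random two least_time=last_byte;

    server 10.1.1.11:80;
    server 10.1.1.12:80;
    server 10.1.1.13:80;
}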
Guarding privacy and security using API Gateways: Implement the Phantom Token Flow using NGINX and the Curity Identity Server

By Damian Curry, NGINX Business Development Technical Director, F5, and Michal Trojanowski, Product Marketing Engineer, Curity

----

In today's world, APIs are ubiquitous, whether in communication between back-end services or from front ends to back ends. They serve all kinds of purposes, come in different flavors and can return data in various formats. The possibilities are countless. Still, they all share one common trait: an API needs to be secure. Secure access to an API should be paramount for any company exposing them, especially if the APIs are available externally and consumed by third-party clients. To help organizations address the critical topic of API security, OWASP has provided guidelines to ensure safety. In 2019, OWASP released a "top 10" compilation of the most common security vulnerabilities in APIs.

OAuth and JWT as a Sign of Mature API Security

Modern, mature APIs are secured using access tokens issued according to the OAuth standard or standards that build on top of it. Although OAuth does not require the use of the JSON Web Token (JWT) format for access tokens per se, it is common practice to use them. Signed JWTs (JWTs signed using JSON Web Signing, JWS) have become a popular solution because they include built-in mechanisms that protect their integrity. Although signed JWTs are a reliable mechanism where the integrity of the data is concerned, they are not as good in regards to privacy: a signed JWT (JWS) protects the integrity of the data, while an encrypted one (JWE) also protects privacy.

A JSON Web Token consists of claims which can carry information about the resource owner (the user who has granted access to their resources) or even about the API itself. Anyone in possession of a signed JWT can decode it and read the claims inside. This can produce various issues:

You have to be really careful when putting claims about the user or your API in the token. If any Personally Identifiable Information (PII) ends up in there, this data essentially becomes public. Striving to keep PII private should be the goal of your organization. What is more, in some countries privacy is protected by laws like GDPR or CCPA, and it can be an offense not to keep PII confidential.

Developers of apps consuming your access tokens can start depending on the contents of your tokens. This can lead to a situation where making updates to the contents of your access tokens breaks integrations with your API. Such a situation would be an inconvenience both for your company and your partners. An access token should be opaque to the consumers of your API, but this can be hard to enforce outside of your organization.

By-reference Tokens and the Phantom Token Flow

When there is a problem, there must be a solution. In fact, there are two ways to address the aforementioned problems:

Encryption
Use of by-reference (a.k.a. handle) tokens

To protect the contents of JWTs from being visible, the token can be encrypted (thus creating a JWE inside the JWS). Encrypted content will be safe as long as you use a strong encryption algorithm and safeguard the keys used for encryption. To use JWEs, though, you need a Public Key Infrastructure (PKI), and setting one up can be a big deal. If you do not use PKI, you need another mechanism to exchange keys between the Authorization Server and the client. As JWEs also need to be signed, your APIs have to be able to access, over TLS, the keys used to verify and decrypt tokens. These requirements add considerable complexity to the whole system.
Another solution is to use by-reference tokens: an opaque string that serves as a reference to the actual data kept securely by the Authorization Server. Using opaque tokens protects the privacy of the data normally kept in tokens, but now your services, which process requests, need some other way to get the information that was previously available in the JWT's claims. Here, the Token Introspection standard can prove useful. Using a standardized HTTP call to the Authorization Server, the API can exchange the opaque token for a set of claims in JSON format. Exchanging an opaque token for a set of claims comes to the API just as easily as extracting the data from a decoded JWT. Because the API connects securely to the Authorization Server and trusts it, the response does not have to be signed, as is the case with JWTs.

Still, in a world dominated by microservices, a problem may arise with the proposed solution: every service which processes the request needs to perform the introspection. When there are many services processing one request, the introspection must be done repeatedly, or else the APIs would have to pass unsigned JSON around and trust the caller with the authorization data, which can lead to security issues. A situation in which every service calls the Authorization Server to introspect the token can put excess load on the Authorization Server, which has to return the same set of claims many times. It can also overload the network between the services and the Authorization Server.

If your API consists of numerous microservices, there is probably some kind of reverse proxy or API gateway standing in front of all of them, e.g., an instance of NGINX. Such a gateway is capable of performing the introspection flow for you. This kind of setup is called a Phantom Token Flow. The Authorization Server issues opaque tokens and serves them to the client. The client uses the token like any other token, by appending it to the request in the Authorization header. When the gateway receives a request, it extracts the opaque token from the header and performs the introspection flow, although it is a bit different from the flow performed by APIs.

According to the specification, when an API introspects a token, it receives a JSON document with the claims that would usually end up in a JWT. For an API this is enough, as it trusts the Authorization Server and just needs the data associated with the access token; the claims do not have to be signed. For the API gateway, though, it's not enough. The gateway needs to exchange the opaque token for the set of data, but it will be passing this data to the downstream services together with the request. The APIs must be sure that the claims they receive from the gateway have not been tampered with. That's why some Authorization Servers allow returning a JWT from the introspection endpoint. The API gateway can introspect the opaque token, but in the response it gets a JWT that corresponds to the access token. This JWT can then be added to the request's Authorization header, and the downstream APIs can use it like any other JWT. Thanks to this solution, APIs can securely pass the access token between themselves and at the same time have simple access to the claims, without the need to contact the Authorization Server.

Using the Phantom Token Flow allows you to leverage the power of JWTs inside your infrastructure while at the same time keeping high levels of security and privacy outside of your organization.
The network workload is greatly reduced because now only the gateway needs to contact the Authorization Server; the introspection is done only once for every request. The amount of work the gateway needs to do is also limited: it does not have to parse the body or decode the JWT in order to decide whether the token is valid or has expired. The gateway can depend solely on the status code of the response from the Authorization Server; an OK response means that the token is valid and has not expired.

The amount of network traffic can be reduced even further. The API gateway can cache the Authorization Server's response for as long as the server tells it to (which will usually be the lifetime of the access token). If the opaque reference token is globally unique, it works as an ideal cache key, and the mapping in the cache can even be shared across any API.

Implementing the Phantom Token Flow using NGINX and the Curity Identity Server

At Curity, we have created an open-source module for NGINX to facilitate implementing the Phantom Token Flow. The module takes care of performing the introspection and keeping the result in the cache (if caching is enabled). As values of the opaque token are globally unique, the cache can be global as well; there is no need to keep a reference on a per-client basis. The NGINX module is easy to use, as all parameters can be set using standard NGINX configuration directives. The module can be built from source, but there are binaries available for a few different Linux distributions and NGINX versions. You can find them in the releases section on GitHub.

Installing and Configuring the Module

Once downloaded, the module needs to be enabled in your instance of NGINX, unless you want to create your own build of NGINX with all your modules embedded. To enable the module, add this directive in the main context of your configuration:

load_module modules/ngx_curity_http_phantom_token_module.so;

Then you can apply the Phantom Token Flow to chosen servers and locations. Here's an example configuration that enables the Phantom Token Flow for an /api endpoint. The configuration assumes that:

Your NGINX instance runs on a different host (nginx.example.com) than your Curity Identity Server instance (curity.example.com).
The API that will eventually process the request listens on example.com/api.
The Curity Identity Server's introspection endpoint is /oauth/v2/introspection.
Responses from introspection are cached by NGINX.

http {
    proxy_cache_path /path/to/cache/cache levels=1:2 keys_zone=my_cache:10m
                     max_size=10g inactive=60m use_temp_path=off;

    server {
        server_name nginx.example.com;

        location /api {
            proxy_pass https://example.com/api;
            phantom_token on;
            phantom_token_client_credential "client_id" "client_secret";
            phantom_token_introspection_endpoint curity;
        }

        location curity {
            proxy_pass "https://curity.example.com/oauth/v2/introspection";
            proxy_cache_methods POST;
            proxy_cache my_cache;
            proxy_cache_key $request_body;
            proxy_ignore_headers Set-Cookie;
        }
    }
}

Now, your NGINX gateway is capable of performing the Phantom Token Flow!

Conclusion

The Phantom Token Flow is a recommended practice for dealing with access tokens that are available publicly. The flow is especially effective when there are many microservices processing one request, which would otherwise each have to introspect the tokens on their own. Using Phantom Tokens enhances your security and the privacy of your users, the importance of which cannot be overstated.
With the help of NGINX as your gateway and the Curity Identity Server as the Authorization Server, implementing the Phantom Token Flow is as simple as loading a module in your NGINX server!

Sign up now for a free API Gateway webinar to learn more about guarding privacy and security with Curity and NGINX.