k8s
Understanding Modern Application Architecture - Part 1
This is part 1 of a series. Here are the other parts:
Understanding Modern Application Architecture - Part 2
Understanding Modern Application Architecture - Part 3

Over the past decade, a change has been taking place in how applications are built. As applications become more expansive in capability and more critical to how a business operates (or, in many cases, become the business itself), a new style of architecture has allowed for increased scalability, portability, resiliency, and agility. To support the goals of a modern application, the surrounding infrastructure has had to evolve as well. Platforms like Kubernetes have played a big role in unlocking the potential of modern applications, and they represent a new paradigm in how infrastructure is managed and served.

To help our community transition the skillset they've built around monolithic applications, we've put together a series of videos to drive home concepts around modern applications. This article highlights some of the details found within the video series.

In these first three videos, we break down the definition of a modern application. By name alone, one might think a modern application is simply an application that is current. But we're actually speaking in comparison to a monolithic application. Monolithic applications are made up of a single piece, or just a few pieces. They are rigid in how they are deployed and fragile in their dependencies. Modern applications instead incorporate microservices. Where a monolithic application might have all functions built into one broad, encompassing service, microservices break the service down into smaller functions that can be worked on separately.

A modern application also incorporates four main pillars. Scalability ensures that the application can handle the needs of a growing user base, both for surges and for long-term growth. Portability ensures that the application can be moved from its underlying environment while still maintaining all of its functionality and management-plane capabilities. Resiliency ensures that failures within the system go unnoticed or pose minimal disruption to users of the application. Agility ensures that the application can accommodate rapid changes, whether to code or to infrastructure.

There are also six design principles of a modern application. Being agnostic gives the application the freedom to run on any platform. Leveraging open source software where it makes sense can often allow you to move quickly with an application, while still being able to adopt commercial versions of that software later when full support is needed. Defining by code allows for more uniformity of configuration and moves away from rigid interfaces that require specialized knowledge (see the sketch at the end of this article). Automated CI/CD processes ensure the quick integration and deployment of code, so that improvements are constantly happening while any failures are minimized and contained. Secure development ensures that application security is integrated into the development process and code is tested thoroughly before being deployed into production. Distributed storage and infrastructure ensures that applications are not bound by physical limitations and that components can be located where they make the most sense.

These videos should help set the foundation for what a modern application is. The next videos in the series will start to define the fundamental technical components for the platforms that bring together a modern application.
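To make the "defined by code" principle concrete, here is a minimal sketch of declarative configuration, assuming a generic Kubernetes Deployment; the name and image are placeholders for illustration, not something from the video series:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web              # placeholder name
spec:
  replicas: 3            # desired state lives in code, not in a console
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # placeholder image
          ports:
            - containerPort: 80

Because the desired state is captured in a file, it can be versioned, reviewed, and applied uniformly, rather than reproduced by hand through a specialized interface.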
Continued in Part 2.

Egress control for Kubernetes using F5 Distributed Cloud Services
Summary
When using F5 Distributed Cloud Services (F5 XC) to manage your Kubernetes (K8s) workloads, egress firewalling based on K8s namespaces or labels is easy. While network firewalls have no visibility into which K8s workload initiated outbound traffic - and therefore cannot apply security policies based on workload - we can use a platform like F5 XC Managed Kubernetes (mK8s) to achieve this.

Introduction
Applying security policies to outbound traffic is common practice. Security teams inspect Internet-bound traffic in order to detect or prevent Command & Control traffic, allow select users to browse select portions of the Internet, or simply gain visibility into outbound traffic. Often the allow/deny decision is based on a combination of user, source IP, and destination website. Here's an awesome walkthrough of outbound inspection.

Typical outbound inspection performed by a network-based device.

Network-based firewalls cannot do the same for K8s workloads because pods are ephemeral: they can be short-lived, their IP addresses are temporary and reused, and all pods on the same node making outbound connections will have the same source IP on the external network. In short, a network device cannot distinguish traffic from one pod versus another.

Which microservice is making this outbound request? Should it be allowed?

Problem statement
In my cluster I have two apps, app1 and app2, in namespaces app1-ns and app2-ns.

For HTTP traffic, I want:
- app1 to reach out to *.github.com but nothing else
- app2 to reach out to the REST API at api.weather.gov but nothing else, not even other subdomains of weather.gov

For non-HTTP traffic, I want:
- app1 to be able to reach a partner's public IP address on port 22
- app2 to reach Google's DNS server at 8.8.8.8 on port 25

I want no other traffic (TCP or UDP) to egress from my pods, whether HTTP or non-HTTP.

What about a Service Mesh?
A service mesh will control traffic within your K8s cluster, both East-West (between services) and North-South (traffic to/from the cluster). Indeed, egress control is a feature of some service meshes, and a service mesh is a good solution to this problem. Istio's egress control is a great starting point to read more about a service mesh with egress control. By using an egress gateway, Istio's sidecars will force traffic destined for a particular destination through a proxy, and this proxy can enforce Istio policies. This solves our problem, although I've heard customers voice reasonable concerns:
- What about non-HTTP traffic?
- What if the egress gateway is bypassed?
- Can our security team configure the mesh, or manage its configuration as code?
- A mesh may require additional learning and admin overhead.
- A mesh is often managed by a different team than traditional security.

What about a NetworkPolicy?
A NetworkPolicy is a K8s resource that can define networking rules, including allow/deny rules by namespace, labels, source/destination pods, destination domains, etc. However, NetworkPolicies must be enforced by the network plugin in your distribution (e.g., Calico), so they're not an option for everyone. They're also probably not a scalable solution when you consider the same concerns raised about service meshes above, but they are possible. Read more about NetworkPolicies for egress control with Istio, and check out this article from Monzo to see a potential solution involving NetworkPolicies.
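To make the NetworkPolicy option concrete, here is a minimal sketch of an egress rule for app2's non-HTTP requirement, assuming a CNI that enforces NetworkPolicy. Note that domain-based rules like *.github.com cannot be expressed in a vanilla NetworkPolicy; they require a CNI-specific extension (for example, Cilium's FQDN policies). The pod label app=app2 is an assumption for illustration:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app2-egress
  namespace: app2-ns
spec:
  # Select the app2 pods (label is assumed for this sketch)
  podSelector:
    matchLabels:
      app: app2
  policyTypes:
    - Egress
  egress:
    # Allow only 8.8.8.8 on TCP port 25, per the problem statement
    - to:
        - ipBlock:
            cidr: 8.8.8.8/32
      ports:
        - protocol: TCP
          port: 25

Once a pod is selected by an egress policy like this, all other egress (including DNS) is denied unless explicitly allowed.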
Other ideas
Read the article from Monzo linked above to see what others have done. You could watch (or serve) DNS requests from pods and then very quickly update outbound firewall rules to allow or disallow traffic to the IP address in the DNS response; NeuVector had a great article on this. You could also use a dedicated outbound proxy per domain name, as Monzo did, although this wouldn't scale to a large number of domains, so some kind of exceptions would need to be made. I also read an interesting article on Falco, a tool that can monitor outbound connections from pods using eBPF. Generally speaking, these other ideas raise the same concerns for teams without mesh skills: K8s and mesh networking can be unfamiliar and difficult to operate.

Me and Kubernetes

Solving egress control with F5 XC Managed Kubernetes
Another way we can control outbound traffic specific to K8s namespaces or labels is by using a K8s distribution that includes these features. In a technical sense, this works just like a mesh: by injecting a sidecar container for security controls into pods, the platform can control networking. However, the mesh is not managed separately in this case. The security policies of the platform provide a GUI, easy reuse of policies, and generally an experience identical to the one used for traditional egress control with the platform.

Solving for our problem statement
If I am using Virtual K8s (vK8s) or Managed K8s (mK8s), my pods are running on the F5 XC platform. These containers may be on-prem or in F5's PoPs, but the XC platform is natively aware of each pod. Here's how to solve our problem with XC when you have a managed K8s cluster.

1. First, create a known key so we can have a label in XC that matches a label we will apply to our K8s pods. I have created a known key egress-ruleset by following this how-to guide.
2. For HTTP and HTTPS traffic, create a forward proxy policy. Since we want rules to apply to pods based on their labels, choose "Custom Rule List" when creating rules. Rule 1: set the source to be anything with a known label of egress-ruleset=app1 and allow access to TLS domains with a suffix of github.com. Rule 2: same as rule 1, but allow access to HTTP paths with a suffix of github.com. Rules 3 and 4 are the same, but the source endpoint matches egress-ruleset=app2. Rule 5, the last, can be a Deny All rule.
3. For non-HTTP(S) traffic, create multiple firewall policies for traffic ingressing, egressing, or originating from an F5 Gateway. I've recommended multiple policies because a policy applies to a group of endpoints defined by IP or label. I've used three policies in my examples (one for the label egress-ruleset=app1, another for app2, and one for all endpoints). Use the policies to allow TCP traffic as desired.
4. Create and deploy a Managed K8s cluster and an App Stack site that is attached to this cluster. When creating the App Stack site, you can attach the policies you created in steps 2 and 3. You can layer multiple policies in order, for policies of both types (forward proxy and firewall).
5. Deploy your K8s workload and label your pods with egress-ruleset and a value of app1 or app2. Finally, validate your policies are in effect by using kubectl exec against a pod running in your cluster (see the sketch after this list).

We have now demonstrated that outbound traffic from our pods is allowed only to destinations we have configured. We can control outbound traffic specific to the microservice that is the source of the traffic.
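Here is a minimal sketch of step 5. The deployment name, namespace, and image are placeholders for illustration; the egress-ruleset label key and the app1 value come from steps 1 and 5:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  namespace: app1-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
        egress-ruleset: app1        # matched by the XC known key from step 1
    spec:
      containers:
        - name: app1
          image: curlimages/curl:latest   # placeholder workload with curl available
          command: ["sleep", "infinity"]

A validation along these lines could then confirm the forward proxy policy is in effect:

kubectl exec -n app1-ns deploy/app1 -- curl -s -o /dev/null -w "%{http_code}\n" https://api.github.com   # allowed for app1
kubectl exec -n app1-ns deploy/app1 -- curl -s -m 5 https://api.weather.gov   # should be blocked for app1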
Application namespaces
Another way to solve this problem uses namespaces only, not labels. If you create your Application Namespace in the XC console (not a K8s Namespace) and deploy your workloads in the corresponding K8s namespace, you can use the built-in label of name.ves.io/namespace. This means you won't need to create your own label (step 1), but you will need a 1:1 relationship between K8s namespaces and Application Namespaces in XC. Also, your granularity for endpoints is no longer fine-grained at the level of pod labels, but instead sits at the namespace level.

Further Reading
Enterprise-level outbound firewalling from products like F5's SSLO will do more than simple egress control, such as selectively passing traffic to 3rd-party inspection devices. Egress control in XC does not integrate with other devices, but the security controls fit the nature of typical microservices. Still, we could layer simple outbound rules performed in K8s with enterprise-wide inspection rules performed by SSLO for further control of outbound traffic, including integration with 3rd-party devices. While this example used mK8s, I'll make note of another helpful article that explains how labels can be used for controlling network traffic when using Virtual K8s (vK8s).

Conclusion
Egress control for Kubernetes workloads, where security policy is based on namespaces or labels, can be enforced with a service mesh that supports egress control, or with a managed K8s solution like F5 XC that integrates network security policies natively into the K8s networking layer. Consider practical concerns, like management overhead and existing skill sets, and reach out if I or another F5'er can help explain more about egress control using F5 XC! Finally, thank you to my colleague Steve Iannetta @netta2 who helped me prepare this. Please do reach out if you want to do this yourself or have more in-depth K8s traffic management questions.
Understanding Modern Application Architecture - Part 2

To help our community transfer their skills to handle modern applications, we've released a video series to explain the major points. This article is part 2, and here are the other parts:
Understanding Modern Application Architecture - Part 1
Understanding Modern Application Architecture - Part 3

This next set of videos discusses the platforms and components that make up modern applications.

In this video, we review containers. These have become a key building block of microservices. They help achieve application portability by neatly packaging up everything needed to bring up an application within a container runtime such as Docker. One great example of a container is the f5-demo-httpd container. This small, lightweight container can be downloaded quickly to run a web server. It's incorporated into a lot of F5 demo environments because it is lightweight and can be customized by simply forking the repository and making your own changes.

In this next video, we talk about Kubernetes (or K8s for short). While container runtimes like Docker can work individually on a server, the Kubernetes project has brought the concept into a form that can be scaled out. Worker nodes, where the containers run, can be brought together into clusters. Commands can be issued to a master node via YAML files and take effect across the cluster. Containers can be scheduled efficiently across a cluster that is managed as one.

In this next video, we break down the Kubernetes API. The Kubernetes API is the main interface to a K8s cluster. While there are GUI solutions that can be added to a K8s cluster, they are still interfacing with the API, so it is important to understand what the API is capable of and what it is doing with the cluster. The main way to issue commands to the API is through YAML files and the kubectl command (see the sketch at the end of this article). From there, the API server will interact with the other parts of the cluster to perform operations.

In this next video, we discuss securing a Kubernetes cluster. There are a number of attack vectors that need to be understood, so we review them along with some of the actions that can be taken to increase security.

In this next video, we go over the Ingress Controller. An Ingress Controller is one of the main ways that traffic is brought from outside of the cluster into a pod. This role is of particular interest to F5 customers, as they can use NGINX, NGINX+ or BIG-IP to play this strategic role within a Kubernetes cluster.

In this next video, we talk about microservices. As monolithic applications are decomposed into modern applications, they are broken up into microservices that carry out individual functions of the application. The microservices then communicate with each other to deliver the overall application. It's important to understand this service-to-service communication so that you can design application services around it, such as load balancing, routing, visibility and security.

We hope that you've enjoyed this video series so far. In the next article, we'll be reviewing the components that aid in the management of a Kubernetes platform: Understanding Modern Application Architecture - Part 3
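As a minimal sketch of driving the API with a YAML file, here is a hypothetical Pod manifest; the image path f5devcentral/f5-demo-httpd is an assumption based on the f5-demo-httpd container mentioned above:

apiVersion: v1
kind: Pod
metadata:
  name: demo-httpd
  labels:
    app: demo-httpd
spec:
  containers:
    - name: demo-httpd
      image: f5devcentral/f5-demo-httpd   # assumed image path
      ports:
        - containerPort: 80

Applying and inspecting the Pod is simply a pair of API calls made on your behalf:

kubectl apply -f demo-httpd.yaml     # kubectl POSTs the object to the API server
kubectl get pod demo-httpd -o yaml   # kubectl GETs the stored state back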
Protect Your Kubernetes Cluster Against The Apache Log4j2 Vulnerability Using BIG-IP

Whenever a high-profile vulnerability like Apache Log4j2 is announced, it is often a race to patch and remediate. Luckily, for those of us with BIG-IPs running AWAF (Advanced Web Application Firewall) in our environment, we can take care of some mitigation by updating and applying signatures. When there is a consolidation of duties, or both SecOps and NetOps work together on the same cluster of BIG-IPs, an AWAF policy can simply be applied to a virtual server. However, as we move into a world of modern application architectures, the Kubernetes administrators are very often a different set of individuals falling within DevOps. The DevOps team will work with NetOps to incorporate BIG-IP as the ingress to the Kubernetes environment through the use of Container Ingress Services. This allows for a declarative configuration, and objects can be called upon to incorporate into the ingress configuration. In Container Ingress Services version 2.7, using the Policy CRD (Custom Resource Definition) feature, an AWAF policy can be one of these objects.

Here is some example code for defining the Policy CRD and specifying the WAF policy:

apiVersion: cis.f5.com/v1
kind: Policy
metadata:
  labels:
    f5cr: "true"
  name: policy-mysite
  namespace: default
spec:
  l7Policies:
    waf: /Common/WAF_Policy
  profiles:
    http: /Common/Custom_HTTP
  logProfiles:
    - /Common/Log all requests

And here is an example of associating this Policy CRD with the VirtualServer CRD:

apiVersion: "cis.f5.com/v1"
kind: VirtualServer
metadata:
  name: vs-myapp
  labels:
    f5cr: "true"
spec:
  # This is an insecure virtual; please use TLSProfile to secure the virtual.
  # Check out the TLS examples to understand more.
  virtualServerAddress: "10.192.75.117"
  virtualServerHTTPSPort: 443
  httpTraffic: redirect
  tlsProfileName: reencrypt-tls
  policyName: policy-mysite
  host: myapp.f5demo.com
  pools:
    - path: /
      service: f5-demo
      servicePort: 443

Mark Dittmer, Sr. Product Management Engineer here at F5, recently teamed up with Brandon Frelich, Security Solutions Architect, to create a how-to video on this. Mark's associated GitHub repo: https://github.com/mdditt2000/kubernetes-1-19/blob/master/cis%202.7/log4j/README.md

This now allows the SecOps teams to focus on creating and providing AWAF policies while DevOps can focus on their own domain and incorporate the AWAF policy quickly. As we see microservices sprawl, we need every speed advantage we can get!
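Assuming the two manifests above are saved as policy-mysite.yaml and vs-myapp.yaml (hypothetical filenames), applying them is the usual kubectl workflow; CIS watches these resources and configures the BIG-IP accordingly:

kubectl apply -f policy-mysite.yaml   # create the Policy resource
kubectl apply -f vs-myapp.yaml        # create the VirtualServer that references it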
Understanding Modern Application Architecture - Part 3

In this last article of the series discussing modern application architecture, we will discuss manageability with respect to traffic. As traffic patterns grow and start to look quite different from those of monolithic applications, different approaches are needed to maintain the stability of the application. Here are the other parts:
Understanding Modern Application Architecture - Part 1
Understanding Modern Application Architecture - Part 2

In this next video, we discuss the Service Mesh. As modern applications expand and their communications shift to microservice-to-microservice, a service mesh can be introduced to provide control, security and visibility for that traffic. Since individual microservices can be written by different individuals or groups, the service mesh can be the intermediary that allows them to understand what is happening when one piece of code needs to speak to another. At the same time, trust and verification can happen between the microservices to ensure they are talking to what they should be talking to.

In this next video, we discuss Sidecar Proxies. As mentioned in the Service Mesh video, the sidecar proxy is a key piece of the mesh implementation. It is responsible for functions such as TLS termination, mutual TLS and authentication. It can also be used for tracing and other observability. This means these functions don't have to be performed by the microservice itself.

In this final video, we review NGINX as a Production-Grade Kubernetes Solution. While modern applications will adopt open source solutions where possible, these applications can be mission-critical ones that require the highest level of service. As mentioned in the previous videos in this series, there are a number of important pieces of a Kubernetes cluster that can be augmented or replaced by enhanced services. NGINX can perform as an enhanced Ingress Controller, giving a high level of control as well as performance for inbound traffic to the cluster (see the sketch at the end of this article). NGINX with App Protect can also provide finer-grained web application security for the inbound web-based components of the application. And finally, NGINX Service Mesh can help with microservice-to-microservice control, security and visibility, offloading those functions from the microservice itself.

We hope that this video series has helped shed some light for those who are curious about modern application architecture. If you have questions, don't hesitate to ask in our Technical Forums!
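As a minimal sketch of the Ingress Controller role, here is a generic Ingress resource that a controller such as NGINX could serve; the hostname, service name, and ingress class are placeholders, not taken from the videos:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  ingressClassName: nginx   # assumes an NGINX-based controller is installed
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp    # placeholder backend Service
                port:
                  number: 80

The controller watches Ingress resources like this one and routes matching inbound traffic from outside the cluster to the myapp Service's pods.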
An example of an AS3 REST API call to create a GSLB configuration on BIG-IP

Hi everyone,

Below you can find an example of an AS3 REST API call that creates a simple GSLB configuration on BIG-IP devices. The main purpose of this article is to share this configuration with others. Of course, on different sites (GitHub, etc.) you can find different bits of data, but I think this example will be useful because it contains all the necessary information about how to create different GSLB objects at the same time, such as: Data Centers (DCs), Servers, Virtual Servers (VSs), Wide IPs, pools and more.

{
  "class": "AS3",
  "declaration": {
    "class": "ADC",
    "schemaVersion": "3.21.0",
    "id": "GSLB_test",
    "Common": {
      "class": "Tenant",
      "Shared": {
        "class": "Application",
        "template": "shared",
        "DC1": { "class": "GSLB_Data_Center" },
        "DC2": { "class": "GSLB_Data_Center" },
        "device01": {
          "class": "GSLB_Server",
          "dataCenter": { "use": "DC1" },
          "virtualServers": [
            {
              "name": "/ocp/Shared/ingress_vs_1_443",
              "address": "A.B.C.D",
              "port": 443,
              "monitors": [ { "bigip": "/Common/custom_icmp_2" } ]
            }
          ],
          "devices": [ { "address": "A.B.C.D" } ]
        },
        "device02": {
          "class": "GSLB_Server",
          "dataCenter": { "use": "DC2" },
          "virtualServers": [
            {
              "name": "/ocp2/Shared/ingress_vs_2_443",
              "address": "A.B.C.D",
              "port": 443,
              "monitors": [ { "bigip": "/Common/custom_icmp_2" } ]
            }
          ],
          "devices": [ { "address": "A.B.C.D" } ]
        },
        "dns_listener": {
          "class": "Service_UDP",
          "virtualPort": 53,
          "virtualAddresses": [ "A.B.C.D" ],
          "profileUDP": { "use": "custom_udp" },
          "profileDNS": { "use": "custom_dns" }
        },
        "custom_dns": {
          "class": "DNS_Profile",
          "remark": "DNS Profile test",
          "parentProfile": { "bigip": "/Common/dns" }
        },
        "custom_udp": {
          "class": "UDP_Profile",
          "datagramLoadBalancing": true
        },
        "testpage_local": {
          "class": "GSLB_Domain",
          "domainName": "testpage.local",
          "resourceRecordType": "A",
          "pools": [ { "use": "testpage_pool" } ]
        },
        "testpage_pool": {
          "class": "GSLB_Pool",
          "resourceRecordType": "A",
          "members": [
            {
              "server": { "use": "/Common/Shared/device01" },
              "virtualServer": "/ocp/Shared/ingress_vs_1_443"
            },
            {
              "server": { "use": "/Common/Shared/device02" },
              "virtualServer": "/ocp2/Shared/ingress_vs_2_443"
            }
          ]
        }
      }
    }
  }
}

P.S. The AS3 schema reference guide was very helpful: https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/refguide/schema-reference.html
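In case it helps, a declaration like the one above is typically POSTed to the AS3 declare endpoint on the BIG-IP. This sketch assumes the declaration is saved as gslb-declaration.json and that basic authentication is acceptable in your environment; the hostname and credentials are placeholders:

# POST the declaration to the AS3 endpoint
curl -sku 'admin:<password>' \
  -H "Content-Type: application/json" \
  -X POST "https://<big-ip-mgmt-address>/mgmt/shared/appsvcs/declare" \
  -d @gslb-declaration.json

A GET against the same /mgmt/shared/appsvcs/declare endpoint returns the currently deployed declaration, which is handy for verifying the result.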
k8s Scalability Best Practices with Justin Garrison - DevCentral Connects - Ep 127 - May 23, 2023

Guest: Justin Garrison | LinkedIn | GitHub

Resources:
EKS Best Practices Guide
Cloud Native Infrastructure (Justin's book)
The Economics of Writing a Technical Book
Coda: The cultural effects of queueing at Disney Animation
Cubernetes Cluster Build in a Mac G4
Disney+ Launch Badge
F5 kubernetes f5 controller failing to compose 'poolMemberAddrs' and failing to generate F5 objects

Hi - I set this up: http://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/v1.0/ and I'm getting errors after uploading my ConfigMap and an applicable Service:

2017/04/28 23:43:30 [INFO] File "/app/python/_f5.py", line 393, in _create_ltm_config_kubernetes
2017/04/28 23:43:30 [INFO]     for node in backend['poolMemberAddrs']:
2017/04/28 23:43:30 [INFO] TypeError: 'NoneType' object is not iterable

The config file generated by /app/bin/k8s-bigip-ctlr does not populate "poolMemberAddrs", so the Python F5 handler /app/python/bigipconfigdriver.py is crashing, since it cannot figure out the NodePort targets:

/app cat /tmp/k8s-bigip-ctlr.config281602422/config.json
{
  "bigip": {"username": "xxxxxxxxx", "password": "yyyyyyyy", "url":";:["k8s"]},
  "global": {"log-level": "INFO", "verify-interval": 30},
  "services": [
    {
      "virtualServer": {
        "backend": {
          "serviceName": "av-service",
          "servicePort": 30000,
          "poolMemberPort": 0,
          "poolMemberAddrs": null
        },
        "frontend": {
          "virtualServerName": "default_av-service",
          "partition": "k8s",
          "balance": "round-robin",
          "mode": "http",
          "virtualAddress": {"bindAddr": "1.2.3.4", "port": 80},
          "iappPoolMemberTable": {"name": "", "columns": null}
        }
      }
    }
  ]
}

I ran out of anything helpful from debug statements or documentation about the closed-source Go binary...

io:$ kubectl get services/av-service -o wide
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
av-service   10.25.104.158                 80:30000/TCP   2h    app=av

io:$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'
10.25.82.193 10.25.82.65 10.25.83.54

Is this the right place to ask about possible reasons the controller is crashing here?