Exploring Kubernetes API using Wireshark part 1: Creating, Listing and Deleting Pods
Related Articles:
- Exploring Kubernetes API using Wireshark part 2: Namespaces
- Exploring Kubernetes API using Wireshark part 3: Python Client API

Quick Intro

This article answers the following question: what happens under the hood, and more specifically on the wire, when we create, list and delete pods? I used three commands (one each to create, list and delete a pod), and I'll show you on Wireshark the communication between the kubectl client and the master node (API) for each of them. I used a proxy so we don't have to worry about the TLS layer and can focus on HTTP only.

Creating NGINX pod

pcap: creating_pod.pcap (use the http filter on Wireshark)

Here's our YAML file:

Here's how we create this pod:

Here's what we see on Wireshark:

Behind the scenes, the kubectl command sent an HTTP POST with our YAML file converted to JSON; notice that the same things were sent (kind, apiVersion, metadata, spec). You can expand it if you want to, but I didn't, to keep it short. Then the Kubernetes master (API) responds with HTTP 201 Created to confirm our pod has been created. Notice that the master node replies with similar data plus an additional status section, because after a pod is created it's supposed to have a status too.

Listing Pods

pcap: listing_pods.pcap (use the http filter on Wireshark)

When we list pods, kubectl just sends an HTTP GET request instead of a POST, because we don't need to submit any data apart from headers. This is the full GET request:

And here's the HTTP 200 OK with the JSON body that contains all the information about all the pods in the default namespace:

I just wanted to emphasise that when you list pods, the resource type that comes back is PodList, whereas when we created our pod it was just Pod. Remember? The other thing I'd like to point out is that all of your pods' information is listed under items. All kubectl does is display some of the API's info in a human-readable way.

Deleting NGINX pod

pcap: deleting_pod.pcap (use the http filter on Wireshark)

Behind the scenes, we're just sending an HTTP DELETE to the Kubernetes master. Also notice that the pod's name is included in the URI:

/api/v1/namespaces/default/pods/nginx ← "nginx" is the pod's name

HTTP DELETE, just like HTTP GET, is pretty straightforward. Our master node replies with HTTP 200 OK as well as a JSON body with all the info about the pod, including its termination. It's also good to emphasise here that when our pod is deleted, the master node returns a JSON file with all the information available about the pod. I highlighted some interesting info; for example, the resource type is now just Pod (not PodList as when we were listing our pods).
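To tie the three operations together, here is a rough shell reconstruction of the traffic the pcaps capture. It is a sketch rather than the article's exact setup: kubectl proxy stands in for the proxy used to strip the TLS layer, and the pod manifest is a minimal nginx spec of my own, since the article's YAML isn't reproduced here.

```sh
# Expose the API server over plain HTTP locally, so the exchange is
# readable without the TLS layer (same idea as the article's proxy).
kubectl proxy --port=8001 &

# Create the pod: kubectl normally converts the YAML to JSON and sends
# this HTTP POST. The API answers 201 Created, echoing kind, apiVersion,
# metadata and spec plus a new status section.
curl -s -X POST http://127.0.0.1:8001/api/v1/namespaces/default/pods \
  -H 'Content-Type: application/json' \
  -d '{"kind":"Pod","apiVersion":"v1",
       "metadata":{"name":"nginx"},
       "spec":{"containers":[{"name":"nginx","image":"nginx"}]}}'

# List pods: a plain HTTP GET, no body needed. The API answers 200 OK
# with kind PodList and every pod of the default namespace under items.
curl -s http://127.0.0.1:8001/api/v1/namespaces/default/pods

# Delete the pod: an HTTP DELETE with the pod's name in the URI. The API
# answers 200 OK with the full Pod object, including termination details.
curl -s -X DELETE http://127.0.0.1:8001/api/v1/namespaces/default/pods/nginx
```

The three kubectl commands behind these exchanges are presumably along the lines of kubectl create -f nginx.yaml, kubectl get pods and kubectl delete pod nginx; Wireshark shows them as the POST, GET and DELETE above.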
Manage Infrastructure and Services Lifecycle with Terraform and Ansible + Demo

Working as a Solution Architect for F5, I often need to have access to a lab environment. 'Traditionally', the method to implement a lab was to leverage tools like Vagrant, VMware, or others. A lab environment on a laptop is limited by its computing capacities (CPU/memory/disk/...). Today we are often asked to show how we can integrate our solutions with many different tools (orchestration solutions, version control systems, CI servers, containerized environments, ...). Unless your laptop is a powerful one, it's difficult to build such an environment and have it run smoothly. If the lab requirements were too demanding for my laptop, I would access one of our lab facilities to do my work. This approach itself is fine but brings some challenges:

- If you travel like I do, latency can become a hindrance and be frustrating.
- Lab facilities leverage "shared resources", which means you may face issues due to conflicting IP addresses, switch misconfiguration, maintenance operations, ...
- Some resources may already be reserved/used by a fellow colleague and not be available.

You may also face other constraints making both deployment models difficult:

- Need to share access to the lab. Not easy when it runs on your laptop or in a private cloud that is not always open to the outside world.
- People may need to be able to replicate your lab in their own environment.
- Stability/time needed for maintenance: using a lab over and over will make it messy. At some point, you'll reach a stage where you want to create a "new" environment that is clean and "trustworthy" (until you play too much with it again).

I'm sure I've missed other constraints but you get the idea: maintaining a lab and using it in a collaborative manner is challenging. Luckily, it's easier today to achieve those objectives: leverage Public Cloud! Public Cloud gives you access to "unlimited" computing services over the Internet that can be automated/orchestrated. With Public Cloud, you have access to an API allowing you to spin up a new environment with all the relevant tools deployed. This way, you may go straight to work (after enjoying a nice cup of coffee/tea while your infrastructure is being deployed!). Once your work is done, you can destroy this environment and save money. When you need a lab again, you'll be able to spin up a new/clean environment in a matter of minutes and be confident that it's a "healthy" lab.

When working on automation/orchestration of Public Cloud environments, I see two dominant tools: Terraform and Ansible.

https://www.terraform.io

Terraform is an open source command line tool that can be used to provision an infrastructure on dozens of different platforms and services (AWS, Azure, ...). One of the strengths of Terraform is that it is declarative: you specify the expected "state" of your infrastructure and Terraform will take care of all the underlying complexities (Does it need to be provisioned? Should I update the settings of a component? Which components should be created first? Do we need to delete resources that are not required anymore? ...). Terraform stores the "state" of your infrastructure and configuration to be more efficient in its work.
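As a concrete illustration of that declarative loop, the everyday Terraform workflow boils down to three commands (a generic sketch, not specific to this demo's repository):

```sh
terraform init     # download the providers the configuration declares (AWS, ...)
terraform plan     # preview what must change to reach the declared state
terraform apply    # create, update or delete resources to converge on it
```

Because the state is stored, re-running terraform apply against an unchanged configuration makes no changes at all.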
https://www.ansible.com

Ansible is a provisioning and configuration management tool. It is designed to automate application deployments. One of the strengths of Ansible is that it doesn't require any "agents" to run on the targeted systems. Ansible works by leveraging "modules". Those modules are consumed to define the "state" of the targeted systems. They are usually executed over SSH (by default).

So how do we leverage those tools to have a lab available on demand? In the following demo, we will:

- Leverage Terraform to manage the lifecycle of a new AWS environment (a dedicated VPC with external/internal subnets, Ubuntu instances, and an F5 solution). In addition to deploying our infrastructure, it will generate the relevant files for Ansible (an inventory file to know the IPs of our systems, and Ansible variable files to know how to configure the F5 solution with AS3).
- Use Ansible to manage the configuration of our systems: update our Ubuntu instances, install the NGINX web service on our Ubuntu instances, and deploy a standard F5 configuration to load balance our web application with AS3.

Here is a summary for the demo:

Demo time! (A condensed command-line version of this flow appears after the useful links below.)

By leveraging tools like Terraform or Ansible (you can achieve the same results with other tools), it is easy to handle the lifecycle of an infrastructure and the services running on top of it. This is what people call IaC (Infrastructure as Code).

Useful links:
- If you want to learn more about the setup of this demo, it is posted on GitHub: here
- F5 provides a list of templates to automate deployment in public cloud. It's available here: AWS Templates, Azure Templates, GCP Templates
- F5 Application Services 3 (AS3) documentation/examples: here
- If you want to learn more about our API and how to automate/orchestrate F5 solutions (free training): F5 A&O Training
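Condensed to commands, the demo's lifecycle looks roughly like this. The file names inventory.ini and site.yml are placeholders of mine; the real layout lives in the GitHub repository linked above:

```sh
# 1. Provision the AWS environment (VPC, subnets, Ubuntu instances, BIG-IP)
#    and let Terraform emit the Ansible inventory and variable files.
terraform init
terraform apply -auto-approve

# 2. Configure the systems: update Ubuntu, install NGINX, and push the AS3
#    declaration to the F5 solution.
ansible-playbook -i inventory.ini site.yml

# 3. When the lab is no longer needed, tear it down to stop paying for it.
terraform destroy -auto-approve
```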
Orchestrated Infrastructure Security - Advanced WAF

The F5 Beacon capabilities referenced in this article hosted on F5 Cloud Services are planning a migration to a new SaaS platform - check out the latest here.

Introduction

This article is part of a series on implementing Orchestrated Infrastructure Security. It includes High Availability, Central Management with BIG-IQ, Application Visibility with Beacon, and the protection of critical assets using F5 Advanced WAF and Protocol Inspection (IPS) with AFM. It is assumed that SSL Orchestrator is already deployed and basic network connectivity is working. If you need help setting up SSL Orchestrator for the first time, refer to the DevCentral article series Implementing SSL Orchestrator here.

This article focuses on configuring F5 Advanced WAF deployed as a Layer 2 solution. It covers the configuration of Advanced WAF protection on an F5 BIG-IP running version 16.0.0. Configuration files of a BIG-IP deployed as Advanced WAF can be downloaded here from GitLab. Please forgive me for using SSL and TLS interchangeably in this article.

This article is divided into the following high-level sections:

- Advanced WAF: Network Configuration
- Attach Virtual Servers to an Advanced WAF Policy

Advanced WAF: Network Configuration

The BIG-IP will be deployed with VLAN groups. This combines two interfaces to act as an L2 bridge, where data flows into one interface and is passed out the other. Vwire configuration will be covered later.

From the F5 Configuration Utility, go to Network > VLANs. Click Create on the right. Give it a name, ingress1 in this example. Set the Interface to 2.1. Set Tagging to Untagged, then click Add.

Note: In this example interface 2.1 will receive decrypted traffic from sslo1.

Interface 2.1 (untagged) should be visible like in the image below. Click Repeat at the bottom to create another VLAN. Give it a name, egress1 in this example. Set the Interface to 2.2. Set Tagging to Untagged, then click Add.

Note: In this example interface 2.2 will send decrypted traffic back to sslo1.

Interface 2.2 (untagged) should be visible like in the image below. Click Finished.

Note: This guide assumes you are setting up a redundant pair of SSL Orchestrators. Therefore, you should repeat these steps to configure VLANs for the two interfaces connected to sslo2. These VLANs should be named in a way that lets you differentiate them from the others, for example ingress2 and egress2.

It should look something like this when done:

Note: In this example interfaces 2.3 and 2.4 are physically connected to sslo2.

Click VLAN Groups, then Create on the right. Give it a name, vlg1 in this example. Move ingress1 and egress1 from Available to Members. Set the Transparency Mode to Transparent. Check the box to Bridge All Traffic, then click Finished.

Note: This guide assumes you are setting up a redundant pair of SSL Orchestrators. Therefore, you should repeat these steps to configure a VLAN Group for the two interfaces connected to sslo2. This VLAN Group should be named in a way that lets you differentiate it from the other, for example vlg1 and vlg2. It should look like the image below:

For full Layer 2 transparency, the following CLI option needs to be enabled:

(tmos)# modify sys db connection.vgl2transparent value enable
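For reference, the same first-pair configuration can also be driven from the command line. This is a sketch of the approximate tmsh equivalents of the GUI steps above (object names follow the article's examples; verify the exact vlan-group property syntax on your TMOS version before relying on it):

```sh
# VLANs for the interfaces facing sslo1 (untagged: 2.1 in, 2.2 out)
tmsh create net vlan ingress1 interfaces add { 2.1 { untagged } }
tmsh create net vlan egress1 interfaces add { 2.2 { untagged } }

# Transparent VLAN group bridging the two, with Bridge All Traffic enabled
tmsh create net vlan-group vlg1 members add { ingress1 egress1 } mode transparent bridge-traffic enabled
```

Repeat with ingress2/egress2 and vlg2 for the interfaces connected to sslo2, then apply the vgl2transparent db variable shown above.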
Attach Virtual Servers to an Advanced WAF Policy

You can skip this step if you already have an Advanced WAF policy created and attached to one or more virtual servers. If not, we'll cover it briefly. In this example we configured Comprehensive Protection, which includes Bot Mitigation, Layer 7 DoS and Application Security.

Give it a name, App_Protect1 in this example, then click Save & Next. Select the Enforcement Mode and Policy Type. Click Save & Next. Configure the desired Bot Defense options. Click Save & Next on the lower right. Configure the desired DoS Profile Properties. Click Save & Next. Assign the policy to your application server(s) by moving them to Selected. Click Save & Next. Click Finish/Deploy when done.

Summary

In this article you learned how to configure BIG-IP in Layer 2 transparency mode using VLAN groups. We also covered how to create an Advanced WAF policy and attach it to your virtual servers.

Next Steps

Click Next to proceed to the next article in the series.