F5 Cloud Services
Agility sessions announced
Good news, everyone! This year's virtual Agility will have over 100 sessions for you to choose from, aligned to 3 pillars. There will be:

- Breakouts (pre-recorded, 25 minutes, unlimited audience)
- Discussion Forums (live content up to 45 minutes, interactive for up to 75 attendees)
- Quick Hits (pre-recorded, 10 minutes, unlimited audience)

So, what kind of content are we talking about?

If you'd like to learn more about how to Simplify Delivery of Legacy Apps, you might be interested in:

- Making Sense of Zero Trust: what’s required today and what we’ll need for the future (Discussion Forum)
- Are you ready for a service mesh? (Breakout)
- BIG-IP APM + Microsoft Azure Active Directory for stronger cybersecurity defense (Quick Hits)

If you'd like to learn more about how to Secure Digital Experiences, you might be interested in:

- The State of Application Strategy 2022: A Sneak Peak (Discussion Forum)
- Security Stack Change at the Speed of Business (Breakout)
- Deploy App Protect based WAF Solution to AWS in minutes (Quick Hits)

If you'd like to learn more about how to Enable Modern App Delivery at Scale, you might be interested in:

- Proactively Understanding Your Application's Vulnerabilities (Discussion Forum)
- Is That Project Ready for you? Open Source Maturity Models (Breakout)
- How to balance privacy and security handling DNS over HTTPS (Quick Hits)

The DevCentral team will be hosting livestreams and the DevCentral lounge, where we can hang out and connect, and you can interact directly with session presenters and other technical SMEs. Please go to https://agility2022.f5agility.com/sessions.html to see the comprehensive list, and check back with us for more information as we get closer to the conference.
Automating certificate lifecycle management with HashiCorp Vault

One of the challenges many enterprises face today is keeping track of various certificates and ensuring that those associated with critical applications deployed across multiple clouds are current and valid. This integration helps you improve your security posture with short-lived dynamic SSL certificates using HashiCorp Vault and AS3 on BIG-IP.

First, a bit about AS3…

Application Services 3 Extension (referred to as AS3 Extension or, more often, simply AS3) is a flexible, low-overhead mechanism for managing application-specific configurations on a BIG-IP system. AS3 uses a declarative model, meaning you provide a JSON declaration rather than a set of imperative commands. The declaration represents the configuration which AS3 is responsible for creating on a BIG-IP system. AS3 is well-defined according to the rules of JSON Schema, and declarations validate according to JSON Schema. AS3 accepts declaration updates via REST (push), reference (pull), or CLI (flat file editing).

What is Vault?

Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, or certificates. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log.

A modern system requires access to a multitude of secrets: database credentials, API keys for external services, credentials for service-oriented architecture communication, and so on. Understanding who is accessing what secrets is already very difficult and platform specific. Adding on key rolling, secure storage, and detailed audit logs is almost impossible without a custom solution. This is where Vault steps in.

Public Key Infrastructure (PKI) provides a way to verify authenticity and guarantee secure communication between applications. Setting up your own PKI infrastructure can be a complex and very manual process. Vault PKI allows users to dynamically generate X.509 certificates quickly and on demand. Vault PKI can streamline distributing TLS certificates and allows users to create PKI certificates with a single command. Vault PKI reduces the overhead of the usual manual process of generating a private key and CSR, submitting it to a CA, and waiting for the verification and signing process to complete, while additionally providing an authentication and authorization mechanism for validation as well.

Benefits of using Vault automation for BIG-IP:

- Cloud- and platform-independent solution for your application anywhere (public cloud or private cloud)
- Uses the Vault agent and leverages AS3 templating to update expiring certificates
- No application downtime - dynamically update configuration without affecting traffic

Configuration:

1. Setting up the environment - deploy instances of BIG-IP VE and Vault in the cloud or on-premises

You can create the instances in the cloud for Vault and BIG-IP using Terraform from the repo https://github.com/f5devcentral/f5-certificate-rotate. This will download the Vault binary and start the Vault server, and it will also deploy the F5 BIG-IP instance on the AWS cloud. Once the instances are ready, SSH into the Vault Ubuntu server, change directory to /tmp, and execute the commands below.
# Point to the Vault Server
export VAULT_ADDR=http://127.0.0.1:8200

# Export the Vault Token
export VAULT_TOKEN=root

# Create roles and define allowed domains with TTL for the certificate
vault write pki/roles/web-certs allowed_domains=demof5.com ttl=160s max_ttl=30m allow_subdomains=true

# Enable the app role
vault auth enable approle

# Create an app policy and apply https://github.com/f5devcentral/f5-certificate-rotate/blob/master/templates/app-pol.hcl
vault policy write app-pol app-pol.hcl

# Apply the app policy using the app role
vault write auth/approle/role/web-certs policies="app-pol"

# Read the role id from Vault
vault read -format=json auth/approle/role/web-certs/role-id | jq -r '.data.role_id' > roleID

# Using the role id, use the secret id to authenticate to the Vault server
vault write -f -format=json auth/approle/role/web-certs/secret-id | jq -r '.data.secret_id' > secretID

# Finally, run the Vault agent using the config file
vault agent -config=agent-config.hcl -log-level=debug

2. Use the AS3 template file certs.tmpl with the values as shown

The template file shown below will be automatically uploaded to the Vault instance (the Ubuntu server) in the /tmp directory. Here I am using an AS3 file called certs.tmpl which is templatized as shown below.

{{ with secret "pki/issue/web-certs" "common_name=www.demof5.com" }}
[
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/remark",
    "value": "Updated on {{ timestamp }}"
  },
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/certificate",
    "value": "{{ .Data.certificate | toJSON | replaceAll "\"" "" }}"
  },
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/privateKey",
    "value": "{{ .Data.private_key | toJSON | replaceAll "\"" "" }}"
  },
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/chainCA",
    "value": "{{ .Data.issuing_ca | toJSON | replaceAll "\"" "" }}"
  }
]
{{ end }}

3. Vault will render a new JSON payload file called certs.json whenever the SSL certificate expires

When the certificate expires, Vault generates a new certificate which we can use to update the BIG-IP with a shell script; below is the certs.json created automatically.

Snippet of certs.json being created:

[
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/remark",
    "value": "Updated on 2020-10-02T19:05:53Z"
  },
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/certificate",
    "value": "-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIUaMgYXdERwzwU+tnFsSFld3DYrkEwDQYJKoZIhvcNAQEL\nBQAwEzERMA8GA1UEAxMIZGVtby5jb20wHhcNMjAxMDAyMTkwNTIzWhcNMj

4. Use the Vault agent file to run the integration continuously, without application traffic being affected

Example Vault agent file:

pid_file = "./pidfile"

vault {
  address = "http://127.0.0.1:8200"
}

auto_auth {
  method "approle" {
    mount_path = "auth/approle"
    config = {
      role_id_file_path = "roleID"
      secret_id_file_path = "secretID"
      remove_secret_id_file_after_reading = false
    }
  }
  sink "file" {
    config = {
      path = "approleToken"
    }
  }
}

template {
  source = "./certs.tmpl"
  destination = "./certs.json"
  #command = "bash updt.sh"
}

template {
  source = "./https.tmpl"
  destination = "./https.json"
}

5. For integration with HCP Vault

If you are using the HashiCorp-hosted Vault solution instead of standalone Vault, you can still use this solution by making a few changes in the Vault agent file. Detailed documentation for using HCP Vault is in the README. You can map tenant application objects on BIG-IP to namespaces on HCP Vault, which provides isolation.
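The agent configuration in step 4 leaves the update hook commented out (command = "bash updt.sh"), and the update script itself isn't shown in this article. As a rough sketch only, a script like it could push the rendered certs.json (which is a set of JSON Patch operations, as shown above) to the AS3 declare endpoint; the management address and credentials below are placeholders for your environment:

#!/usr/bin/env bash
# Hypothetical updt.sh: apply the Vault-rendered JSON Patch to the running AS3 declaration.
# 192.0.2.10 and the admin credentials are placeholders, not values from this article.
BIGIP_HOST=192.0.2.10

curl -sku "admin:${BIGIP_PASSWORD}" \
  -X PATCH "https://${BIGIP_HOST}/mgmt/shared/appsvcs/declare" \
  -H "Content-Type: application/json" \
  -d @/tmp/certs.json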
More details on how to create this solution are at https://github.com/f5businessdevelopment/f5-hcp-vault

Summary

The integration is made up of the components shown below; Venafi or Let's Encrypt can also be used as the external CA. Using this solution, you are able to:

- Improve your security posture with short-lived dynamic certificates
- Automatically update applications using templating and the robust AS3 service
- Increase collaboration by breaking down silos
- Deploy a cloud-agnostic solution on-premises or in the public cloud
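One simple way to confirm the rotation is working end to end is to watch the certificate dates change on the virtual server. A quick check from any client, using the demo hostname from the template above (substitute your own virtual server address), might look like this:

# Print the validity window of the certificate currently served by the virtual server.
# Run it again after the TTL elapses; the notBefore/notAfter dates should move forward.
echo | openssl s_client -connect www.demof5.com:443 -servername www.demof5.com 2>/dev/null \
  | openssl x509 -noout -dates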
F5 Essential App Protect and AWS CloudFront - Better Together.

What is the function of any application? In the simplest of terms, an application must provide value to the user. While the definition of "value" may differ, it is universally true that the user experience is critical to the perception of value. The user experience, loyalty, and your brand are predicated on the application being usable. So, what does it mean to be usable? It means that the application must be available, it must be secure, and it must be fast.

Recently we announced that F5 Essential App Protect (EAP) would be integrating with AWS CloudFront to provide both security and Content Delivery Network (CDN) services for customers. This is an evolutionary step for F5 in our transition to a software and services company. Now, from the same portal where customers deploy applications and employ F5 Essential App Protect for security, in just a couple of clicks or a single API call, they can also access a global CDN leveraging AWS CloudFront. Let's review why EAP is cool, why AWS CloudFront is interesting, and take a look at the steps to enable the feature.

F5 EAP is a high-value, low-time-investment WAF solution allowing customers to leverage F5's leading WAF engine as a service through an intuitive interface. Customers can leverage EAP to meet security needs while incurring minimal operational overhead and paying based on traffic patterns. F5 EAP offers customers access to the sensitive data masking, security signatures, new threat campaigns, and IP reputation services that we have developed over nearly 20 years and continue to evolve. By subscribing to EAP, customers can increase the efficacy of their SOC without increasing headcount or adding to operational burdens, while gaining access to security protections that go far beyond what other services can provide.

Below, the overview dashboard of an application protected by F5 Essential App Protect.

AWS CloudFront is a global CDN that allows users to deploy cacheable content across the globe in over 200 edge points of presence (PoPs). AWS CloudFront is built on AWS best-in-class technology and expansive know-how in building cloud services, and it leverages the AWS global network between the PoP closest to your customers and your Essential App Protect deployment (built on AWS). AWS CloudFront becomes the entry point for all traffic into the application, caching the content that is cacheable and passing the dynamic or non-cacheable portions to the origin.

So why would you want to enable a CDN for your application? The simple answer is user experience. The better performing an application is, the more likely it is that it will be used. By leveraging a CDN, your content can be placed closer to your users and cached. This decreases page load time, something that always makes users happy. A more complete answer includes attributes such as reducing your WAF costs by not having to process cacheable content for each request and removing that traffic load from your origin application systems. The use of a CDN provides benefits for both the content provider and the content consumer. In the graph below you can clearly see the impact that enabling caching had on the origin servers in one of our test applications.

So what does the high-level architecture look like? Below you can see that DNS traffic is processed by AWS Route 53, directing the traffic to one of the AWS CloudFront edge locations.
Traffic that has passed basic validations, such as the host header matching the certificate name, is either served from the edge cache or directed to F5 EAP for further processing. Traffic that passes WAF validation by F5 EAP is then passed to your origin application, whether it is in AWS or located in your data center. For applications that reside in AWS, all of this network traffic uses the AWS global network between the AWS CloudFront edge, F5 EAP, and your app. For applications that reside in a non-AWS location, you can still use F5 EAP with the AWS CloudFront integration (no AWS account required on your part), noting the slight difference in the data path as shown in the architecture below.

Enabling your CDN

Assuming that you already have an application deployed and protected by F5 EAP, let's take a look at updating it and adding AWS CloudFront functionality. From our F5 EAP portal we can see that our application has an FQDN of na2-auction.cloudservicesdemo.net. This application is already deployed, and the DNS flow is as follows: a user's computer performs a DNS lookup for na2-auction.cloudservicesdemo.net, and it is a CNAME for your enabled F5 EAP dataplane locations.

First, let's enable caching on our application, which will enable AWS CloudFront. Next, we need to configure the EdgeTiers (which geographies) that we would like to leverage to place the content closer to the users. Now we should filter any cookies or headers that we would like forwarded and enable compression. Since our application was already deployed on F5 EAP, we will use the same CNAME and direct our traffic into AWS CloudFront. After a brief period our F5 Essential App Protect deployment will have an integrated AWS CloudFront distribution, and you will start to see caching metrics as traffic traverses the data path described in the architecture.

What if you need to invalidate some content? Well, that is just as easy as enabling the feature! First you create the invalidation (purge), and when you click save, the process deletes the matching items from the cache based on your EdgeTier selections.

What about support for AWS CloudFront? That is on F5. We will deploy the AWS CloudFront distribution for you, and it is part of the service that you subscribed to. If you experience issues, you contact us at F5. We will diagnose and, if necessary, work with AWS to resolve the issue. No more having to coordinate multiple vendors for your application, and no more getting stuck in the middle of two vendors not agreeing on a resolution.

That's it! You have enabled an integrated security and CDN solution based on F5 EAP and AWS CloudFront in a few clicks from a single user interface! F5 Essential App Protect with our integrated AWS CloudFront offering can handle the full lifecycle of your application security and edge performance optimization needs. Take it for a test drive from the F5 Cloud Services portal.
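If you want a quick command-line sanity check of the new data path, you can verify the CNAME chain and look for CloudFront's cache headers in the responses. This is just a spot check using the demo hostname from this walkthrough; substitute your own FQDN:

# Confirm the application FQDN resolves through the expected CNAME chain
dig +short na2-auction.cloudservicesdemo.net CNAME

# Request the same cacheable object twice; on a repeat request you would
# typically expect to see an "x-cache: Hit from cloudfront" response header
curl -sI https://na2-auction.cloudservicesdemo.net/ | grep -i -e x-cache -e via
curl -sI https://na2-auction.cloudservicesdemo.net/ | grep -i -e x-cache -e via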
Exploring Kubernetes API using Wireshark part 1: Creating, Listing and Deleting Pods

Related Articles:
Exploring Kubernetes API using Wireshark part 2: Namespaces
Exploring Kubernetes API using Wireshark part 3: Python Client API

Quick Intro

This article answers the following question: what happens when we create, list and delete pods under the hood? More specifically, on the wire. I used these 3 commands:

I'll show you on Wireshark the communication between the kubectl client and the master node (API) for each of the above commands. I used a proxy so we don't have to worry about the TLS layer and can focus on HTTP only.

Creating NGINX pod

pcap: creating_pod.pcap (use the http filter on Wireshark)

Here's our YAML file:

Here's how we create this pod:

Here's what we see on Wireshark:

Behind the scenes, the kubectl command sent an HTTP POST with our YAML file converted to JSON, but notice the same thing was sent (kind, apiVersion, metadata, spec):

You can even expand it if you want to, but I didn't, to keep it short. Then, the Kubernetes master (API) responds with HTTP 201 Created to confirm our pod has been created:

Notice that the master node replies with similar data, with the additional status column, because after the pod is created it's supposed to have a status too.

Listing Pods

pcap: listing_pods.pcap (use the http filter on Wireshark)

When we list pods, kubectl just sends an HTTP GET request instead of POST because we don't need to submit any data apart from headers:

This is the full GET request:

And here's the HTTP 200 OK with the JSON file that contains all information about all pods from the default namespace:

I just wanted to emphasise that when you list pods, the resource type that comes back is PodList, whereas when we created our pod it was just Pod. Remember? The other thing I'd like to point out is that all of your pods' information should be listed under items. All kubectl does is display some of the API's info in a humanly readable way.

Deleting NGINX pod

pcap: deleting_pod.pcap (use the http filter on Wireshark)

Behind the scenes, we're just sending an HTTP DELETE to the Kubernetes master:

Also notice that the pod's name is included in the URI:

/api/v1/namespaces/default/pods/nginx ← this is the pod's name

HTTP DELETE, just like HTTP GET, is pretty straightforward:

Our master node replies with HTTP 200 OK as well as a JSON file with all the info about the pod, including about its termination:

It's also good to emphasise here that when our pod is deleted, the master node returns a JSON file with all information available about the pod. I highlighted some interesting info. For example, the resource type is now just Pod (not PodList, as when we're just listing our pods).
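If you'd like to reproduce these captures yourself, the same three operations can be driven with curl through a local kubectl proxy (which is also a convenient way to capture plain HTTP, as in this article). The pod name and image below simply mirror the NGINX example above:

# Start a local, unencrypted proxy to the API server (defaults to 127.0.0.1:8001)
kubectl proxy &

# Create the nginx pod (the HTTP POST captured above)
curl -s -X POST http://127.0.0.1:8001/api/v1/namespaces/default/pods \
  -H "Content-Type: application/json" \
  -d '{"apiVersion":"v1","kind":"Pod","metadata":{"name":"nginx"},"spec":{"containers":[{"name":"nginx","image":"nginx"}]}}'

# List pods in the default namespace (the HTTP GET, which returns a PodList)
curl -s http://127.0.0.1:8001/api/v1/namespaces/default/pods

# Delete the pod again (the HTTP DELETE, with the pod's name in the URI)
curl -s -X DELETE http://127.0.0.1:8001/api/v1/namespaces/default/pods/nginx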
Identity-Aware Proxy in the Public Cloud

Or: VPN Security without the VPN

Backstory

Many organizations have relied on Virtual Private Network (VPN) software to extend their enterprise networks out to remote workers. This is a time-honored strategy, but it comes with some significant caveats:

- Your current VPN solution may not be sized to support your full workforce connecting concurrently,
- Per-user VPN license costs can quickly add up,
- VPN software must be managed on all your end user devices,
- Bandwidth requirements at the head end of the VPN termination point

And, perhaps most importantly,

- You are extending your enterprise perimeter to devices that are also attached to untrusted networks.

To be clear, F5 is not saying to throw away a traditional VPN, but what we have realized for ourselves is that, with most applications being web enabled and using encryption for transport, some of the downfalls of a full VPN can be avoided with a web-based portal system. Yes, you will have cases where you want a full VPN tunnel, but if we could minimize this, could we improve the user experience as well as security, while also reducing some of the overhead of VPN technologies?

A VPN tunnel is indiscriminate in the traffic it allows to pass; without granular access control lists (and the maintenance they require) or split tunneling enabled, a compromised client could send any traffic it wants into your network, not just traffic for the applications you intend to expose. A hastily configured VPN creates a hole in your perimeter with a sledgehammer, when what you really want is a scalpel.

Fortunately, there is another option. The Identity-Aware Proxy, which is a function of F5 Access Policy Manager (APM), allows secure, contextual, per-application access into the enterprise without the overhead of managing a full VPN tunnel. No client-side software other than a web browser is needed, and because it's built on the full-proxy architecture of F5 BIG-IP, it's not dependent on a particular network topology and can be rapidly deployed in the public cloud for immediate scale. This article will take you through the combined F5 and public cloud solution, and how you can use the flexibility and scalability of the cloud to quickly get up and running.

Traditional VPN vs Identity-Aware Proxy

Traditional VPN

A traditional VPN relies on tunneling traffic from a remote client through your network perimeter into your network:

Figure 1: A traditional Layer 3 VPN

Modern VPNs use public-key cryptography over protocols like TLS or DTLS to provide confidentiality, and integrate with your corporate directory (and, ideally, a multi-factor authentication system) to authenticate the user attempting to connect. Once authentication is completed, however, a simple layer 3 connection is established, with little to no inspection or visibility into what is traversing that connection.

Identity-Aware Proxy

The Identity-Aware Proxy, by comparison, works at layer 7, sitting in the middle of the HTTPS conversation between your users and your applications, validating each request against the access policy you define:

Figure 2: BIG-IP Access Policy Manager as an Identity-Aware Proxy

Much like a traditional VPN, the Identity-Aware Proxy uses strong, modern TLS encryption and supports a broad array of multi-factor authentication solutions. Unlike a traditional VPN, there is no layer 3 tunnel created, and only valid HTTPS requests are forwarded. No client-side software is required.
The biggest difference compared to a traditional VPN is that, instead of making an allow or deny decision at the beginning of a session, each request is validated against the access policy you define. Throughout the lifetime of an access session, the proxy continually examines and assesses the requests being made by the client for authorization and context, including:

- IP Reputation: is this request coming from a known malicious source?
- Geo-blocking: is the request originating from a country or region where you don't normally provide services?
- Device security posture: does the client have the correct security and anti-malware services enabled?*
- Is the user a member of a group that is authorized to access this resource?

* Requires the F5 APM Helper App, included with APM

This is a core tenet of zero-trust: never trust, always verify. Just because we let a user connect to our environment once doesn't mean that any subsequent requests they make should be allowed. Per-request policy enforcement enables this continuous and ongoing validation.

Our Example Environment

Figure 3: BIG-IP APM Deployed in the public cloud

For this example, we're using the public cloud to quickly spin up a proxy that will allow users to connect to our applications inside the network without needing to deploy any additional hardware or software within our on-premises environment. We're using an IPSEC VPN from the cloud to our on-prem firewall for backend communication, as it's quick to configure and widely supported by existing firewalls and routers. You could just as easily use other hybrid connectivity options like ExpressRoute or Direct Connect, or interconnect services through your ISP or a connectivity provider like Equinix.

We're also using ADFS to provide federated login between your on-prem directory and BIG-IP in the cloud, but this isn't the only option. Active Directory, Azure AD, LDAP, Okta, SAML and oAuth/OIDC, amongst others, are all supported as user identity sources, as well as most multi-factor authentication systems like Azure MFA, Duo, SafeNet, and more.

F5 in Public Cloud

The F5 BIG-IP software that powers the Identity-Aware Proxy is available natively in all major public clouds. Multiple deployment and licensing options are available, but for the purposes of this example, we'll be using the cloud providers' own marketplaces to rapidly deploy BIG-IP with hourly billing and no up-front costs. If you're looking to get started with BIG-IP quickly, hourly billing is a great option; however, if you're planning on running this environment for an extended period of time, or if you want more flexibility and more control over costs than the utility model provides, F5 offers several other consumption models.

Deploying BIG-IP

While we provide tools to generate your own BIG-IP images for deployment in public cloud, the cloud marketplaces are the simplest and quickest way to get BIG-IP into your cloud environment. With just a few clicks and answering a few questions about your deployment, you can go from zero to deployed within minutes. This is the method we used in our example environment to get the proxy up and running as quickly as possible.

Of course, that's not the only way to do it. If you want a more programmatic approach, we offer supported templates for major cloud providers, as well as the F5 Automation Toolchain for declarative, repeatable deployments using tools like Terraform or Ansible.

For more details on running BIG-IP in the cloud, refer to F5 CloudDocs for guides, examples, templates, and more.
Access Guided Configuration

There are many underlying configuration objects, policies, and profiles that need to be created to enable a granular, contextual access policy, but BIG-IP's Guided Configuration makes the configuration easy and intuitive. Guided Configuration provides a simple, wizard-like interface that walks you through the steps to:

- Create the Identity-Aware Proxy;
- Integrate with an identity provider like Active Directory Federation Services, Okta, and OpenID Connect, amongst others;
- Enable multi-factor authentication for remote users (Azure MFA, Duo, RSA, SafeNet, etc.);
- Enable seamless Single Sign-On to apps behind the proxy;
- Map access requirements to applications inside your environment; and,
- Create contextual access rules (geo-blocking, IP reputation, device security posture, group membership, etc.).

With Guided Configuration, the underlying complexity of building a granular, per-request access policy is abstracted away. You don't need to be an F5 guru to securely deploy applications through the proxy.

Global Availability

Now that you've deployed your proxy, you want to make sure that it's scalable and resilient. Guided Configuration, templates, and the Automation Toolchain make deploying multiple instances across cloud environments easy, but directing traffic to those instances and getting users to the best proxy is another matter.

If you've been working with F5 for a while, you're probably familiar with F5 DNS Services, which provides intelligent, dynamic DNS resolution for globally-available applications. Amongst other features, F5 DNS provides Global Server Load Balancing (GSLB), which allows you to direct users to any one of several globally-distributed endpoints for a particular service.

In our environment we use GSLB to make our proxy redundant and globally-available. Users are, by default, routed to the closest proxy to serve their request. If one instance fails, reaches capacity, or if that cloud region otherwise has a bad day, then that failure is automatically detected and users are moved to an alternate site.

Figure 4: Global Availability with F5 Cloud Services DNS Load Balancing

If you don't already have F5 DNS in your environment, you don't need to run out and buy a bunch of BIG-IPs just to enable GSLB. F5 Cloud Services DNS Load Balancer provides GSLB from an on-demand, usage-based SaaS platform. There's even a free tier that offers a load balancing configuration with up to 3 million DNS queries per month, making it easy to get started. In our environment, we use CNAME records to direct multiple applications through the same DNS Load Balancer, so depending on your usage requirements, you can add global availability to your proxy.

Summary

Hopefully this article has shown you how quick and easy it is to get going with F5 in the cloud. Between Infrastructure-as-a-Service from your cloud vendor, hourly billing through the cloud marketplace for BIG-IP, and F5 Cloud Services for global availability, you can deploy this entire solution on utility billing without needing to go through cumbersome procurement processes.

Please let me know in the comments below how you're using cloud and infrastructure-on-demand to respond quickly to the changing needs of a workforce in these trying times. Stay safe, stay healthy, and wash your hands!
import f5!

We are super excited to announce a preview of the F5 SDK, a set of client tools which facilitate consuming some of our popular APIs/Services, and which currently consists of the following:

- f5-sdk-python - a new python client library
- f5-cli - a remote CLI that is built off the f5-sdk-python

Getting existential, as the old saying goes, the most valuable commodity in the world is time. Our short time here on earth is finite, the average lifespan is only around 70-71 years (according to Wikipedia), and if we're not doing everything we can to try to save you every sliver of time possible, we're not fulfilling one of our fiduciary duties as a technology company.

As software developers know, code is the ultimate embodiment of saving time (after all, a "for loop" is nothing but a modern marvel that repeats something dutifully to save time). So, as a convenience to developers and system integrators, vendors and platforms provide Software Development Kits (SDKs) to help them save even more time on that last mile. Developers and integrators also hate having to re-invent the wheel (which often translates to code "not related" to their business logic) and look to leverage something that already exists. In hardware, you may leverage drivers. In software, you import libraries/modules. In this case, these SDK client libraries provide objects in their own language to help abstract you from low-level implementation details like having to know the service's exact URLs, authentication mechanisms, required transport (ex. REST, SOAP, GRPC), file downloads/uploads, etc. and let you focus on your business logic. Here at F5, we rely on SDKs like the Broadcom SDK, Octeon SDK, Intel DPDK (for hardware), Cloud SDKs (for software), etc., and the age-old value of offering an SDK hasn't gone anywhere. SDKs are getting released and making headlines every day, for example:

https://www.engadget.com/2020/01/24/boston-dynamics-robotic-dog-spot-sdk/

We've actually had an unofficial SDK for a while with a project called f5-common-python. It was a "common" library for the core underlying component in one of our integrations (the LBaaS driver for OpenStack) but, when looking at it from a more current end-user's perspective, there were some significant enough changes we wanted to implement that mandated a fresh start. Nevertheless, it gathered a fair amount of traction, and the value of a generic reusable client library was a great pattern we wanted to officially expand upon.

And now what's this CLI? I thought it was all about the API now and clients didn't matter? Yes, the main value is obviously in the API itself since that is what scales and where you deliver your main value proposition. However, there is still a client user experience (UX) to your APIs, and doing everything you can to optimize and enhance the overall client experience is key to firing on all cylinders. SDKs and CLIs are like the client browser UX to your service... but for programmatic access and automation. From articles like the one below, you can see some of the traditional values of having a CLI component.

https://www.zdnet.com/article/good-news-for-developers-the-cli-is-back/

"CLI is arguably better for ad hoc tasks or quick exploratory operations since it is more human-readable."
Source: https://nordicapis.com/the-return-of-the-cli-clis-being-used-by-api-related-companies/

The familiar CLI UX is a common entry point for everyone, whether it is developers testing/exploring your API or network/system admins automating common tasks. And don't you already have a CLI called TMSH?
Well, this is a little different. Everyone today knows how useful *remote* CLIs like AWS's aws-cli, Azure's az, Google's gcloud, OpenStack's osc, Kubernetes' kubectl, etc. are in consuming and developing against a service or platform. Instead of SSH'ing to a single device, remote CLIs use transports/protocols like HTTP or GRPC to talk to a service's remote APIs. The f5-cli, inspired by popular public cloud shells (which are all built on python SDKs), will focus on providing the same ease and convenience for some of our most popular APIs/Services and aims to be demand/use-case driven (high-usage APIs vs. exposing the entire portfolio of APIs). For example, the Automation Tool Chain APIs. For those of you who may not have heard about them yet, these are some of the powerful declarative APIs that allow you to provision entire configurations in a single API call / JSON payload:

- Application Services 3 (AS3) = Virtual Services
- Declarative Onboarding (DO) = System-related config like licensing, provisioning, networking, clustering, etc.
- Telemetry Streaming (TS) = Analytics, Logging

That would normally require 10s to 100s of imperative API calls. As with any API(s), however, there is some amount of overhead (authentication, async transaction handling, best practices, etc.) in consuming them that can be optimized and captured via client libraries.

Probably the most common use case: "I want to create a Virtual Service" (using AS3). Putting the CLI in debug, you can see some of the low-level details the SDK is handling:

- Managing authentication (dependencies on other APIs)
- Managing tasks for async operations
- Retry attempt logic
- Even convenience functions like installing the required extension package (an RPM) if not already installed

> export F5_SDK_LOG_LEVEL=DEBUG
> f5 bigip extension service create --component as3 --install-component --declaration as3.json

user1@desktop: f5-cli $ f5 bigip extension service create --component as3 --declaration as3.json --install-component
2020-03-10 00:17:49,779 - f5sdk.bigip.mgmt_client - DEBUG: Performing ready check using port 8443
2020-03-10 00:17:49,810 - f5sdk.bigip.mgmt_client - DEBUG: Logging in using user + password
2020-03-10 00:17:49,810 - f5sdk.bigip.mgmt_client - DEBUG: Getting authentication token
2020-03-10 00:17:49,814 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/authn/login
2020-03-10 00:17:50,009 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:17:50,010 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: PATCH /mgmt/shared/authz/tokens/EEPEH4IXVKX7TVP27NECH76VQJ
2020-03-10 00:17:50,131 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:17:50,134 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/iapp/package-management-tasks
2020-03-10 00:17:50,252 - f5sdk.utils.http_utils - DEBUG: HTTP response: 202 Accepted
2020-03-10 00:17:50,253 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: GET /mgmt/shared/iapp/package-management-tasks/11fc9258-a2c2-495b-8687-2e427fd64091
2020-03-10 00:17:50,363 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:17:59,798 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:01,605 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:01,608 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:03,814 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:03,815 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:05,678 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:05,680 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:07,905 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:07,908 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:09,642 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:09,644 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:11,707 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:11,709 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:13,915 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:13,918 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:16,050 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:16,052 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:18,042 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:18,044 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:20,623 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:20,625 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:22,823 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:22,826 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:24,864 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:24,866 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:26,624 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:26,625 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:28,922 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:28,924 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: GET /mgmt/shared/iapp/package-management-tasks/a7c839c3-b340-4497-a520-437a237aef30
2020-03-10 00:18:32,727 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:33,733 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: GET /mgmt/shared/iapp/package-management-tasks/a7c839c3-b340-4497-a520-437a237aef30
2020-03-10 00:18:33,865 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:33,866 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: GET /mgmt/shared/appsvcs/declare
2020-03-10 00:18:37,066 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: GET /mgmt/shared/appsvcs/declare
2020-03-10 00:18:37,877 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/iapp/package-management-tasks
2020-03-10 00:18:37,998 - f5sdk.utils.http_utils - DEBUG: HTTP response: 202 Accepted
2020-03-10 00:18:37,999 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: GET /mgmt/shared/iapp/package-management-tasks/093dfa5a-eb52-4002-a3d8-88d3edf4ad71
2020-03-10 00:18:38,116 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:38,125 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/appsvcs/declare
2020-03-10 00:18:56,120 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
{
  "declaration": {
    "Sample_app_sec_Tenant": {
      "HTTPS_Service": {
        "Pool1": {
          "class": "Pool",
          "members": [
            {
              "serverAddresses": [
                "10.0.1.11"
              ],
              "servicePort": 80
            }
          ],
          "monitors": [
            "http"
          ]
        },
        "WAFPolicy": {
          "class": "WAF_Policy",
          "ignoreChanges": true,
          "url": "https://raw.githubusercontent.com/f5devcentral/f5-asm-policy-templates/master/owasp_ready_template/owasp-no-auto-tune-v1.1.xml"
        },
        "class": "Application",
        "serviceMain": {
          "class": "Service_HTTPS",
          "policyWAF": {
            "use": "WAFPolicy"
          },
          "pool": "Pool1",
          "redirect80": false,
          "serverTLS": {
            "bigip": "/Common/clientssl"
          },
          "snat": "auto",
          "virtualAddresses": [
            "0.0.0.0"
          ]
        },
        "template": "https"
      },
      "class": "Tenant"
    },
    "class": "ADC",
    "controls": {
      "archiveTimestamp": "2020-03-10T00:18:55.759Z"
    },
    "id": "autogen_3a78cbd8-7aa4-4fb6-8db9-3e458f583513",
    "label": "ASM_VS1",
    "remark": "ASM_VS1",
    "schemaVersion": "3.0.0",
    "updateMode": "selective"
  },
  "results": [
    {
      "code": 200,
      "host": "localhost",
      "lineCount": 28,
      "message": "success",
      "runTime": 16658,
      "tenant": "Sample_app_sec_Tenant"
    }
  ]
}

That's actually quite a bit of work offloaded to accelerate getting you up and going on an API. As our portfolio expands to offer more and more products and services, this will hopefully save our users some extra time.

DISCLAIMER: it's in public preview and we have many enhancements already planned.

F5 SDK
Documentation: https://clouddocs.f5.com/sdk/f5-sdk-python/
Github repo: https://github.com/f5devcentral/f5-sdk-python

F5 CLI
Documentation: https://clouddocs.f5.com/sdk/f5-cli/
Github repo: https://github.com/f5devcentral/f5-cli

Please take a look soon and provide any feedback by filing an issue on the Github repos.
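For contrast, here is a rough sketch of the kind of calls you would otherwise script by hand, based only on the endpoints visible in the debug log above; the host, credentials and token value are placeholders, and the RPM upload/install steps the SDK performed are omitted for brevity:

# Authenticate and obtain a token (the SDK's "Logging in using user + password" step)
curl -sk -X POST https://192.0.2.10/mgmt/shared/authn/login \
  -H "Content-Type: application/json" \
  -d '{"username":"admin","password":"<password>","loginProviderName":"tmos"}'

# Post the AS3 declaration using the token returned above
curl -sk -X POST https://192.0.2.10/mgmt/shared/appsvcs/declare \
  -H "Content-Type: application/json" \
  -H "X-F5-Auth-Token: <token>" \
  -d @as3.json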
F5 powered API security and management

Editor's Note: The F5 Beacon capabilities referenced in this article, hosted on F5 Cloud Services, are planned for migration to a new SaaS platform - check out the latest here.

Introduction

Application Programming Interfaces (APIs) enable application delivery systems to communicate with each other. According to a survey conducted by IDC, security is the main impediment to delivery of API-based services. Research conducted by F5 Labs shows that APIs are highly susceptible to cyber-attacks. Access or injection attacks against the authentication surface of the API are launched first, followed by exploitation of excessive permissions to steal or alter data that is reachable via the API. Agile development practices, highly modular application architectures, and business pressures for rapid development contribute to security holes in both APIs exposed to the public and those used internally.

API delivery programs must include the following elements: (1) automated publishing of APIs using Swagger or OpenAPI files, (2) authentication and authorization of API calls, (3) routing and rate limiting of API calls, (4) security of API calls, and finally (5) metric collection and visualization of API calls. The reference architecture shown below offers a streamlined way of achieving each element of an API delivery program.

The F5 solution works with modern automation and orchestration tools, equipping developers with the ability to implement and verify security at strategic points within the API development pipeline. Security gets inserted into the CI/CD pipeline where it can be tested and attached to the runtime build, helping to reduce the attack surface of vulnerable APIs.

Common Patterns

Enterprises need to maintain and evolve their traditional APIs, while simultaneously developing new ones using modern architectures. These can be delivered with on-premises servers, from the cloud, or in hybrid environments. APIs are difficult to categorize, as they are used in delivering a variety of user experiences, each one potentially requiring a different set of security and compliance controls.

In all of the patterns outlined below, NGINX Controller is used for API management functions such as publishing the APIs and setting up authentication and authorization, and NGINX API Gateway forms the data path. Security controls are addressed based on the security requirements of the data and API delivery platform.

1. APIs for highly regulated business

Business APIs that involve the exchange of sensitive or regulated information may require additional security controls to be in compliance with local regulations or industry mandates. Some examples are apps that deliver protected health information or sensitive financial information. Deep payload inspection at scale, and custom WAF rules, become an important mechanism for protecting this type of API. F5 Advanced WAF is recommended for providing security in this scenario.

2. Multi-cloud distributed API

Mobile app users who are dispersed around the world need to get a response from the API backend with low latency. This requires that the API endpoints be delivered from multiple geographies to optimize response time. F5 DNS Load Balancer Cloud Service (global server load balancing) is used to connect API clients to the endpoints closest to them. In this case, F5 Cloud Services Essential App Protect is recommended to offer baseline security, and NGINX App Protect, deployed closer to the API workload, should be used for granular security controls. Best practices for this pattern are described here.
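To see this kind of geo-steering for yourself, you can compare what a GSLB-managed record resolves to from different vantage points. This is purely an illustrative check; the hostname is a placeholder and the public resolvers are chosen for convenience, so depending on where each query appears to originate, the answers may differ:

# Ask two different public resolvers for the same load-balanced name;
# clients in different regions are typically steered to different endpoints
dig @8.8.8.8 +short api.example.com
dig @1.1.1.1 +short api.example.com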
3. API workload in Kubernetes

F5 service mesh technology helps API delivery teams deal with the challenges of visibility and security when API endpoints are deployed in a Kubernetes environment. NGINX Ingress Controller, running NGINX App Protect, offers seamless North-South connectivity for API calls. F5 Aspen Mesh is used to provide East-West visibility and mTLS-based security for workloads. The Kubernetes cluster can be on-premises or deployed in any of the major cloud provider infrastructures, including Google's GKE, Amazon's EKS/Fargate, and Microsoft's AKS. An example implementing this pattern with an NGINX per-pod proxy is described here, and more examples are forthcoming in the API Security series.

4. API as Serverless Functions

F5 Cloud Services Essential App Protect, offering SaaS-based security, or NGINX App Protect deployed in AWS Fargate, can be used to inject protection in front of serverless API endpoints.

Summary

F5 solutions can be leveraged regardless of the architecture used to deliver APIs or the infrastructure used to host them. In all patterns described above, metrics and logs are sent to one or many of the following: (1) F5 Beacon, (2) a SIEM of choice, (3) an ELK stack. Best practices for customizing API-related views via any of these visibility solutions will be published in the DevCentral series that follows. DevOps can automate F5 products for integration into the API CI/CD pipeline. As a result, security is no longer a roadblock to delivering APIs at the speed of business. F5 solutions are future-proof, enabling development teams to confidently pivot from one architecture to another.

To complement and extend the security of the above solutions, organizations can leverage the power of F5 Silverline Managed Services to protect their infrastructure against volumetric, DNS, and higher-level denial of service attacks. The Shape bot protection solutions can also be coupled in to detect and thwart bots, including securing mobile access with the Shape mobile SDK.
F5 Essential App Protect: Go!

App deployments nowadays tend to target API-driven distributed services and microservices-based topologies. How do you move fast when it comes to "securing an app" when you have so many things to worry about: what services are part of the overall topology, where and how these services are deployed (VMs, containers, PaaS), and what technology stacks and frameworks these services are built on?

Secure Your Apps at "Ludicrous Speed"

As evidenced by the OWASP Top 10, we know that one of the most critical attack surfaces is web-facing app front-ends. From cross-site scripting (XSS) attacks, to injection, to exploits through third-party scripts, there is a lot to be concerned about; especially when you take into account the common practice of using external libraries and open source components in JavaScript-based apps. And while security absolutely needs to be part of the development process, both tech and attacks are evolving so rapidly that most dev teams can't be expected to keep up!

With that in mind our team built F5 Essential App Protect, to enable "LUDICROUS SPEED" for securing your web apps. We architected it to deliver Web Application Firewall functionality with more capabilities, delivered as-a-Service with F5 Cloud Services. Sparing you what may sound like an obvious marketing spiel, like the simplicity of the UI, applying F5's 20+ years of security expertise, or the speed of deployment and integration (it's a SaaS, duh)... let me focus on a few reasons why I'm personally excited about our implementation of this solution:

Built for global app architectures

It's deployed on a global data plane, which means you can co-locate your service close to the application or service endpoint that's being protected. For example, an HTTP request that would typically be routed to a US-EAST based app doesn't need to "bounce" around the world to get processed; Essential App Protect automatically detects and recommends US-EAST as a region and deploys a protection instance in the region closest to your web service, resulting in minimal latency. This supports the "any app on any cloud" mantra, without sacrificing performance.

Forward-looking protection

Besides using over 5,000 signatures right out of the gate to check for malicious traffic, Essential App Protect continuously ingests new signatures from F5 Threat Labs and stays current to help defend against developing threats. On top of that, it also uses an advanced probability-based rating system that anticipates malicious requests and improves as the platform evolves. Simply put, we stay on top of the rapidly evolving threat landscape, so that you don't have to!

Simple on-ramp, easy APIs

The north star of Essential App Protect is to make app security simple yet flexible, not only from the UI, but also targeting DevOps scenarios with an API-first approach. This means you can onramp protection for your app with a couple of declarative API calls, going from zero to ready in just a minute. Everything is defined through one simple JSON template, which makes it very easy to integrate into your CI/CD pipeline. All of the config, from tuning of protection options to accessing security event logs, is done through APIs. This makes automation a no-brainer, be it the initial deployment, or managing a consistent security policy across your dev/test/prod environments for all of your app deployments.
"Go ahead, take it for a spin!"

F5 Essential App Protect provides the enterprise-grade security you need to keep your web-facing apps safe. It is delivered as-a-Service, with no hardware to manage or software to download. And you don't need to be a security expert, because the service is pre-configured using the best practices we've compiled while working with top enterprises for the last 20 years. We architected it for the cloud and global delivery, while focusing on future-proofing your app protection and making it DevOps-ready out of the gate.

Check out Essential App Protect. Go ahead, sign up for the free trial, and check out the new Essential App Protect Lab on GitHub... Go!
DevOps Explained to the Layman

Related articles:
Containers: plug-and-play code in DevOps world - part 1
Containers: plug-and-play code in DevOps world - part 2

Quick Intro

My goal here is to allow the big picture of DevOps to sink in, so that when you read a more detailed explanation you can take the big picture for granted and focus only on the details, to solidify your DevOps knowledge even further. Does it make sense?

How do we understand what DevOps is?

After discussing the subject with loads of different people and thinking about it myself, I found that one framework to understand DevOps at a very high level could be the following:

- Know how software was traditionally delivered in the past
- Know what was the problem with the above traditional methodology
- Know how DevOps solves the above problem - YES, here's where you understand what DevOps is at a high level
- Define what DevOps is yourself! - That's right, knowing what it does gives you the power to conceptualise it yourself.

Know how software was delivered in the past

It was a pretty simple strategy. We gather all requirements with the customer, then design, code it all, hand it off to the QA team for testing and lastly to the Ops team to deploy it. Beautiful, isn't it? Until it is not...

Know what is the problem with the traditional (waterfall) methodology

Let's imagine a situation where 2 teams of developers are contributing in parallel. That's quite common. After they finished coding their components (or even the whole application!) they found the components are incompatible and don't integrate well. What a waste of time, eh? Waiting for both teams to finish everything and only then merging their code.

Now, say we finished integration, but when we hand off our app to QA to test it, a bug is found? Or even worse, what if apart from correcting our bug we also coded many days or weeks on top of it? Pretty bad, eh?

Up to now that's all on the Dev ↔ QA side of things. Let me give you an example where IT Ops can also be a bottleneck:

If we take into account that Dev spent all this time coding on top of the above code, then that's 8 days wasted. In reality, wasted time could easily go over 2 weeks, as in larger organisations there might be other processes involved. For example, if IT Ops needs help from a Database Administrator (DBA) or another key person to finish setting up the test environment.

How do we solve these problems? That takes us to DevOps.

Know how DevOps solves above problems

Some people say a picture is worth 1,000 words:

Continuous Integration

In the DevOps approach, we use an Agile methodology where typically smaller functional chunks of code, just enough to be meaningful (e.g. a new feature), from all Dev teams are regularly integrated, so integration errors are spotted and corrected more quickly. This is called Continuous Integration (CI).

The smaller chunks of code are usually plug-and-play components with an API to communicate with other components, which makes things much more organised when compared to a single large component. This is called Microservices.

Continuous Delivery

Following CI, integrated code is automatically tested through a couple of environments, all the way to a pre-production environment where it is "deployable", or ready to be deployed. This is called Continuous Delivery (CD), and as it follows CI we call it CI/CD. In CD, every change is proven to be deployable but not yet deployed to production. Some companies do not automate further than that and prefer to do manual deployment.
Continuous Deployment

Following CI/CD, when even deployment is automated, we call it Continuous Integration followed by Continuous Delivery and Continuous Deployment (CI/CD²). This is the goal of DevOps.

Infrastructure as Code

For the Dev team to be able to automatically deploy code quickly in different test environments, all the way to the production environment, we need a heavily automated infrastructure. This is called Infrastructure as Code. Notice that this is the antidote for the Dev ↔ Ops bottleneck in our previous example. I'm including here both infrastructure provisioning tools, the ones you actually build and deploy your infrastructure with at the click of a button (e.g. using cloud services), as well as configuration management tools, the ones you configure your infrastructure with (e.g. how do you upgrade 10,000 boxes with a single command?).

Culture of Collaboration

None of the above is possible without proper collaboration between customers, developers and IT operations. Collaboration is the trigger of continuous feedback and continuous improvement. Developers focus on coding, IT operations on managing automated infrastructure, and both on innovation and improvement.

Other key practices

CI/CD, Infrastructure as Code and a culture of collaboration are key DevOps practices. Regularly updating code according to user feedback, developing smaller functional detached pieces of code that communicate with each other via APIs (microservices), standardising systems and tools, and continuous monitoring are also DevOps best practices. Cloud is also popular in the DevOps community because it natively supports DevOps tools and has the degree of automation and speed required for DevOps practice.

Define what DevOps is yourself

Now you have a 10,000-feet idea of what DevOps does, so you should be able to conceptualise what DevOps is yourself, but here's one definition I came up with:

DevOps is a practice that eliminates sources of waste from the application delivery pipeline, making it efficient by means of optimising processes, removing silos, using automation tools, standardising platforms and establishing a strong culture of collaboration.

I will create other articles to incrementally add more details about DevOps practice.
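To tie the CI/CD² terminology above to something concrete, here is a toy, purely illustrative script-level sketch of such a pipeline; the build, test and deploy commands are placeholders for whatever your project actually uses:

#!/usr/bin/env bash
# Toy CI/CD² pipeline: every merged change is built, tested and, if green, deployed.
set -euo pipefail

# Continuous Integration: build the merged code and run unit tests
make build
make unit-test

# Continuous Delivery: push the build to a staging environment and run acceptance tests
./deploy.sh staging
./acceptance-tests.sh staging

# Continuous Deployment: promote the same artifact to production automatically
./deploy.sh production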
F5 Cloud Services Lab Now on GitHub

As developers and DevOps practitioners, we strive to optimize and simplify app delivery. But staying on top of all of the new technologies while also getting hands-on experience can be challenging! With that in mind, we just published to GitHub a new free interactive hands-on lab that takes you through several core F5 Cloud Services. It allows you to set up and manage a globally-distributed DNS infrastructure and web application security services -- all applied to a set of live deployed app instances. We provide you with a number of live web apps, and also a handy Lab service API that creates and updates DNS entries for your own lab instance, so that you can run through multiple scenarios with a live DNS service and real applications. This makes it easy to "kick the tires" on everything you need to get some quality face time with cutting-edge DNS technologies and security best practices!

Here are the Top 3 Reasons Why You Should Check Out This Lab:

1) Set up DNS Failover: While frequently forgotten, DNS is the first step to your apps getting in front of your customers. Not only can you add DNS redundancy with the F5 DNS Cloud Service, but you can also take advantage of our global points of presence and built-in DDoS protection - all in just a few API calls. This will ensure your apps happily resolve and push traffic to the right app endpoint lightning fast!

2) Configure Global DNS Load Balancing: From multi-cloud to geo-proximity rules, the F5 DNS Load Balancer Cloud Service provides an easy way to create a true advanced 2-tier load balancer architecture. We provide a set of global application endpoints, which you configure into load-balancing pools, add health checks to, and finally push DNS records for, to ensure that traffic from specific regions flows to the designated endpoints.

3) Hack and protect a live app: You can test your hand at "hacking" one of our web-facing apps, which was intentionally built to be vulnerable! Then you can stand up your own Web Application Firewall (WAF) instance using F5 Essential App Protect Service and run some automated attacks against that instance using the Lab Service API. We provide you a live environment to both attack and protect, so that you can explore the various options available to users of this awesome new service.

Finally, this lab is a quick way to get familiar with best practices for deploying DNS and a web application firewall without touching your own DNS or production applications. You will have a handy reference and a library of useful API calls for when you're ready to do something with your own apps & services. Check out the free F5 Cloud Services Lab on GitHub, and give it a try today!
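If you want a taste of the "hack and protect" exercise before opening the lab, the simplest kind of probe is a request carrying an obvious attack payload, which a protected instance should block rather than pass through to the vulnerable app. The hostname below is a placeholder for your own lab instance:

# Send a classic reflected-XSS style payload; behind Essential App Protect this
# request should be blocked instead of reaching the vulnerable application
curl -i "https://your-eap-protected-app.example.com/?search=<script>alert(1)</script>"

# Compare with a normal request, which should pass through untouched
curl -i "https://your-eap-protected-app.example.com/?search=shoes"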