AWS
Using BIG-IP GTM to Integrate with Amazon Web Services
This is the latest in a series of DNS articles that I've been writing over the past couple of months. This article is taken from a fantastic solution that Joe Cassidy developed. So, thanks to Joe for developing this solution, and thanks for the opportunity to write about it here on DevCentral. As a quick reminder, my first six articles are:

Let's Talk DNS on DevCentral
DNS The F5 Way: A Paradigm Shift
DNS Express and Zone Transfers
The BIG-IP GTM: Configuring DNSSEC
DNS on the BIG-IP: IPv6 to IPv4 Translation
DNS Caching

The Scenario

Let's say you are an F5 customer who has external GTMs and LTMs in your environment, but you are not leveraging them for your main website (example.com). Your website is a zone sitting on your Windows DNS servers in your DMZ that round-robin load balance to some backend web servers. You've heard all about the benefits of the cloud (and rightfully so), and you want to move your web content to the Amazon cloud. Nice choice! As you were making the move to Amazon, you were instructed by Amazon to simply CNAME your domain to two unique Amazon Elastic Load Balancer (ELB) domains. Amazon's request was not feasible for a few reasons, one of which is that it violates the DNS RFCs: a CNAME cannot coexist with the other records (such as SOA and NS) that must live at the zone apex. So, you engage in a series of architecture meetings to figure all this stuff out. Amazon told your Active Directory/DNS team to CNAME www.example.com and example.com to two AWS clusters: us-east.elb.amazonaws.com and us-west.elb.amazonaws.com. You couldn't use Microsoft DNS to perform a basic CNAME of these records because a single name cannot be aliased to multiple CNAME targets. Additionally, you couldn't point to IP addresses because Amazon said they will be using dynamic IPs for your platform. So, what to do, right?

The Solution

The good news is that you can use the functionality and flexibility of your F5 technology to easily solve this problem. Here are a few steps that will guide you through this specific scenario:

1. Redirect requests for http://example.com to http://www.example.com and apply the redirect to your Virtual Server (1.2.3.4:80). You can redirect using HTTP Class profiles (v11.3 and prior), using a policy with Centralized Policy Matching (v11.4 and newer), or you can always write an iRule to redirect (a sample iRule follows this list).
2. Make www.example.com a CNAME record to example.lb.example.com, where *.lb.example.com is a sub-delegated zone of example.com that resides on your BIG-IP GTM.
3. Create a global traffic pool "aws_us_east" that contains no members but rather a CNAME to us-east.elb.amazonaws.com. Create another global traffic pool "aws_us_west" that contains no members but rather a CNAME to us-west.elb.amazonaws.com. The following screenshot shows the details of creating the global traffic pools (using v11.5). Notice you have to select the "Advanced" configuration to add the CNAME.
4. Create a global traffic Wide IP example.lb.example.com with two pool members, "aws_us_east" and "aws_us_west". The following screenshot shows the details.
5. Create two global traffic regions: "eastern" and "western". The screenshot below shows the details of creating the traffic regions.
6. Create global traffic topology records using "Request Source: Region is eastern" and "Destination Pool is aws_us_east". Repeat this for the western region using the aws_us_west pool. The screenshot below shows the details of creating these records.
7. Modify the pool settings under Wide IP www.example.com to use "Topology" as the load balancing method. See the screenshot below for details.
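For step 1, a minimal redirect iRule might look like the following. This is a sketch only; the hostname comes from the example above, and your preferred redirect method may differ by TMOS version:

  when HTTP_REQUEST {
      # Redirect apex requests to the www host, preserving the URI
      if { [string tolower [HTTP::host]] equals "example.com" } {
          HTTP::respond 301 Location "http://www.example.com[HTTP::uri]"
      }
  }

Attach the iRule to the 1.2.3.4:80 virtual server so that only apex traffic is redirected; requests that already arrive for www.example.com pass through untouched.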
How it all works...

Here's the flow of events that take place as a user types in the web address and ultimately receives the correct IP address:

1. The external client types http://example.com into their web browser.
2. Internet DNS resolution takes place and maps example.com to your virtual server address: IN A 1.2.3.4.
3. An HTTP request is directed to 1.2.3.4:80.
4. Your LTM checks for a profile, the HTTP profile is enabled, the redirect is applied, and the user request is redirected with a 301 response code.
5. The external client receives the 301 response code and their browser makes a new request to http://www.example.com.
6. Internet DNS resolution takes place and maps www.example.com to IN CNAME example.lb.example.com.
7. Internet DNS resolution continues, mapping example.lb.example.com to your GTM-configured Wide IP.
8. The Wide IP load balances the request to one of the pools based on the configured logic: Round Robin, Global Availability, Topology, or Ratio (we chose "Topology" for our solution).
9. The GTM-configured pool contains a CNAME to either the us_east or us_west AWS data center.
10. Internet DNS resolution takes place, mapping the request to the ELB hostname (i.e., us-west.elb.amazonaws.com), and returns two A records.
11. The external client's HTTP request is mapped to one of the returned IP addresses.

And, there you have it. With this solution, you can integrate AWS using your existing LTM and GTM technology! I hope this helps, and I hope you can implement this and other solutions using all the flexibility and power of your F5 technology.
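You can watch the whole resolution chain from any client with dig. The output below is illustrative only (the TTLs and the 203.0.113.x addresses are made-up documentation values), but the CNAME-to-CNAME-to-A pattern is what you should expect to see:

  $ dig +noall +answer www.example.com
  www.example.com.            300  IN  CNAME  example.lb.example.com.
  example.lb.example.com.     30   IN  CNAME  us-west.elb.amazonaws.com.
  us-west.elb.amazonaws.com.  60   IN  A      203.0.113.10
  us-west.elb.amazonaws.com.  60   IN  A      203.0.113.11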
Customer-driven Site Deployment Using AWS and F5 Distributed Cloud Terraform Modules

Introduction and Problem Scope

F5 Distributed Cloud Mesh's Secure Networking provides connectivity and security services for your applications running on the Edge, Private Clouds, or Public Clouds. This simplifies the deployment and configuration of connectivity and security services for your Multi-Cloud and Edge Cloud deployment needs across heterogeneous environments. F5 Distributed Cloud Services leverages the "Site" construct to deploy our Secure Mesh or AppStack Site instances to manage workloads. A Site could be a customer location like AWS, Azure, GCP (Google Cloud Platform), private cloud, or an edge site. To run F5 Distributed Cloud Services, the site needs to be deployed with one or more instances of F5 Distributed Cloud Node, a software appliance that is managed by F5 Distributed Cloud Console. This site is where customer applications and F5 Distributed Cloud services are running. To deploy a Node, different options are available.

Customer deployment topology description

We will explain the above steps in the context of a greenfield deployment, the Terraform scripts for which are available here. The corresponding logical topology view of this deployment is shown in Fig. 2. This deployment scenario instantiates the following resources:

Single-node CE cluster
AWS SLO interface
AWS VPC
AWS SLO interface subnet
AWS route tables
AWS Internet Gateway
Assign AWS EIP to SLO

The objective of this deployment is to create a Site with a single CE node in a new VPC for the provided AWS region and availability zone. The CE will be created as an AWS EC2 instance. An AWS subnet is created within the VPC. The CE Site Local Outside (SLO) interface will be attached to the VPC subnet and the created EC2 instance. SLO is a logical interface of a site (CE node) through which reachability is achieved to external destinations (e.g., the Internet or other services outside the public cloud site). To enable reachability to the Internet, the default route of the CE node will point to the AWS Internet Gateway. Also, the SLO will be configured with an AWS external IP address (Elastic IP).

Fig. 2. Customer Deployment Topology in AWS

List of Terraform input parameters provided in the vars file

Parameters must be customized to adapt to the customer environment. The parameters defined in "terraform.tfvars" are shown in the table below.

owner: Identifies the email of the IT manager used to authenticate to the AWS system
project_prefix: Prefix that will be used to identify the resource objects in AWS and XC
project_suffix: Suffix that will be used to identify the site's resources in AWS and XC
ssh_public_key_file: Local file system path to the SSH public key file
f5xc_tenant: Full F5 XC tenant name
f5xc_api_url: F5 XC API URL
f5xc_cluster_name: Name of the cluster
f5xc_api_p12_file: Local file system path to the api_cert_file (downloaded from the XC Console)
aws_region: AWS region for the XC Site
aws_existing_vpc_id: Existing VPC ID (brownfield)
aws_vpc_cidr_block: CIDR block of the VPC
aws_availability_zone: AWS availability zone (a)
aws_vpc_slo_subnet_node0: AWS subnet in the VPC for the SLO subnet
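As a reference, a populated terraform.tfvars might look like the sketch below. Every value shown is a placeholder you would replace with your own environment's details (the tenant name, API URL, key paths, and CIDRs here are all invented):

  owner                    = "it-manager@example.com"
  project_prefix           = "acme"
  project_suffix           = "01"
  ssh_public_key_file      = "~/.ssh/id_rsa.pub"
  f5xc_tenant              = "acme-corp"
  f5xc_api_url             = "https://acme-corp.console.ves.volterra.io/api"
  f5xc_cluster_name        = "aws-ce-site"
  f5xc_api_p12_file        = "~/creds/acme-corp.api-creds.p12"
  aws_region               = "us-east-1"
  aws_vpc_cidr_block       = "10.10.0.0/16"
  aws_availability_zone    = "a"
  aws_vpc_slo_subnet_node0 = "10.10.1.0/24"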
Configuring other environment variables

Export the following environment variables in the working shell, setting them to the customer's deployment context:

AWS_ACCESS_KEY: AWS access key for authentication
AWS_SECRET_ACCESS_KEY: AWS secret key for authentication
VES_P12_PASSWORD: XC P12 password from the Console
TF_VAR_f5xc_api_p12_cert_password: Same as VES_P12_PASSWORD
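A minimal sketch of the exports (the key values are obviously placeholders, and the P12 password is whatever you set when downloading the API certificate from the XC Console):

  export AWS_ACCESS_KEY="AKIAXXXXXXXXXXXXXXXX"
  export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  export VES_P12_PASSWORD="your-p12-password"
  export TF_VAR_f5xc_api_p12_cert_password="$VES_P12_PASSWORD"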
Deploy Topology

Deploy the topology with:

  terraform init
  terraform plan
  terraform apply -auto-approve

Then monitor the status of the Sites on the F5 Distributed Cloud Services Console. The created site object will be available in the Secure Mesh Site section of the F5 Distributed Cloud Services Console.

Video-based description of the deployment scenario

This demonstration video shows the procedure for provisioning the deployment topology described above in three steps.

References

https://docs.cloud.f5.com/docs-v2/platform/services/mesh/secure-networking
https://docs.cloud.f5.com/docs-v2/platform/concepts/site
https://docs.cloud.f5.com/docs-v2/multi-cloud-network-connect/how-to/site-management
https://docs.cloud.f5.com/docs-v2/multi-cloud-network-connect/how-to/site-management/deploy-aws-site-terraform
https://docs.cloud.f5.com/docs-v2/multi-cloud-network-connect/troubleshooting/troubleshoot-manual-ce-deployment-registration-issues

Integrate BIG-IP with AWS CloudWAN Service Insertion

AWS Cloud WAN is being adopted by many organizations, and it is critical to secure traffic that traverses this service. By using F5 security solutions with AWS Cloud WAN service insertion, you can enjoy the networking benefits of AWS Cloud WAN while providing the security, control, and visibility your organization requires.

F5 High Availability - Public Cloud Guidance
This article will provide information about BIG-IP and NGINX high availability (HA) topics that should be considered when leveraging the public cloud. There are differences between on-prem and public cloud, such as cloud provider L2 networking. These differences lead to challenges in how you address HA, failover time, peer setup, scaling options, and application state.

Topics covered:

Discuss and Define HA
Importance of Application Behavior and Traffic Sizing
HA Capabilities of BIG-IP and NGINX
Various HA Deployment Options (Active/Active, Active/Standby, auto scale)
Example Customer Scenario

What is High Availability?

High availability can mean many things to different people. Depending on the application and traffic requirements, HA requires dual data paths, redundant storage, redundant power, and compute. It means the ability to survive a failure, maintenance windows should be seamless to the user, and the user experience should never suffer...ever!

Reference: https://en.wikipedia.org/wiki/High_availability

So what should HA provide?

Synchronization of configuration data to peers (ex. config objects)
Synchronization of application session state (ex. persistence records)
Enable traffic to fail over to a peer
Locally, allow clusters of devices to act and appear as one unit
Globally, distribute traffic via DNS and routing

Importance of Application Behavior and Traffic Sizing

Let's look at a common use case: "gaming app, lots of persistent connections, client needs to hit the same backend throughout the entire game session."

Session State

The requirement for session state is common across applications, using methods like HTTP cookies, F5 iRule persistence, JSESSIONID, IP affinity, or hash. The session type used by the application can help you decide what migration path is right for you. Is this an app more fitting for a lift-n-shift approach...Rehost? Can the app be redesigned to take advantage of all native IaaS and PaaS technologies...Refactor?

Reference: 6 R's of a Cloud Migration

Application session state allows the user to have a consistent and reliable experience
Auto scaling L7 proxies (BIG-IP or NGINX) keep track of session state
BIG-IP can only mirror session state to the next device in the cluster
NGINX can mirror state to all devices in the cluster (via zone sync)

Traffic Sizing

The cloud provider does a great job with things like scaling, but there are still cloud provider limits that affect sizing and machine instance types to keep in mind. BIG-IP and NGINX are considered network virtual appliances (NVA). They carry quota limits like other cloud objects.

Google GCP VPC Resource Limits
Azure VM Flow Limits
AWS Instance Types

Unfortunately, not all limits are documented. Key metrics for L7 proxies are typically SSL stats, throughput, connection type, and connection count. Collecting these application and traffic metrics can help identify the correct instance type. We have a list of the F5 supported BIG-IP VE platforms on F5 CloudDocs. A few commands for gathering these metrics on BIG-IP are sketched below.
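If you are not sure what your current BIG-IP devices are doing, tmsh can surface most of the key metrics mentioned above. A quick sketch (exact output varies by version, and "clientssl" here is just the default client-ssl profile name; substitute your own):

  # Overall throughput and connection counts
  tmsh show sys performance all-stats
  # TLS transaction stats for a given client-ssl profile
  tmsh show ltm profile client-ssl clientssl
  # Per-TMM traffic counters
  tmsh show sys tmm-traffic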
F5 Products and HA Capabilities

BIG-IP HA Capabilities

BIG-IP supports the following HA cluster configurations:

Active/Active - all devices processing traffic
Active/Standby - one device processes traffic, others wait in standby
Configuration sync to all devices in the cluster
L3/L4 connection sharing to the next device in the cluster (ex. avoids re-login)
L5-L7 state sharing to the next device in the cluster (ex. IP persistence, SSL persistence, iRule UIE persistence)

Reference: BIG-IP High Availability Docs

NGINX HA Capabilities

NGINX supports the following HA cluster configurations:

Active/Active - all devices processing traffic
Active/Standby - one device processes traffic, others wait in standby
Configuration sync to all devices in the cluster
Mirroring connections at L3/L4 not available
Mirroring session state to ALL devices in the cluster using the Zone Synchronization Module (NGINX Plus R15)

Reference: NGINX High Availability Docs

HA Methods for BIG-IP

In the following sections, I will illustrate 3 common deployment configurations for BIG-IP in public cloud:

HA for BIG-IP Design #1 - Active/Standby via API
HA for BIG-IP Design #2 - A/A or A/S via LB
HA for BIG-IP Design #3 - Regional Failover (multi region)

HA for BIG-IP Design #1 - Active/Standby via API (multi AZ)

This failover method uses API calls to communicate with the cloud provider and move objects (IP addresses, routes, etc.) during failover events. The F5 Cloud Failover Extension (CFE) for BIG-IP is used to declaratively configure the HA settings.

Cloud provider load balancer is NOT required
Failover time can be SLOW!
Only one device actively used (the other device sits idle)
Failover uses API calls to move cloud objects; times vary (see CFE Performance and Sizing)

Key findings:

Google API failover times depend on the number of forwarding rules
Azure API is slow to disassociate/associate IPs to NICs (remapping)
Azure API is fast when updating routes (UDR, user defined routes)
AWS is reliable with API calls regarding IP moves and routes

Recommendations:

This design with multi AZ is preferred over single AZ
Recommended when a "traditional" HA cluster is required, or for Lift-n-Shift...Rehost
For Azure (based on my testing): recommend using Azure UDR versus IP failover when possible; look at the "failover via LB" example instead for Azure; if the API method is required, look at DNS solutions to provide further redundancy

HA for BIG-IP Design #2 - A/A or A/S via LB (multi AZ)

Cloud LB health checks the BIG-IP for up/down status
Faster failover times (depends on cloud LB health timers)
Cloud LB allows A/A or A/S

Key difference:

Increased network/compute redundancy
Cloud load balancer required

Recommendations:

Use "failover via LB" if you require faster failover times
For Google (based on my testing): recommend against "via LB" for IPSEC traffic (not supported by Google LB); if load balancing IPSEC, then use the "via API" or "via DNS" failover methods

HA for BIG-IP Design #3 - Regional Failover via DNS (multi AZ, multi region)

BIG-IP VE active/active in multiple regions
Traffic distributed to VEs by DNS/GSLB
DNS/GSLB intelligent health checks for the VEs

Key difference:

Cloud LB is not required
DNS logic required by clients
Orchestration required to manage configs across each BIG-IP
BIG-IP standalone devices (no DSC cluster limitations)

Recommendations:

Good for apps that handle DNS resolution well upon failover events
Recommended when the cloud LB cannot handle a particular protocol
Recommended when the customer is already using DNS to direct traffic
Recommended for applications that have been refactored to handle session state outside of BIG-IP
Recommended for customers with the in-house skill set to orchestrate (Ansible, Terraform, etc.)

HA Methods for NGINX

In the following sections, I will illustrate 2 common deployment configurations for NGINX in public cloud:
HA for NGINX Design #1 - Active/Standby via API
HA for NGINX Design #2 - Auto Scale Active/Active via LB

HA for NGINX Design #1 - Active/Standby via API (multi AZ)

NGINX Plus required
Cloud provider load balancer is NOT required
Only one device actively used (the other device sits idle)
Only available in AWS currently

Recommendations:

Recommended when a "traditional" HA cluster is required, or for Lift-n-Shift...Rehost

Reference: Active-Passive HA for NGINX Plus on AWS

HA for NGINX Design #2 - Auto Scale Active/Active via LB (multi AZ)

NGINX Plus required
Cloud LB health checks the NGINX instances
Faster failover times

Key difference:

Increased network/compute redundancy
Cloud load balancer required

Recommendations:

Recommended for apps fitting a migration type of Replatform or Refactor

Reference: Active-Active HA for NGINX Plus on AWS, Active-Active HA for NGINX Plus on Google Cloud Platform

Pros & Cons: Public Cloud Scaling Options

Review this handy table to understand the high-level pros and cons of each deployment method.

Example Customer Scenario #1

As a means to make this topic a little more real, here is a common customer scenario that shows you the decisions that go into moving an application to the public cloud. Sometimes it's as easy as a lift-n-shift; other times you might need to do a little more work. In general, public cloud is not on-prem, and things might need some tweaking. Hopefully this example will give you some pointers and guidance on your next app migration to the cloud.

Current setup:

Gaming applications
F5 hardware BIG-IP VIPRIONs on-prem
Two data centers for HA redundancy
iRule-heavy configuration (TLS encryption/decryption, payload inspections)
Session persistence = iRule Universal Persistence (UIE), and other methods
Biggest app: 15K SSL TPS, 15 Gbps throughput, 2 million concurrent connections, 300K HTTP req/sec (L7 with TLS)

Requirements for a successful cloud migration:

Support current traffic numbers
Support future target traffic growth
Must run in multiple geographic regions
Maintain session state
Must retain all iRules in use

Recommended design for cloud phase #1:

Migration type: hybrid model, on-prem + cloud, and some Rehost
Platform: BIG-IP (retaining iRules means BIG-IP is required)
Licensing: High Performance BIG-IP; unlocks additional CPU cores past 8 (up to 24) for extra traffic and SSL processing
Instance type: check the F5 supported BIG-IP VE platforms for accelerated networking (10Gb+)
HA method: Active/Standby and multi-region with DNS; iRule Universal persistence only mirrors to the next device, so keep the cluster size to 2; scale horizontally via additional HA clusters and DNS; clients pinned to a region via DNS (on-prem or public cloud); inside a region, the local proxy cluster shares state

This example comes up in customer conversations often. Based on customer requirements, in-house skill set, current operational model, and time frames, there is one option that is better than the rest. A second design phase lends itself to more of a Replatform or Refactor migration type. In that case, more options can be leveraged to take advantage of cloud-native features. For example, changing the application persistence type from iRule UIE to cookie would allow BIG-IP to avoid keeping track of state. Why? With cookies, the client keeps track of the session state: the client receives a cookie, passes the cookie to the L7 proxy on successive requests, and the proxy checks the cookie value and sends the request to the backend pool member. The requirement for the L7 proxy to share session state is now removed.
Example Customer Scenario #2

Here is another customer scenario. This time the application is a full suite of multimedia content. In contrast to the first scenario, this one will illustrate the benefits of rearchitecting various components, allowing greater flexibility when leveraging the cloud. You still must factor in-house skill set, project time frames, and other important business (and application) requirements when deciding on the best migration type.

Current setup:

Multimedia (gaming, movie, TV, music) platform
BIG-IP VIPRIONs using vCMP on-prem
Two data centers for HA redundancy
iRule heavy (security, traffic manipulation, performance)
Biggest app: OAuth + Cassandra for token storage (entitlements)

Requirements for a successful cloud migration:

Support current traffic numbers
Elastic auto scale for seasonal growth (ex. holidays)
VPC peering with partners (must also bypass the web application firewall)
Must support current or similar traffic manipulation in the data plane
Compatibility with existing tooling used by the business

Recommended design for cloud phase #1:

Migration type: Repurchase, migrating BIG-IP to NGINX Plus
Platform: NGINX; iRules converted to JS or Lua
Licensing: NGINX Plus
Modules: GeoIP, Lua, JavaScript
HA method: N+1 auto scaling via native LB, active health checks

This is a great example of a Repurchase, in which application characteristics allow the various teams to explore alternative cloud migration approaches. In this scenario, it describes a phase-one migration of converting BIG-IP devices to NGINX Plus devices. This example assumes the BIG-IP configurations can be somewhat easily converted to NGINX Plus, and it also assumes there is available skill set and project time allocated to properly rearchitect the application where needed.

Summary

OK! Brains are expanding...hopefully? We learned about high availability and what that means for applications and user experience. We touched on the importance of application behavior and traffic sizing. Then we explored the various F5 products, how they handle HA, and HA designs. These recommendations are based on my own lab testing and interactions with customers. Every scenario will carry its own requirements, and all options should be carefully considered when leveraging the public cloud. Finally, we looked at a customer scenario, discussed requirements, and proposed a design. Fun!

Resources

Read the following articles for more guidance specific to the various cloud providers.

Advanced Topologies and More on Highly Available Services
Lightboard Lessons - BIG-IP Deployments in Azure
Google and BIG-IP
Failing Faster in the Cloud
BIG-IP VE on Public Cloud
High-Availability Load Balancing with NGINX Plus on Google Cloud Platform
Using AWS Quick Starts to Deploy NGINX Plus
NGINX on Azure
Building an OpenSSL Certificate Authority - Creating Your Root Certificate

Creating Your Root Certificate Authority

In our previous article, Introduction and Design Considerations for Elliptical Curves, we covered the design requirements to create a two-tier ECC certificate authority based on NSA Suite B's PKI requirements. We can now begin creating our CA's root configuration. Creating the root CA requires us to generate a certificate and private key; since this is the first certificate we're creating, it will be self-signed. The root CA will not sign client and server certificates; its job is only to create intermediary certificates and act as the root of our chain of trust. This is standard practice across public and private PKI configurations, and so too should it be in your lab environments.

Create Your Directory Structure

Create a directory to store your root CA pair and config files:

  # sudo bash
  # mkdir /root/ca

Yep, I did that. This is for a test lab and permissions may not match real world requirements. I sudoed into bash and created everything under root; aka playing with fire. This affects ownership down the line if you chmod private key files and directories to user access only, so determine for yourself what user/permission will be accessing files for certificate creation. I have a small team and trust them with root within a lab environment (snapshots allow me to be this trusting).

Create your CA database to keep track of signed certificates:

  # cd /root/ca
  # mkdir private certs crl
  # touch index.txt
  # echo 1000 > serial

We begin by creating a working root directory with subdirectories for the various files we'll be creating. This will allow you to apply your preferred security practices should you choose to do so. Since this is a test lab and I am operating as root, I won't be chmod'ing anything today.

Create Your OpenSSL Config File

OpenSSL uses configuration files to simplify/template the components of a certificate. Copy the GIST openssl_root.cnf file to /root/ca/openssl_root.cnf; it is already prepared for this demo. For the root CA certificate creation, the [ ca ] section is required and will gather its configuration from the [ CA_default ] section.

  [ ca ]
  # `man ca`
  default_ca = CA_default

The [ CA_default ] section in the openssl_root.cnf file contains the variables OpenSSL will use for the root CA. If you're using alternate directory names from this demo, update the file accordingly. Note the long values for default days (10 years), as we don't care about renewing the root certificate anytime soon.

  [ CA_default ]
  # Directory and file locations.
  dir               = /root/ca
  certs             = $dir/certs
  crl_dir           = $dir/crl
  new_certs_dir     = $dir/certs
  database          = $dir/index.txt
  serial            = $dir/serial
  RANDFILE          = $dir/private/.rand
  # The root key and root certificate.
  private_key       = $dir/private/ca.cheese.key.pem
  certificate       = $dir/certs/ca.cheese.crt.pem
  # For certificate revocation lists.
  crlnumber         = $dir/crlnumber
  crl               = $dir/crl/ca.cheese.crl.pem
  crl_extensions    = crl_ext
  default_crl_days  = 3650
  # SHA-1 is deprecated, so use SHA-2 or SHA-3 instead.
  default_md        = sha384
  name_opt          = ca_default
  cert_opt          = ca_default
  default_days      = 3650
  preserve          = no
  policy            = policy_strict

For the root CA, we define [ policy_strict ], which will later force the intermediary's certificate to match country, state/province, and organization name fields.

  [ policy_strict ]
  # The root CA should only sign intermediate certificates that match.
  # See POLICY FORMAT section of `man ca`.
  countryName             = match
  stateOrProvinceName     = match
  organizationName        = match
  organizationalUnitName  = optional
  commonName              = supplied
  emailAddress            = optional

The [ req ] section is used for OpenSSL certificate requests. Some of the values listed will not be used, since we are manually specifying them during certificate creation.

  [ req ]
  # Options for the `req` tool (`man req`).
  default_bits        = 4096
  distinguished_name  = req_distinguished_name
  string_mask         = utf8only
  # SHA-1 is deprecated, please use SHA-2 or greater instead.
  default_md          = sha384
  # Extension to add when the -x509 option is used.
  x509_extensions     = v3_ca

I pre-populate the [ req_distinguished_name ] section with values I'll commonly use to save typing down the road.

  [ req_distinguished_name ]
  countryName             = Country Name (2 letter code)
  stateOrProvinceName     = State or Province Name
  localityName            = Locality Name
  0.organizationName      = Organization Name
  organizationalUnitName  = Organizational Unit Name
  commonName              = Common Name
  emailAddress            = Email Address
  # Optionally, specify some defaults.
  countryName_default             = US
  stateOrProvinceName_default     = WA
  localityName_default            = Seattle
  0.organizationName_default      = Grilled Cheese Inc.
  organizationalUnitName_default  = Grilled Cheese Root CA
  emailAddress_default            = grilledcheese@yummyinmytummy.us

The [ v3_ca ] section further defines the Suite B PKI requirements, namely basicConstraints and acceptable keyUsage values for a CA certificate. This section will be used when creating the root CA's certificate.

  [ v3_ca ]
  # Extensions for a typical CA (`man x509v3_config`).
  subjectKeyIdentifier    = hash
  authorityKeyIdentifier  = keyid:always,issuer
  basicConstraints        = critical, CA:true
  keyUsage                = critical, digitalSignature, cRLSign, keyCertSign

Selecting the Suite B compliant elliptical curve

We're creating a Suite B infrastructure, so we'll need to pick an acceptable curve following P-256 or P-384. To do this, run the following OpenSSL command:

  openssl ecparam -list_curves

This will give you a long list of options, but which one to pick? Let's isolate the curves within the 256 and 384 bit prime fields; we can grep the results for easier curve identification:

  openssl ecparam -list_curves | grep '256\|384'

And we get the following results (your results may vary depending on the version of OpenSSL running):

  # openssl ecparam -list_curves | grep '256\|384'
  secp256k1 : SECG curve over a 256 bit prime field
  secp384r1 : NIST/SECG curve over a 384 bit prime field
  prime256v1: X9.62/SECG curve over a 256 bit prime field
  brainpoolP256r1: RFC 5639 curve over a 256 bit prime field
  brainpoolP256t1: RFC 5639 curve over a 256 bit prime field
  brainpoolP384r1: RFC 5639 curve over a 384 bit prime field
  brainpoolP384t1: RFC 5639 curve over a 384 bit prime field

I am going to use secp384r1 as my curve of choice. It's worth mentioning that RFC 5480 notes secp256r1 (not listed) is referred to as prime256v1 for this output's purpose. Why not use something larger than 384? Thank Google. People absolutely were using secp521r1, then Google dropped support for it (read Chromium Bug 478225 for more). The theory is that since NSA Suite B PKI did not explicitly call out anything besides 256 or 384, the Chromium team quietly decided it wasn't needed and dropped support for it. Yea... it kinda annoyed a few people. So to avoid future browser issues, we're sticking with what's defined in public standards.
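If you want to see the parameters behind the curve you picked, OpenSSL can print them (this is just an inspection step and changes nothing; add -param_enc explicit to dump the full field parameters rather than just the named-curve OID):

  # openssl ecparam -name secp384r1 -text -noout
  # openssl ecparam -name secp384r1 -param_enc explicit -text -noout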
Create the Root CA's Private Key

Using the name defined in the openssl_root.cnf's private_key value and our selected secp384r1 ECC curve, we will create and encrypt the root certificate's private key:

  # openssl ecparam -genkey -name secp384r1 | openssl ec -aes256 -out private/ca.cheese.key.pem
  read EC key
  writing EC key
  Enter PEM pass phrase: ******
  Verifying - Enter PEM pass phrase: ******

Note: The ecparam function within OpenSSL does not encrypt the private key the way genrsa/gendsa/gendh do. Instead, we combined the private key creation (openssl ecparam) with a secondary encryption command (openssl ec) to encrypt the private key before it is written to disk. Keep the password safe.

Create the Root CA's Certificate

Using the new private key, we can now generate our root's self-signed certificate. We do this because the root has no authority above it to request trust from; it is the absolute source of authority in our certificate chain. (Note the -days 3650 option, which the validity check below depends on.)

  # openssl req -config openssl_root.cnf -new -x509 -days 3650 -sha384 -extensions v3_ca -key private/ca.cheese.key.pem -out certs/ca.cheese.crt.pem
  Enter pass phrase for private/ca.cheese.key.pem: ******
  You are about to be asked to enter information that will be incorporated
  into your certificate request.
  What you are about to enter is what is called a Distinguished Name or a DN.
  There are quite a few fields but you can leave some blank
  For some fields there will be a default value,
  If you enter '.', the field will be left blank.
  -----
  Country Name (2 letter code) [US]:
  State or Province Name [WA]:
  Locality Name [Seattle]:
  Organization Name [Grilled Cheese Inc.]:
  Organizational Unit Name [Grilled Cheese Root CA]:
  Common Name []:Grilled Cheese Root Certificate Authority
  Email Address [grilledcheese@yummyinmytummy.us]:

Using OpenSSL we can validate the certificate contents to ensure we're following the NSA Suite B requirements:

  # openssl x509 -noout -text -in certs/ca.cheese.crt.pem
  Certificate:
      Data:
          Version: 3 (0x2)
          Serial Number: ff:bd:f5:2f:c5:0d:3d:02
          Signature Algorithm: ecdsa-with-SHA384
          Issuer: C = US, ST = WA, L = Seattle, O = Grilled Cheese Inc., OU = Grilled Cheese Root CA, CN = Grilled Cheese Inc. Root Certificate Authority, emailAddress = grilledcheese@yummyinmytummy.us
          Validity
              Not Before: Aug 22 23:53:05 2017 GMT
              Not After : Aug 20 23:53:05 2027 GMT
          Subject: C = US, ST = WA, L = Seattle, O = Grilled Cheese Inc., OU = Grilled Cheese Root CA, CN = Grilled Cheese Inc. Root Certificate Authority, emailAddress = grilledcheese@yummyinmytummy.us
          Subject Public Key Info:
              Public Key Algorithm: id-ecPublicKey
                  Public-Key: (384 bit)
                  pub:
                      04:a6:b7:eb:8b:9f:fc:95:03:02:20:ea:64:7f:13:
                      ea:b7:75:9b:cd:5e:43:ca:19:70:17:e2:0a:26:79:
                      0a:23:2f:20:de:02:2d:7c:8f:62:6b:74:7d:82:fe:
                      04:08:38:77:b7:8c:e0:e4:2b:27:0f:47:01:64:38:
                      cb:15:a8:71:43:b2:d9:ff:ea:0e:d1:c8:f4:8f:99:
                      d3:8e:2b:c1:90:d6:77:ab:0b:31:dd:78:d3:ce:96:
                      b1:a0:c0:1c:b0:31:39
                  ASN1 OID: secp384r1
                  NIST CURVE: P-384
          X509v3 extensions:
              X509v3 Subject Key Identifier:
                  27:C8:F7:34:2F:30:81:97:DE:2E:FC:DD:E2:1D:FD:B6:8F:5A:AF:BB
              X509v3 Authority Key Identifier:
                  keyid:27:C8:F7:34:2F:30:81:97:DE:2E:FC:DD:E2:1D:FD:B6:8F:5A:AF:BB
              X509v3 Basic Constraints: critical
                  CA:TRUE
              X509v3 Key Usage: critical
                  Digital Signature, Certificate Sign, CRL Sign
      Signature Algorithm: ecdsa-with-SHA384
           30:65:02:30:77:a1:f9:e2:ab:3a:5a:4b:ce:8d:6a:2e:30:3f:
           01:cf:8e:76:dd:f6:1f:03:d9:b3:5c:a1:3d:6d:36:04:fb:01:
           f7:33:27:03:85:de:24:56:17:c9:1a:e4:3b:35:c4:a8:02:31:
           00:cd:0e:6c:e0:d5:26:d3:fb:88:56:fa:67:9f:e9:be:b4:8f:
           94:1c:2c:b7:74:19:ce:ec:15:d2:fe:48:93:0a:5f:ff:eb:b2:
           d3:ae:5a:68:87:dc:c9:2c:54:8d:04:68:7f

Reviewing the above, we can verify the certificate details:

The Suite B signature algorithm: ecdsa-with-SHA384
The certificate date validity when we specified -days 3650: Not Before: Aug 22 23:53:05 2017 GMT, Not After: Aug 20 23:53:05 2027 GMT
The public key bit length: (384 bit)
The Issuer we defined in the openssl_root.cnf: C = US, ST = WA, L = Seattle, O = Grilled Cheese Inc., OU = Grilled Cheese Root CA, CN = Grilled Cheese Inc. Root Certificate Authority
The certificate Subject, which, since this is self-signed, refers back to itself: Subject: C = US, ST = WA, L = Seattle, O = Grilled Cheese Inc., OU = Grilled Cheese Root CA, CN = Grilled Cheese Inc. Root Certificate Authority
The elliptical curve used when we created the private key: NIST CURVE: P-384
The X.509 v3 extensions we defined within the openssl_root.cnf for Suite B CA use: X509v3 Basic Constraints: critical, CA:TRUE; X509v3 Key Usage: critical, Digital Signature, Certificate Sign, CRL Sign

The root certificate and private key are now complete, and we have the first part of our CA done. Step 1 complete! In our next article we will create the intermediary certificate to complete the chain of trust in our two-tier hierarchy.
Building an OpenSSL Certificate Authority - Creating Your Intermediary Certificate

Creating Your Intermediary Certificate Authority

Previously we created the first part of our OpenSSL CA by building our root certificate. We are now ready to complete our CA chain by creating and signing the intermediary certificate. The intermediary will be responsible for signing client and server certificate requests. It acts as an authoritative proxy for the root certificate, hence the name intermediary. The chain of trust will extend from the root certificate to the intermediary certificate, and down to the certificates you'll deploy within your infrastructure.

Create your directory structure

Create a new subdirectory under /root/ca to segregate intermediary files from our root configuration:

  # sudo bash
  # mkdir /root/ca/intermediate

We're creating the same directory structure previously used under /root/ca within /root/ca/intermediate. It's your decision if you want to do something different. Some of my best friends are flat directory structures and we don't judge personal practices.

Create your intermediary CA database to keep track of signed certificates:

  # cd /root/ca/intermediate
  # mkdir certs crl csr private
  # touch index.txt
  # echo 1000 > serial

Create a crlnumber file for the intermediary CA to use:

  # echo 1000 > /root/ca/intermediate/crlnumber

Similar to the earlier serial statement, this will create the crlnumber file and start the numerical iteration at 1000. This will be used for future certificate revocation needs.

Create your OpenSSL intermediary config file

Copy the GIST openssl_intermediate.cnf file to /root/ca/intermediate/openssl_intermediate.cnf and modify the contents for your own naming conventions. Similar to the openssl_root.cnf, the [ ca ] section is required and will gather its configuration from the [ CA_default ] section. Changes in the intermediary config include:

  [ CA_default ]
  # Directory and file locations.
  dir            = /root/ca/intermediate
  private_key    = $dir/private/int.cheese.key.pem
  certificate    = $dir/certs/int.cheese.crt.pem
  crlnumber      = $dir/crlnumber
  crl            = $dir/crl/int.cheese.crl.pem
  crl_extensions = crl_ext
  policy         = policy_loose

We have new certificate names for the intermediary's use, and we define policy_loose so future certificate requests don't have to match country, state/province, or organization.

Create the Intermediary's Private Key and Certificate Signing Request

Similar to the root certificate, we're following the NSA Suite B requirements and matching the root's elliptical curve, secp384r1. We'll also create the CSR and private key all in one line, making your scripts and life a bit easier:

  # cd /root/ca
  # openssl req -config intermediate/openssl_intermediate.cnf -new -newkey ec:<(openssl ecparam -name secp384r1) -keyout intermediate/private/int.cheese.key.pem -out intermediate/csr/int.cheese.csr
  Generating an EC private key
  writing new private key to 'intermediate/private/int.cheese.key.pem'
  Enter PEM pass phrase: ******
  Verifying - Enter PEM pass phrase: ******
  -----
  You are about to be asked to enter information that will be incorporated
  into your certificate request.
  What you are about to enter is what is called a Distinguished Name or a DN.
  There are quite a few fields but you can leave some blank
  For some fields there will be a default value,
  If you enter '.', the field will be left blank.
  -----
  Country Name (2 letter code) [US]:
  State or Province Name [WA]:
  Locality Name [Seattle]:
  Organization Name [Grilled Cheese Inc.]:
  Organizational Unit Name [Grilled Cheese Intermediary CA]:
  Common Name []:Grilled Cheese Inc. Intermediary Certificate Authority
  Email Address [grilledcheese@yummyinmytummy.us]:
Sign the certificate request with the root certificate, and use the openssl_intermediate.cnf config file to specify the [ v3_intermediate_ca ] extension instead of the [ v3_ca ] we used for the root. The openssl_intermediate.cnf has a few changes which we need to note:

  [ v3_intermediate_ca ]
  # Extensions for a typical intermediate CA (`man x509v3_config`).
  subjectKeyIdentifier    = hash
  authorityKeyIdentifier  = keyid:always,issuer
  basicConstraints        = critical, CA:true, pathlen:0
  keyUsage                = critical, digitalSignature, cRLSign, keyCertSign
  crlDistributionPoints   = @crl_info
  authorityInfoAccess     = @ocsp_info

  [crl_info]
  URI.0 = http://crl.grilledcheese.us/whomovedmycheese.crl

  [ocsp_info]
  caIssuers;URI.0 = http://ocsp.grilledcheese.us/cheddarcheeseroot.crt
  OCSP;URI.0 = http://ocsp.grilledcheese.us/

The Certificate Revocation List (CRL) and Online Certificate Status Protocol (OCSP) endpoints should be included within the intermediary certificate. These let systems know where to check whether the intermediary certificate has been revoked by the root at any given time. We will cover this in detail later. Browsers do not necessarily check intermediary certificates for revocation, but they absolutely do for site certificates; we're adding CRL and OCSP to the intermediary CA as a best practice.

Create the intermediate certificate

Sign the csr/int.cheese.csr with the root's certificate. We are going to drop down to /root/ca so the creation of the intermediary certificate is stored within the root's index.txt, and we'll also use the root's OpenSSL config file, openssl_root.cnf:

  # openssl ca -config openssl_root.cnf -extensions v3_intermediate_ca -days 3600 -md sha384 -in intermediate/csr/int.cheese.csr -out intermediate/certs/int.cheese.crt.pem
  Using configuration from openssl_root.cnf
  Enter pass phrase for /root/ca/private/ca.cheese.key.pem:
  Check that the request matches the signature
  Signature ok
  Certificate Details:
      Serial Number: 4097 (0x1001)
      Validity
          Not Before: Aug 24 21:51:07 2017 GMT
          Not After : Jul 3 21:51:07 2027 GMT
      Subject:
          countryName            = US
          stateOrProvinceName    = WA
          organizationName       = Grilled Cheese Inc.
          organizationalUnitName = Grilled Cheese Intermediary CA
          commonName             = Grilled Cheese Inc. Intermediary Certificate Authority
          emailAddress           = grilledcheese@yummyinmytummy.us
      X509v3 extensions:
          X509v3 Subject Key Identifier:
              7E:2D:A5:D0:9B:70:B9:E3:D2:F7:C0:0A:CF:70:9A:8B:80:38:B1:CD
          X509v3 Authority Key Identifier:
              keyid:27:C8:F7:34:2F:30:81:97:DE:2E:FC:DD:E2:1D:FD:B6:8F:5A:AF:BB
          X509v3 Basic Constraints: critical
              CA:TRUE, pathlen:0
          X509v3 Key Usage: critical
              Digital Signature, Certificate Sign, CRL Sign
          X509v3 CRL Distribution Points:
              Full Name:
                URI:http://crl.grilledcheese.us/whomovedmycheese.crl
          Authority Information Access:
              CA Issuers - URI:http://ocsp.grilledcheese.us/cheddarcheeseroot.crt
              OCSP - URI:http://ocsp.grilledcheese.us/
  Certificate is to be certified until Jul 3 21:51:07 2027 GMT (3600 days)
  Sign the certificate? [y/n]:y
  1 out of 1 certificate requests certified, commit? [y/n]y
  Write out database with 1 new entries
  Data Base Updated

Validate the certificate contents with OpenSSL:

  # openssl x509 -noout -text -in intermediate/certs/int.cheese.crt.pem
  Certificate:
      Data:
          Version: 3 (0x2)
          Serial Number: 4097 (0x1001)
          Signature Algorithm: ecdsa-with-SHA384
          Issuer: C = US, ST = WA, L = Seattle, O = Grilled Cheese Inc., OU = Grilled Cheese Root CA, CN = Grilled Cheese Inc. Root Certificate Authority, emailAddress = grilledcheese@yummyinmytummy.us
          Validity
              Not Before: Aug 24 21:51:07 2017 GMT
              Not After : Jul 3 21:51:07 2027 GMT
          Subject: C = US, ST = WA, O = Grilled Cheese Inc., OU = Grilled Cheese Intermediary CA, CN = Grilled Cheese Inc. Intermediary Certificate Authority, emailAddress = grilledcheese@yummyinmytummy.us
          Subject Public Key Info:
              Public Key Algorithm: id-ecPublicKey
                  Public-Key: (384 bit)
                  pub:
                      04:9b:14:9a:55:6d:db:15:7f:d7:8b:fd:37:4d:ba:
                      e8:50:8e:88:32:99:27:4e:20:36:25:8b:7b:ac:bb:
                      2f:d6:61:c1:5a:c8:e6:4c:98:20:3f:cf:86:3c:bf:
                      f4:f3:b0:1c:1c:0b:cc:7f:e4:4b:13:59:58:a1:53:
                      87:cb:4c:17:66:04:21:01:6a:44:5f:22:31:7d:3d:
                      fe:a2:e7:73:c8:77:7c:1a:f9:9c:4a:9d:e7:77:6a:
                      c7:9e:3e:f0:4a:b0:37
                  ASN1 OID: secp384r1
                  NIST CURVE: P-384
          X509v3 extensions:
              X509v3 Subject Key Identifier:
                  7E:2D:A5:D0:9B:70:B9:E3:D2:F7:C0:0A:CF:70:9A:8B:80:38:B1:CD
              X509v3 Authority Key Identifier:
                  keyid:27:C8:F7:34:2F:30:81:97:DE:2E:FC:DD:E2:1D:FD:B6:8F:5A:AF:BB
              X509v3 Basic Constraints: critical
                  CA:TRUE, pathlen:0
              X509v3 Key Usage: critical
                  Digital Signature, Certificate Sign, CRL Sign
              X509v3 CRL Distribution Points:
                  Full Name:
                    URI:http://crl.grilledcheese.us/whomovedmycheese.crl
              Authority Information Access:
                  CA Issuers - URI:http://ocsp.grilledcheese.us/cheddarcheeseroot.crt
                  OCSP - URI:http://ocsp.grilledcheese.us/
      Signature Algorithm: ecdsa-with-SHA384
           30:65:02:30:74:07:ba:fe:4b:71:78:d8:d2:7f:84:c0:50:b4:
           b6:df:6c:f6:57:f5:d9:2c:4b:e1:d4:d8:1d:78:fd:7e:bf:0a:
           81:86:bb:40:c5:9b:97:6f:83:04:5f:d3:85:36:6c:d6:02:31:
           00:d3:08:78:1c:da:6d:ef:1d:bb:27:df:0b:76:eb:ab:84:b2:
           91:04:25:1a:85:5b:d5:c3:cd:66:e4:9e:14:b2:c0:ed:9c:59:
           b7:18:c3:26:eb:df:78:13:68:47:66:b5:43

Similar to the root, we can note the usage and algorithms, but we now have the addition of:

X509v3 CRL Distribution Points: Full Name: URI:http://crl.grilledcheese.us/whomovedmycheese.crl
Authority Information Access: CA Issuers - URI:http://ocsp.grilledcheese.us/cheddarcheeseroot.crt, OCSP - URI:http://ocsp.grilledcheese.us/

Create the certificate chain

The root certificate and intermediary certificate must be available to the requesting client/server in order to validate the chain of trust. To complete the trust validation, a certificate chain must be available to the client application. A certificate chain usually takes the form of separate certificates installed into root and intermediary containers (as is the case for Windows), or bundled together either in a .pfx cert-and-chain bundle or a PEM-formatted text file. Concatenate the root and intermediate certificates together to create a PEM certificate chain text file:

  # cd /root/ca
  # cat intermediate/certs/int.cheese.crt.pem certs/ca.cheese.crt.pem > intermediate/certs/chain.cheese.crt.pem

The file should look similar to this, with two separate BEGIN and END statements for each certificate (example condensed for space):

  # cat intermediate/certs/chain.cheese.crt.pem
  -----BEGIN CERTIFICATE-----
  MIID/TCCA4OgAwIBAgICEAEwCgYIKoZIzj0EAwMwgdQxCzAJBgNVBAYTAlVTMQsw
  CQYDVQQIDAJXQTEQMA4GA1UEBwwHU2VhdHRsZTEcMBoGA1UECgwTR3JpbGxlZCBD
  ......
  hkjOPQQDAwNoADBlAjB0B7r+S3F42NJ/hMBQtLbfbPZX9dksS+HU2B14/X6/CoGG
  u0DFm5dvgwRf04U2bNYCMQDTCHgc2m3vHbsn3wt266uEspEEJRqFW9XDzWbknhSy
  wO2cWbcYwybr33gTaEdmtUM=
  -----END CERTIFICATE-----
  -----BEGIN CERTIFICATE-----
  MIIDQTCCAsegAwIBAgIJAP+99S/FDT0CMAoGCCqGSM49BAMDMIHUMQswCQYDVQQG
  EwJVUzELMAkGA1UECAwCV0ExEDAOBgNVBAcMB1NlYXR0bGUxHDAaBgNVBAoME0dy
  ......
  CgYIKoZIzj0EAwMDaAAwZQIwd6H54qs6WkvOjWouMD8Bz4523fYfA9mzXKE9bTYE
  +wH3MycDhd4kVhfJGuQ7NcSoAjEAzQ5s4NUm0/uIVvpnn+m+tI+UHCy3dBnO7BXS
  /kiTCl//67LTrlpoh9zJLFSNBGh/
  -----END CERTIFICATE-----

Note: In the real world, a hosting application should never have the entire chain available, as that defeats a core principle of PKI. In test labs it's recommended to distribute the root certificate to all testing client applications and systems, and to include only the intermediary along with the server certificate. This way the client can establish the trust between the intermediary and root certificates.
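As a quick sanity check (my own addition, not part of the original walkthrough), you can ask OpenSSL to confirm that the intermediary really does chain back to the root before you hand anything out:

  # openssl verify -CAfile certs/ca.cheese.crt.pem intermediate/certs/int.cheese.crt.pem
  intermediate/certs/int.cheese.crt.pem: OK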
Next we'll move on to creating our CRL endpoint and OCSP certificate. Our intermediary certificate is now created and signed, and we are ready to move on. To complete the CA, our next article will create our certificate revocation list (CRL) endpoint and online certificate status protocol (OCSP) certificate, allowing us to revoke certificates. Lab environments rarely need revocation functionality, but modern clients check for CRL and OCSP URIs, so it's necessary to have the configuration defined at minimum. Let's proceed.

Ingress/Egress VPC inspection with BIG-IP and GWLB

Introduction

The previous article in this series reviewed the BIG-IP and AWS Gateway Load Balancer (GWLB) integration. In this article we will focus on a deployment pattern that is used to inspect traffic in and out of a VPC using BIG-IP security services and GWLB.

Baseline: In this scenario we will focus on a single existing VPC with EC2 instances and no BIG-IP security services.

Goal: Inspect traffic in and out of the VPC, and scale the BIG-IP deployment as needed.

Deployment pattern: Considering the requirements and tools available, the deployment pattern will use the following attributes:

Separate the security devices (BIG-IPs) from the workloads they are protecting. The BIG-IPs will be deployed in their own VPC, the security VPC. We will reference the workload VPC as the "consumer" VPC, as it consumes security services.
Use routing tables in the "consumer" VPC to send all traffic to the security VPC for inspection.
Leverage transparent security services on the BIG-IP for inspection (BIG-IP security features configuration is out of scope for this article).

We can dive into each of the individual tasks:

The security VPC
Provisioning the "consumer" VPC network to send traffic through the security VPC

Security VPC: Here we deploy the BIG-IP fleet and expose it using GWLB. Some of the considerations when creating this VPC:

Deploy in the same region as the "consumer" VPC
Design based on your availability requirements: the number of availability zones (AZs) and how many BIG-IPs in each AZ

These are the actions we need to take in the provider VPC to inspect all ingress/egress traffic:

Deploy a group of BIG-IPs
Create a GWLB target group and associate the BIG-IPs with it
Create a GWLB and assign the previously created target group to the listener
Create a GWLB endpoint service and associate it with the GWLB
Configure the BIG-IPs to receive traffic over the tunnel and inspect it according to the desired policy

Diagram: The security VPC - BIG-IP fleet behind a GWLB, exposed using a GWLB service

Consumer VPC: In the consumer VPC, the BIG-IP group is abstracted by the GWLB, and it consumes the security services from the provider VPC via a new component: the GWLB endpoint. This endpoint acts as a bridge between the consumer VPC and the provider VPC. It essentially creates an ENI in one of the consumer VPC's subnets. Please note that a single endpoint belongs to a single availability zone, so design accordingly. Inspecting all ingress traffic requires the use of "ingress routing", an AWS feature that allows sending all ingress traffic from the Internet gateway to an ENI or to a GWLB endpoint. Here are the actions we need to take in the consumer VPC to inspect all ingress/egress traffic (an AWS CLI sketch of the endpoint and route steps follows this list):

Create GWLB endpoints in each relevant availability zone, attached to the GWLB endpoint service from the provider VPC
Change the ingress routing table so that all traffic in each AZ is routed to the respective GWLB endpoint
Change the subnet routing tables so that all traffic in each AZ is routed to the respective GWLB endpoint
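For reference, the endpoint and routing steps map to AWS CLI calls roughly like the following. This is a sketch only; every resource ID and the endpoint service name are placeholders you would substitute from your own deployment:

  # Create a GWLB endpoint in a consumer VPC subnet
  aws ec2 create-vpc-endpoint \
      --vpc-endpoint-type GatewayLoadBalancer \
      --vpc-id vpc-0123456789abcdef0 \
      --service-name com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0 \
      --subnet-ids subnet-0123456789abcdef0

  # Point traffic at the endpoint, e.g. a default route in a workload subnet's route table
  aws ec2 create-route \
      --route-table-id rtb-0123456789abcdef0 \
      --destination-cidr-block 0.0.0.0/0 \
      --vpc-endpoint-id vpce-0123456789abcdef0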
Diagram: Inspecting all ingress/egress in the security provider VPC

Traffic flow

Diagram: Ingress traffic flow between an external user and an EC2 instance in the consumer VPC
Diagram: Egress traffic flow between an EC2 instance in the consumer VPC and an external user

Summary

With this deployment you can protect your AWS VPC using the robust security services offered by the BIG-IP platform and get the following benefits:

Scalability - deploy as many BIG-IP instances as you need based on performance and availability requirements
Transparent inspection - inspect the traffic without any address translation
Optimized compute - all BIG-IP devices are active and processing traffic

Next steps

Test the deployment yourself. Check out our self-service lab that you can deploy in your own AWS account (fully automated deployment using Terraform): https://github.com/f5devcentral/f5-digital-customer-engagement-center/tree/main/solutions/security/ingress-egress-fw

Deploy BIG-IP in AWS with HA across AZ's - without using EIP's
Background:

The CloudFormation templates that are provided and supported by F5 are an excellent resource for customers to deploy BIG-IP VE in AWS. Along with these templates, the documentation guiding your F5 deployment in AWS is an excellent resource, and of course, DevCentral articles are helpful. I recommend reading about HA topologies in AWS to start. I hope my article today can shed more light on an architecture that will suit a specific set of requirements: no Elastic IPs (EIPs), high availability (HA) across AZs, and multiple VPCs.

Requirements behind this architecture choice:

I recently had a requirement to deploy BIG-IP appliances in AWS across AZs. I had read the official deployment guide, but I wasn't clear on how to achieve failover without EIPs. I was given 3 requirements:

HA across AZs. In this architecture, we required a pair of BIG-IP devices in Active/Standby, where each device was in a different AZ. I needed to be able to fail over between devices.
No EIPs. This requirement existed because a 3rd-party firewall was already exposed to the Internet with a public IP address. That firewall would forward inbound requests to the BIG-IP VE in AWS, which in turn proxied traffic to a pair of web servers. Therefore, there was no reason to associate an EIP (a public IP address) with the BIG-IP interface. In my demo below I have not exposed a public website through a 3rd-party firewall, but doing so is a simple addition to this demo environment.
Multiple VPCs. This architecture had to allow for multiple VPCs in AWS. There was already a "Security VPC" which contained firewalls, BIG-IP devices, and other devices, and I had to be able to use these devices for protecting applications that were spread across 1 or more disparate VPCs.

Meeting the requirements: HA across AZs

This is the easiest of the requirements to meet, because F5 has already provided templates to deploy these in AWS. I personally used a 2-NIC device, with a BYOL license, deployed to an existing VPC, so that meant my template was this one. After this deployment is complete, you should have a pair of devices that will sync their configuration.

At time of failover

The supported F5 templates will deploy with the Advanced HA iApp. It is important that you configure this iApp after you have completed your AWS deployments. The iApp uses IAM permissions deployed with the template to make API calls to AWS at the time of failover. The API calls will update the route tables that you specify within the iApp wizard. Because this iApp is installed on both devices, either device can update the route in your route tables to point to its own interface.

Update as of Dec 2019

This article was first written in Feb 2019, and in Dec 2019 F5 released the Cloud Failover Extension (CFE), which is a cloud-agnostic, declarative way to configure failover in multiple public clouds. You can use the CFE instead of the Advanced HA iApp to achieve high availability between BIG-IP devices in cloud.

Update as of Apr 2020

Your API calls will typically be sent to the public Internet endpoints for AWS EC2 API calls. Optionally, you can use AWS VPC endpoints to keep your API calls to AWS EC2 from traversing the public Internet. My colleague Arnulfo Hernandez has written an article explaining how to do this.
Let’s call it an “alien range” because it “doesn’t belong” in your VPC and you couldn’t assign IP addresses from this range to your AWS ENI’s. Despite that, now create a route table within AWS that points this “alien range” to your Active BIG-IP device’s ENI (if you’re using a 2+ nic device, point it to the correct data plane NIC, not the Mgmt. interface). Don’t forget to associate the route table with specific subnets, per your design. Alternatively, you could add this route to the default VPC route table. Create a VIP on your active device Now create a VIP on your active device and configure the IP address as an IP within your alien range. Ensure the config replicates to your standby device. Ensure that source/destination checking is disabled on the ENI’s that your AWS routes are pointing to (on both Standby and Active devices). You should now have a VIP that you can target from other hosts in your VPC, provided that the route you created above is applied to the traffic destined to the VIP. Multiple VPC’s For extra credit, we’ll set up a Transit Gateway. This will allow other VPCs to route traffic to this “alien range” also, provided that the appropriate routes exist in the remote VPC’s, and also are applied to the appropriate Transit Gateway Route Table. Again, I’m recycling ideas that have already been laid out in other DevCentral posts. I won’t re-hash how to set up a transit gateway in AWS, because you can follow the linked post above. Sufficed to say, this is what you will need to set up if you want to route between multiple VPC’s using a transit gateway: 2 or more VPC’s A transit gateway in AWS Multiple transit gateway attachments that attach the transit gateway and each VPC you would like to route between. You will need one attachment per VPC. A transit gateway route table that is associated with each attachment. I will point out that you need to add a route for your “alien range” in your transit gateway route table, and in your remote VPC’s. That way, hosts in your remote VPC’s will forward traffic destined to your alien range (VIP network) to the transit gateway, and the transit gateway will forward it to your VPC, and the route you created in Step A will forward that traffic to your active BIG-IP device. Completed design: After the above configuration, you should have an environment that looks like the diagram below: Tips Internet access for deployments: When you deploy your BIG-IP devices, they will need Internet access to pull down some resources, including the iApp. So if you are deploying devices into your existing VPC, make sure you have a reachable Internet Gateway in AWS so that the devices have Internet access through both their management interface, and their data plane interface(s). Internet access for failover: Remember that an API call to AWS will still use an outbound request to the Internet. Make sure you allow the BIG-IP devices to make outbound connections over HTTPS to the Internet. If this is not available, you will find that your route tables are not updated at time of failover. (If you have a hard requirement that your devices should not have outbound access to internet, you can follow Arnulfo's guide linked to above and use VPC endpoints to keep this traffic on your local VPC) iApp logs: you can enable this in the iApp settings. I have used the logs (in /var/ltm/log) to troubleshoot issues that I have created myself. That’s how I learned not to accidentally cut off Internet access for your devices! 
Multiple VPCs

For extra credit, we'll set up a Transit Gateway. This will allow other VPCs to route traffic to this alien range as well, provided that the appropriate routes exist in the remote VPCs and are applied to the appropriate Transit Gateway route table. Again, I'm recycling ideas that have already been laid out in other DevCentral posts, so I won't re-hash how to set up a transit gateway in AWS; you can follow the linked post above. Suffice it to say, this is what you will need to set up if you want to route between multiple VPCs using a transit gateway:

- Two or more VPCs.
- A transit gateway in AWS.
- Multiple transit gateway attachments that attach the transit gateway and each VPC you would like to route between. You will need one attachment per VPC.
- A transit gateway route table that is associated with each attachment.

I will point out that you need to add a route for your alien range to your transit gateway route table and to your remote VPCs. That way, hosts in your remote VPCs will forward traffic destined to your alien range (the VIP network) to the transit gateway, the transit gateway will forward it to your Security VPC, and the route you created earlier will forward that traffic to your active BIG-IP device.

Completed design

After the above configuration, you should have an environment that looks like the diagram below.

Tips

- Internet access for deployments: When you deploy your BIG-IP devices, they will need Internet access to pull down some resources, including the iApp. So if you are deploying devices into your existing VPC, make sure you have a reachable Internet Gateway in AWS so that the devices have Internet access through both their management interface and their data-plane interface(s).
- Internet access for failover: Remember that an API call to AWS will still use an outbound request to the Internet. Make sure you allow the BIG-IP devices to make outbound connections over HTTPS to the Internet; if this is not available, you will find that your route tables are not updated at time of failover. (If you have a hard requirement that your devices must not have outbound access to the Internet, you can follow Arnulfo's guide linked above and use VPC endpoints to keep this traffic inside your VPC.)
- iApp logs: You can enable logging in the iApp settings. I have used the logs (in /var/log/ltm) to troubleshoot issues that I created myself. That's how I learned not to accidentally cut off Internet access for your devices!
- Don't forget about return routes if SNAT is disabled: Just like in on-prem environments, if you disable SNAT, the pool member will need to route return traffic back to the BIG-IP device. You will commonly set up a default route (0.0.0.0/0) in AWS, point it at the ENI of the active BIG-IP device, and associate this route table with the subnet containing the pool members. If the pool members are in a remote VPC, you will need to create this route in the transit gateway route table as well (see the sketch after this list).
- Don't accidentally cut off Internet access: When you configure the default route of 0.0.0.0/0 to point to eth1 of the BIG-IP device, don't apply this route to every subnet in your Security VPC. It is easy to do so accidentally, but remember that it could cause the API calls that update route tables to fail when the Standby device becomes Active.
- Don't forget to disable source/dest check on your ENIs: This is configured by the template, but if you have other devices that require it, remember to check this setting.
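The supporting routes from the tips above can also be driven through the EC2 API. Here is a minimal sketch, assuming appropriate IAM permissions; every resource ID and CIDR is an illustrative placeholder.

```python
# Hedged sketch: one-time routing setup for the SNAT-disabled return path
# and for reaching the alien range from remote VPCs via the transit gateway.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ALIEN_CIDR = "10.99.0.0/24"                 # hypothetical alien (VIP) range
BIGIP_ENI = "eni-0123456789abcdef0"         # hypothetical Active BIG-IP data-plane ENI
POOL_SUBNET_RTB = "rtb-0aaaabbbbccccdddd0"  # hypothetical pool-member subnet route table
TGW_RTB = "tgw-rtb-0123456789abcdef0"       # hypothetical transit gateway route table
SECURITY_VPC_ATTACHMENT = "tgw-attach-0123456789abcdef0"  # hypothetical attachment ID

# Return route for SNAT-disabled pools: send 0.0.0.0/0 back through the BIG-IP.
# Associate this route table only with the pool-member subnets, per the tip above.
ec2.create_route(
    RouteTableId=POOL_SUBNET_RTB,
    DestinationCidrBlock="0.0.0.0/0",
    NetworkInterfaceId=BIGIP_ENI,
)

# Steer traffic for the alien range from remote VPCs toward the Security VPC.
ec2.create_transit_gateway_route(
    DestinationCidrBlock=ALIEN_CIDR,
    TransitGatewayRouteTableId=TGW_RTB,
    TransitGatewayAttachmentId=SECURITY_VPC_ATTACHMENT,
)

# Forwarding a range the ENI doesn't own requires source/dest check disabled.
ec2.modify_network_interface_attribute(
    NetworkInterfaceId=BIGIP_ENI,
    SourceDestCheck={"Value": False},
)
```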
F5 in AWS Part 4 - Orchestrating BIG-IP Application Services with Open-Source tools

Updated for Current Versions and Documentation

Part 1: AWS Networking Basics
Part 2: Running BIG-IP in an EC2 Virtual Private Cloud
Part 3: Advanced Topologies and More on Highly-Available Services
Part 4: Orchestrating BIG-IP Application Services with Open-Source Tools
Part 5: Cloud-init, Single-NIC, and Auto Scale Out of BIG-IP in v12

The following post references code hosted at F5's Github repository f5networks/aws-deployments. This code provides a demonstration of using open-source tools to configure and orchestrate BIG-IP. Full documentation for F5 BIG-IP cloud work can be found at Cloud Docs: F5 Public Cloud Integrations.

So far we have talked about AWS networking basics, how to run BIG-IP in a VPC, and highly-available deployment footprints. In this post, we'll move on to my favorite topic: orchestration.

By this point, you probably have several VMs running in AWS. You've lost track of which configuration is set up on which VM, and you have found yourself slowly going mad as you toggle between the AWS web portal and several SSH windows. I call this "point-and-click" purgatory. Let's be blunt: why would you move to the cloud without realizing the benefits of automation, of which cloud is a large enabler?

If you remember our second article, we mentioned CloudFormation templates as a great way to deploy a standardized set of resources (perhaps BIG-IP plus the additional virtualized network resources) in EC2. This is a great start, but we need to configure these resources once they have started, and we need a way to define and execute workflows which will run across a set of hosts, perhaps even hosts which are external to the AWS environment. Enter the use of open-source configuration management and workflow tools that have been popularized by the software development community.

Open-source configuration management and AWS APIs

Lately, I have been playing with Ansible, which is a Python-based, agentless workflow engine for IT automation. By agentless, I mean that you don't need to install an agent on hosts under management. Ansible, like the other tools, provides a number of libraries (or "modules") which provide the ability to manage a diverse collection of remote systems. These modules are typically implemented through the use of API calls, often over HTTP.

Out of the box, Ansible comes with several modules for managing resources in AWS. While the EC2 libraries provided are useful for basic orchestration use cases, we decided it would be easier to atomically manage sets of resources using the CloudFormation module. In doing so, we were able to deploy entire CloudFormation stacks which would include items like VPCs, networking elements, BIG-IP, app servers, etc. Underneath the covers, the Ansible CloudFormation module and our own project use the boto Python library to interact with AWS service endpoints (see the sketch below).

Ansible provides some basic modules for managing BIG-IP configuration resources. These, along with libraries for similar tools, can be found here:

- Ansible
- Puppet
- SaltStack

In the rest of this post, I'll discuss some work colleagues and I have done to automate BIG-IP deployments in AWS using Ansible. While we chose to use Ansible, we readily admit that Puppet, Chef, Salt, and whatever else you use are all appropriate choices for implementing deployment and configuration management workflows for your network. Each has its upsides and downsides, and different tools may lend themselves to different use cases for your infrastructure. Browse the web to figure out which tool is right for you.
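As an illustration of what the CloudFormation-based approach looks like at the API level, here is a minimal boto-style sketch (written against the current boto3 library) of deploying a stack. The stack name, template URL, and parameter keys are illustrative assumptions, not the project's actual values.

```python
# Hedged sketch: deploying a CloudFormation stack programmatically,
# the same operation the Ansible cloudformation module drives for you.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.create_stack(
    StackName="bigip-demo-stack",  # hypothetical stack name
    TemplateURL="https://s3.amazonaws.com/my-bucket/bigip-demo.template",  # hypothetical
    Parameters=[
        {"ParameterKey": "sshKey", "ParameterValue": "my-keypair"},  # hypothetical keys
        {"ParameterKey": "instanceType", "ParameterValue": "m4.xlarge"},
    ],
    Capabilities=["CAPABILITY_IAM"],  # required when the stack creates IAM roles
)

# Block until AWS reports the whole stack (VPC, BIG-IP, app servers, ...) is up.
cfn.get_waiter("stack_create_complete").wait(StackName="bigip-demo-stack")
```

Managing the stack as a unit means teardown is just as atomic: a single delete_stack call removes everything the template created.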
Using Standardized BIG-IP Interfaces

Speaking of APIs, for years F5 has provided the ability to programmatically configure BIG-IP using iControl SOAP. As the audiences performing automation work have matured, so have the weapons of choice. The new hot ticket is REST (Representational State Transfer), and guess what, BIG-IP has a REST interface (you can probably figure out what it is called). Together, iControl SOAP and iControl REST give you the power to manage nearly every configuration element and feature of BIG-IP. These interfaces become extremely powerful when you combine them with your favorite open-source configuration management tool and a cloud that allows you to spin up and down compute and networking resources.

In the project described below, we have also made use of iApps via iControl REST as a way to create a standard virtual server configuration with the correct policies and profiles. The documentation in Github describes this in detail, but our work shows how iApps provide a strongly supported way to manage network policy across engineering teams. For example, imagine that a team of software engineers has written a framework to deploy applications. You can package the network policy into iApps for various types of apps, and pass these to the teams writing the deployment framework.

Implementing a Service Catalog

To pull the above concepts together, a colleague and I put together the aws-deployments project. The goal was to build a simple service catalog which would enable a user to deploy a containerized application in EC2 with BIG-IP network services sitting in front. This is example code that is not supported by F5 support, but it is a proof of concept showing how you can fully automate production-like deployments in AWS. Some highlights of the project include:

- Use of iControl REST and iControl SOAP within Ansible playbooks to set up advanced topologies of BIG-IP in AWS.
- Automated deployment of a basic ASM web application firewall policy to protect a vulnerable web app (Hackazon).
- Use of iApps to manage virtual server configurations, including the WAF policy mentioned above.

Figure 1 - Generic architecture for automating application deployments in public or private cloud

In examining the code, you will see that we provide the opportunity to provision all the deployment models outlined in our earlier post (a single standalone VE, standalone BIG-IP VEs striped across availability zones, clusters within an availability zone, etc.). We used Ansible and the interfaces on BIG-IP to orchestrate the workflows associated with these deployment models. To perform the clustering step, we have used the iControl SOAP interface on BIG-IP. The final set of technologies used is depicted in Figure 2.

Figure 2 - Technologies used in the aws-deployments project on Github

Read the Code and Test It Yourself

All the code I have mentioned is available at f5networks/aws-deployments. We encourage you to download and run the code for yourself. Setting up a development environment with the necessary dependencies is easy: we have packaged all the dependencies for use with either Vagrant or Docker as development tools, and the instructions for either of these approaches can be found in the README.md or in the /docs directory. The following video shows an end-to-end usage example. (Keep in mind that the code has been updated since this video was produced.)
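To show what the iApp-driven approach looks like on the wire, here is a minimal iControl REST sketch for deploying an iApp service. It is not the project's actual playbook task: the template name, variable names, and pool members are illustrative assumptions that vary by iApp template and version.

```python
# Hedged sketch: deploying an iApp service through iControl REST.
import requests

BIGIP = "https://10.0.1.11"         # hypothetical management address
AUTH = ("admin", "admin-password")  # hypothetical credentials

service = {
    "name": "demo_http_app",
    "template": "/Common/f5.http",  # assumes this iApp template is installed
    "variables": [
        # Variable names below are placeholders; real names depend on the template.
        {"name": "pool__addr", "value": "10.99.0.100"},
        {"name": "pool__port", "value": "80"},
    ],
    "tables": [
        {
            "name": "pool__members",
            "columnNames": ["addr", "port", "connection_limit"],
            "rows": [
                {"row": ["10.0.2.10", "80", "0"]},
                {"row": ["10.0.2.11", "80", "0"]},
            ],
        }
    ],
}

resp = requests.post(
    f"{BIGIP}/mgmt/tm/sys/application/service",
    json=service,
    auth=AUTH,
    verify=False,  # demo only; validate certificates in production
)
resp.raise_for_status()
```

Because the iApp owns every object it creates, re-deploying the service with new values updates the whole virtual server configuration atomically, which is what makes it a clean hand-off point between network and application teams.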
At the end of the day, our goal for this work was to collect customer feedback. Please provide some by leaving a comment below, or by filing "pull requests" or "issues" in Github. In the next few weeks, we will be updating the project to include the Hackazon app mentioned above, show how to cluster BIG-IP across availability zones, and show how to deploy an ASM profile with an iApp. Have fun!