ltm policy
20 Topics

Extract SAN from Client SSL Certificate & Insert into HTTP Header
Hi folks, I'm working with some co-workers to set up some Slack.com forwarding in our environment. Mutual TLS and the insertion of the SAN from the client certificate into an HTTP header is a requirement. Can anyone help me come up with an iRule or LTM policy to extract the SAN/CN from the client SSL cert and insert it as an HTTP header? Here's some additional info from Slack:

Configure your TLS-terminating server to request client certificates. Your server should accept client certificates issued by DigiCert SHA2 Secure Server CA, an intermediate CA under DigiCert Global Root CA. These CAs are included in many standard CA certificate bundles.

1. Extract either of the following fields in the certificate: Subject Alternative Name: DNS:platform-tls-client.slack.com (by RFC 6125, this is the recommended field to extract), or Subject Common Name: platform-tls-client.slack.com.
2. Inject the extracted domain into a header, and forward the request to your application server. Here's an example header you might add to the request: X-Client-Certificate-SAN: platform-tls-client.slack.com.

Whatever you choose to call your header, check to make sure this header hasn't already been added to the request. Your upstream application server must know that the header was added by your TLS-terminating server as part of the mutual TLS process.
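An untested sketch of the kind of iRule the Slack instructions describe. The header name comes from their example; the exact text layout of the `X509::extensions` output varies by TMOS version, so the `findstr` offsets are assumptions, and the client SSL profile must be set to request/require client certificates for the cert to be available here:

```tcl
when HTTP_REQUEST {
    # Per Slack's guidance, strip any client-supplied copy so the header can't be spoofed
    HTTP::header remove "X-Client-Certificate-SAN"
    if { [SSL::cert count] > 0 } {
        # Try the Subject Alternative Name extension first (recommended by RFC 6125)
        set san [findstr [X509::extensions [SSL::cert 0]] "DNS:" 4 ","]
        if { $san eq "" } {
            # Fall back to the CN from the certificate subject
            set san [findstr [X509::subject [SSL::cert 0]] "CN=" 3 ","]
        }
        if { $san ne "" } {
            HTTP::header insert "X-Client-Certificate-SAN" $san
        }
    }
}
```

If the certificate needs to be captured earlier, a common variation stores the value in a session table or variable from the CLIENTSSL_CLIENTCERT event and inserts it in HTTP_REQUEST.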
Using LTM Policy to Redirect Host But Preserving Original URI

I am looking for some guidance and hopefully the community can help. We are trying to perform a host redirect using an LTM policy. The requirements are as follows: if the URL contains the URI /thisuri, forward the request to pool http_server; if the URL contains a URI that is not /thisuri, redirect the request to https://www.domain.com/[original_uri]. We managed to configure our LTM policy to do everything except preserve the URI from the original request when the URI is not /thisuri. Is preserving the URI from the original client request even possible when using an LTM policy? Has anyone tried doing something like this before? Our current logic is as follows (we are using the first-match policy strategy):

1. test_uri_redirect: Match all of the following conditions: HTTP URI > path > is > any of > /thisuri, at request time. Do the following when the traffic is matched: Forward Traffic > to pool > /Common/https_server, at request time.
2. test_host_redirect: Match all of the following conditions: HTTP URI > path > is not > any of > /thisuri, at request time. Do the following when the traffic is matched: Redirect > to location https://www.domain.com, at request time.

All that we are missing is how to tell the BIG-IP to preserve the original URI path. Any help would be much appreciated.
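Two approaches come to mind. Within the LTM policy itself, the Redirect action can evaluate a Tcl expression when the location is prefixed with "tcl:", e.g. a location of tcl:https://www.domain.com[HTTP::uri], which expands the original path and query string. Alternatively, the same logic as a small iRule sketch (pool and host names taken from the post above; untested):

```tcl
when HTTP_REQUEST {
    if { [HTTP::path] starts_with "/thisuri" } {
        # Matching requests go to the application pool
        pool http_server
    } else {
        # [HTTP::uri] carries the original path plus query string,
        # so the client is redirected to the same resource on the new host
        HTTP::respond 301 Location "https://www.domain.com[HTTP::uri]"
    }
}
```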
HTTP Security Headers - LTM Policies

Hi folks, I'm trying to create some LTM policies for the following:

•X-XSS-Protection
•X-Content-Type-Options
•Content-Security-Policy
•Strict-Transport-Security

I already have the following working iRules, but would like to use policies instead to limit the impact on CPU:

X-XSS-Protection:

when HTTP_RESPONSE {
    if { !([HTTP::header exists "X-XSS-Protection"]) } {
        HTTP::header insert "X-XSS-Protection" "1; mode=block"
    }
}

X-Content-Type-Options:

when HTTP_RESPONSE {
    if { !([HTTP::header exists "X-Content-Type-Options"]) } {
        HTTP::header insert "X-Content-Type-Options" "nosniff"
    }
}

Content-Security-Policy:

when HTTP_RESPONSE {
    if { !([HTTP::header exists "content-security-policy"]) } {
        HTTP::header insert "content-security-policy" "default-src 'self';"
    }
}

Strict-Transport-Security:

when HTTP_RESPONSE {
    if { !([HTTP::header exists "Strict-Transport-Security"]) } {
        HTTP::header insert "Strict-Transport-Security" "max-age=16070400"
    }
}

...and here's what I've come up with so far for LTM policy versions. Full disclosure, I'm a total novice with policies. Am I even close?
ltm policy X-XSS-Protection {
    last-modified 2017-11-28:13:37:23
    requires { http }
    rules {
        X-XSS-Protection {
            actions {
                0 { http-header response insert name X-XSS-Protection value "1; mode=block" }
            }
            conditions {
                0 { http-header response name X-XSS-Protection contains values { X-XSS-Protection } }
            }
        }
    }
    status published
    strategy first-match
}

ltm policy X-Content-Type-Options {
    last-modified 2017-11-28:13:37:19
    requires { http }
    rules {
        X-Content-Type-Options {
            actions {
                0 { http-header response insert name X-Content-Type-Options value nosniff }
            }
            conditions {
                0 { http-header response name X-Content-Type-Options contains values { X-Content-Type-Options } }
            }
        }
    }
    status published
    strategy first-match
}

ltm policy content-security-policy {
    last-modified 2017-11-28:13:37:25
    requires { http }
    rules {
        content-security-policy {
            actions {
                0 { http-header response insert name content-security-policy value "default-src 'self';" }
            }
            conditions {
                0 { http-header response name content-security-policy contains values { content-security-policy } }
            }
        }
    }
    status published
    strategy first-match
}

ltm policy Strict-Transport-Security {
    last-modified 2017-11-28:13:37:15
    requires { http }
    rules {
        Strict-Transport-Security {
            actions {
                0 { http-header response insert name Strict-Transport-Security value max-age=16070400 }
            }
            conditions {
                0 { http-header response name Strict-Transport-Security contains values { Strict-Transport-Security } }
            }
        }
    }
    status published
    strategy first-match
}
Inconsistent forwarding of HTTP/2 connections with layered virtual

Hi, I'm using a layered virtual configuration:

Tier 1: Virtual applying SNI routing (only an SSL persistence profile and an LTM policy, as described in https://www.devcentral.f5.com/kb/technicalarticles/sni-routing-with-big-ip/282018).
Tier 2: Virtual applies SSL termination and delivers the actual application, with the required profiles, iRules, and so on. If required, an additional LTM policy is applied for URI-based routing and forwards to a Tier 3 VS.
Tier 3 (optional, if required): Virtual delivers specific applications, like microservices; usually no monolithic apps.

This configuration is very robust and I've been working with it successfully for years. Important: Tier 1 uses one single IP address and a single port, so all Tier 2 and Tier 3 virtuals MUST be externally available through the same IP address and port. Now I have to publish the first HTTP/2 applications over this concept and I'm seeing strange behavior from the BIG-IP:

1. User requests www.example.com. IP and port point to the Tier 1 virtual.
2. The Tier 1 LTM policy forwards the request, based on the SNI, to the Tier 2 virtual "vs-int_www.example.com".
3. Within www.example.com there are references to piwik.example.com, which is another Tier 2 virtual behind my Tier 1 virtual.
4. User requests piwik.example.com. IP and port point to the Tier 1 virtual.
5. The Tier 1 LTM policy forwards the request to "vs-int_www.example.com" instead of "vs-int_piwik.example.com", probably not based on SNI, but on the existing TCP connection.

I'm afraid this behavior is a result of HTTP/2, especially because of the persistent TCP connection. I assume that because the connection ID (gathered from the browser devtools) is identical for requests to www.example.com and piwik.example.com. From the perspective of the browser I wouldn't expect such behavior, because the target hostname differs. I didn't configure HTTP/2 in full-proxy mode, as described in several articles; I've just enabled it on the client side. I would be very happy for any input on this.
Thanks in advance!
iRule to Rewrite User Agent Header

We have a client that has an internal browser policy for IE to run in compatibility mode due to legacy applications. Unfortunately, with the latest update to our application servers this causes issues with the content displaying properly. The application displays fine when the User-Agent appears as a compatible one. I have an iRule that I believe will work in concept; however, I'm getting errors (see below).

when HTTP_REQUEST {
    # Rewrite the User-Agent header value to show up as a supported browser
    if { [string toupper [HTTP::header User-Agent]] contains “MSIE 6” or “MSIE 7” or “MSIE 8” or “MSIE 9” or “MSIE 10”)}{
        # Replace the User-Agent header with a supported user agent
        HTTP::header replace “User-Agent” "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko"
    }
}

01070151:3: Rule [/Common/HTTP-UserAgentHeader-Rewrite-iRule] error: /Common/HTTP-UserAgentHeader-Rewrite-iRule:3: error: [parse error: PARSE syntax 156 {syntax error in expression " [string toupper [HTTP::header User-Agent]] contains “...": unexpected operator &}][{ [string toupper [HTTP::header User-Agent]] contains “MSIE 6” or “MSIE 7” or “MSIE 8” or “MSIE 9” or “MSIE 10”)}] /Common/HTTP-UserAgentHeader-Rewrite-iRule:5: error: [undefined procedure: User-Agent&8221][User-Agent”] /Common/HTTP-UserAgentHeader-Rewrite-iRule:5: error: [undefined procedure: Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko]["Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko"]
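The parse errors come from the curly "smart" quotes (likely introduced by a word processor) and from the chained `contains "A" or "B"` expression, which is not valid Tcl; each operand needs its own comparison. A corrected sketch of the same logic, using a glob-style switch so one arm covers all the legacy IE versions:

```tcl
when HTTP_REQUEST {
    # Normalize the User-Agent for case-insensitive matching
    set ua [string toupper [HTTP::header User-Agent]]
    # Any legacy IE token gets the header replaced with a supported browser string
    switch -glob $ua {
        "*MSIE 6*" -
        "*MSIE 7*" -
        "*MSIE 8*" -
        "*MSIE 9*" -
        "*MSIE 10*" {
            HTTP::header replace "User-Agent" "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko"
        }
    }
}
```

The `-` separator lets several patterns share one body, which avoids repeating the replacement for each IE version.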
LTM policy to redirect URI to correct pool

I am trying to create a policy which directs the request to the correct pool based on the URI. The logic is simple: if the URI starts with /help or /assistance, forward the traffic to POOL A at request time. All other traffic should go to the default pool, POOL B. In the VS, the default pool is configured as POOL B. But it is not working for some reason: default traffic is also hitting POOL A, resulting in 404s, and sometimes vice versa. I also tried the opposite, i.e. if the URI doesn't start with /help or /assistance, forward the traffic to POOL B at request time, with POOL A as the default pool in the VS. It still mixes the two and produces 404 errors. Can you please help me understand whether I need to create a default rule under the policy? I am unable to find a knowledge article which describes how to add a default rule that sends all remaining traffic to the default pool. So my policy looks like:

If URI starts_with /help or /assistance --> forward traffic to POOL A
Rest/default goes to POOL B

Thanks!
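With a first-match strategy, a rule with no conditions matches all traffic, so placing such a rule last acts as the explicit default. A tmsh-style sketch of that shape (policy and pool names are hypothetical, and the exact operand spelling may differ slightly by TMOS version):

```tcl
ltm policy uri_pool_routing {
    requires { http }
    strategy first-match
    rules {
        help_traffic {
            # Requests whose path begins with /help or /assistance go to POOL A
            conditions {
                0 { http-uri path starts-with values { /help /assistance } }
            }
            actions {
                0 { forward select pool pool_a }
            }
            ordinal 1
        }
        default_traffic {
            # No conditions: matches everything that fell through the rule above
            actions {
                0 { forward select pool pool_b }
            }
            ordinal 2
        }
    }
}
```

In the GUI this corresponds to a final rule whose match section is left as "All traffic".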
Working without trailing slash in LTM rewrite profile URI rules

Hi, I am trying to implement a simple reverse proxy with load balancing based on the URI path. Here is the example:

F5 VIP 1 listening on main.example.com:80 (default HTTP-to-HTTPS redirect iRule applied)
F5 VIP 2 listening on main.example.com:443
App server 1 listening on foo.example.com:443
App server 2 listening on bar.example.com:443
App server 3 listening on portal.example.com:443

Rewrite rule and load-balancing rule examples:

https://main.example.com -> https://portal.example.com/src/portal/ (App server 3)
https://main.example.com/aa/ -> https://foo.example.com/aa/ (App server 1)
https://main.example.com/bb/cc/ -> https://foo.example.com/bb/cc/ (App server 1)
https://main.example.com/dd/ -> https://bar.example.com/dd/ (App server 2)
https://main.example.com/dd -> https://bar.example.com/dd/ (App server 2)

So basically there are three different back-end app servers, each listening on a different virtual host, and client requests should be directed to these servers based on the URI path, while the host part of the URL must also be rewritten in all headers and in the whole HTML content. The end user must always see only main.example.com in their browser's address field. In prior TMOS versions, the ProxyPass iRule was used for such functionality. But since my case is not too complicated and I am running 11.6, there is a way to supplant the ProxyPass functionality with built-in features: an LTM rewrite profile and an LTM policy. I do the necessary URI rewrite in the rewrite profile via URI rules, and the request forwarding in LTM policy rules. Everything works just fine, except one small annoying thing. Users want to have the option to omit the trailing slash in the URI path when calling a default resource within a directory. So, for example, they want to be able to call main.example.com/dd and get the default resource from the /dd/ directory. My problem is that the LTM rewrite profile does not allow me to specify URI rules without a slash at the end of the URI.
And without it, the whole concept does not work. When the user calls main.example.com/dd, the F5 does not match this request to any URI rewrite rule, so the host part stays "main" instead of being rewritten to "bar". The LTM policy actually forwards the request to the correct app server, because in the LTM policy I am able to declare a condition "if URI path begins with /dd". But app server 2 does not accept a request for virtual host 'main', so I get an error. And I cannot do the URL rewrite in the LTM policy; I need to rewrite all links in headers, cookies, and content, so I need the LTM rewrite profile to accomplish all that. Also, something like 'main.example.com/zz' can be a legitimate request for a file called 'zz' inside the root directory of the app server. So the F5 needs to be able to rewrite requests without a trailing slash as well, and catch the HTTP redirects and rewrite them accordingly in HTTP responses. Blindly inserting '/' at the end of each request is hence not possible. Any idea would be much appreciated! Thanks.
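If the set of directory paths is known in advance, one workaround is a small iRule on the HTTPS virtual that redirects bare directory requests to their trailing-slash form, so the client re-requests the URI in a shape the rewrite profile's URI rules can match. An untested sketch, with the directory names taken from the mapping above; it avoids the /zz ambiguity precisely because only known directories are listed:

```tcl
when HTTP_REQUEST {
    # Bare requests for known directories (e.g. /dd) are redirected to the
    # trailing-slash form (/dd/); anything else, such as /zz, passes through
    switch -- [string tolower [HTTP::path]] {
        "/aa" -
        "/bb/cc" -
        "/dd" {
            HTTP::respond 301 Location "https://[HTTP::host][HTTP::path]/"
        }
    }
}
```

The cost is one extra round trip for the redirect, but the rewrite profile then sees only URIs it has rules for.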
Policy action getting overwritten by iRule

Hello everyone, I'm currently using an iRule to publish applications (filtering by URL). At the end of the iRule I have a redirect by default:

when HTTP_REQUEST {
    set path [string tolower [HTTP::path]]
    switch -glob [string tolower [HTTP::host]] {
        "example.com" { pool example-com-pool }
        default { HTTP::respond 301 Location "https://[HTTP::host][HTTP::uri]" }
    }
}

I'm having an issue migrating from the iRule to LTM policies. My policy rule is matching (thanks to the log action), but the forward-traffic action is not working: I'm redirected to HTTPS by the default condition of the iRule. So it looks like the iRule is overwriting what my policy is doing. Has anyone encountered this issue? Thanks!
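This matches the usual execution order: the LTM policy is evaluated before iRule events fire, so the iRule's explicit HTTP::respond in the default arm overrides the pool the policy selected. A sketch of one way around it (host and pool names from the post above), assuming the policy should own the unmatched traffic: drop the default arm so those requests fall through to the policy's forwarding action.

```tcl
when HTTP_REQUEST {
    # Handle only the hosts this iRule owns; requests that don't match any
    # arm fall through, leaving the LTM policy's forward action in effect
    switch -glob [string tolower [HTTP::host]] {
        "example.com" { pool example-com-pool }
    }
}
```

The longer-term fix is to move the remaining host matches into policy rules as well, so only one mechanism makes the forwarding decision.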
Convert language set iRule to LTM Policy

Hi folks, hoping someone can offer some advice on converting an iRule to an LTM policy. The rule is used to set a cookie which specifies either English or Spanish language on one of our websites. I have very limited experience writing iRules or policies (the iRule in question was written by a consultant for us long ago). Below is the existing iRule:

# Determine whether to write the cookie
when HTTP_REQUEST {
    # Log our host and IP address
    log local0. "Host = [HTTP::host]; Client = [IP::client_addr]"
    # Set our values based on host name, then by cookie's existence + value
    if { [HTTP::host] contains "es.test.company.com" } {
        # Does cookie exist? Is its value 'es'?
        if { ([HTTP::cookie exists "i18next"]) && ([HTTP::cookie value i18next] equals "es") } {
            log local0. "Cookie exists and is set properly to 'es'"
            set write_cookie "0"
        } else {
            log local0. "Cookie will be written as 'es'"
            # Flag to write a cookie
            set write_cookie "1"
            set value "es"
        }
    } else {
        # Does cookie exist? Is its value 'en'?
        if { ([HTTP::cookie exists "i18next"]) && ([HTTP::cookie value i18next] equals "en") } {
            log local0. "Cookie exists and is set properly to 'en'"
            set write_cookie "0"
        } else {
            log local0. "Cookie will be written as 'en'"
            # Flag to write a cookie
            set write_cookie "1"
            set value "en"
        }
    }
}

# Now we set/update the cookie
when HTTP_RESPONSE {
    if { $write_cookie == "1" } {
        log local0. "Setting cookie to $value"
        HTTP::cookie insert name "i18next" value $value domain "company.com" path "/"
        HTTP::cookie secure i18next enable
    }
}

...and here is what I've come up with so far for an LTM policy (which isn't working):

ltm policy es.test.company.com-locality {
    requires { http }
    rules {
        domain_es {
            actions {
                0 { http-set-cookie response insert domain company.com name i18next path / value es }
            }
            conditions {
                0 { http-host host values { es.test.company.com } }
                1 { http-cookie name i18next not values { es } }
            }
            ordinal 2
        }
        domain_es_i18next_es {
            conditions {
                0 { http-host host values { es.test.company.com } }
                1 { http-cookie name i18next values { es } }
            }
            ordinal 1
        }
        en {
            actions {
                0 { http-set-cookie response insert domain company.com name i18next path / value en }
            }
            conditions {
                0 { http-host host not values { es.test.company.com } }
                1 { http-cookie name i18next not contains values { en } }
            }
            ordinal 4
        }
        i18next_en {
            conditions {
                0 { http-host host not values { es.test.company.com } }
                1 { http-cookie name i18next contains values { en } }
            }
            ordinal 3
        }
    }
    strategy all-match
}

Thanks!
F5 OpenStack Testing Methodology: Part Two

In our first article, we briefly discussed the why and the how of testing for the OpenStack team. In this article, we'd like to elaborate more on the testing methodology and some of the drivers behind the tests we create. Curiously, this isn't defined by the categories of unit/functional/system tests, but in a more realistic way. We like to accumulate the above categories of tests in a way that gives us well-rounded coverage of our feature. Here's how we think about testing.

Use-Case Tests:

This is less of a type of test and more of a mindset to be in while developing a test plan. First and foremost, we develop our tests with the customer use-case(s) in mind. These are what we often refer to as 'happy-path' tests, because the customer certainly doesn't hope for something to go wrong. They hope for all things to come up roses, and the first tests we write ensure this is true. It may seem like the low-hanging fruit, but it is an important step in the test automation process to convince ourselves that, all things being perfect in the universe, our new feature can accomplish what it needs to. Manual testing is often the true first step in vetting a new feature, but it's prone to error, and it may assume some tribal knowledge that new developers don't have. The use-case tests give us a first glimpse into new functionality, and if we find bugs here, they are very often critical in nature. The nature of a use-case is pretty simple: we use the requirements specifications for the feature, verbatim, as inspiration for the test. This may mean talking to the customer directly for further clarification on their expectations. They tell us they want X, Y, and Z when they provide the product with A, B, and C, and we write a test that does exactly that. We may go a step further and do additional validations based on our intimate knowledge of the feature, but the core of the test is driven by the customer's needs.
One important thing to note here is that these tests may not all be full Tempest tests (end-to-end system tests). They are a mixture of unit, functional, and system tests. The unit tests are often written as we develop, to offer some white-box testing of functions or modules. The functional tests may require a real BIG-IP device, but no actual OpenStack installation.

Negative Tests:

The negative tests (unhappy path) aim to ensure the feature falls over very gracefully, like a ballet dancer, if something ever goes wrong. When developing this type of test, we put on our creative hats and try to come up with interesting ways in which things could go awry. An example might be a link flapping on some network in the communication pathway, or simple network lag. Another might be a mistake in the configuration for the feature. This type of test can be approached in two general ways: black-box and white-box. The black-box tests are better suited to Tempest and functional tests, because they reproduce real-world situations, such as an untested environment, with real infrastructure. The unit tests are good for white-box testing, because we can craft a specific set of arguments to a function/method that we know will fail. Then we ensure evasive action is taken appropriately, or that the proper log messages show up to tell someone that something went wrong. Negative tests are where we have a whole lot more fun. Happy-path tests generally evoke a small smile of victory, but we're only really excited when we start breaking things. We make notes while writing code that 'this' function would be good for a unit test, and 'this' may be suited to a Tempest test. Our notes derive from knowledge that a specific algorithm is terrific when we are in a happy-path situation, but outside of that, the code may be peanut brittle. Mentally, we say, "Yes, this works in one way, but we need to really batter it with tests."
This often stems from a deep suspicion of ourselves and our own abilities, but that is a topic for another article. We are never happy with our negative tests until we find bugs. If we don't find issues, then we aren't writing our tests correctly.

Stress/Fuzz/Performance Tests:

The remainder of testing encapsulates a whole host of use cases, edge testing, and stress testing. I think this would be a great topic for a third version of this article. There are many paths to go by here, and we can really turn on the creativity again to ensure our feature survives in the face of great adversity (like Sean Astin in Rudy).

Story Time:

As an example of most of the above, I'll demonstrate how we wrote tests for the new Layer 7 Content Switching feature in our OpenStack LBaaSv2 Agent and OpenStack LBaaSv2 Driver. The feature provides a way for customers to use the BIG-IP (via OpenStack's Neutron LBaaS product) to shape traffic on their networks. This is done by deploying LTM policies and LTM policy rules on the BIG-IP. I developed the translation layer of the agent, which takes in the OpenStack description for layer 7 policies and rules and converts it into something the BIG-IP can understand. This translation takes in a JSON object and produces a new JSON object. So I naively started with unit tests that validated that the JSON object coming out matched what I expected based on the rules of translation. Then I quickly saw that it doesn't make much difference what the result of the translation looks like if it cannot be deployed on the device. So I morphed these unit tests into functional-type tests against a real BIG-IP, where I deployed the policies and rules. This was just the first step in my happy-path use-case test plan. It was kind of boring, but informative. I found a couple of bugs that will never embarrass me across the company. However, I knew much more exciting things were around the corner.
I then moved into writing system tests (Tempest) for testing the deployment of these policies and running real traffic through the BIG-IP, to determine whether the set of policies and rules shaped the traffic the way I expected. This is also happy path, but it made me mindful of more interesting use-case tests that would be good to implement. What if I were to create five policies, each with two rules, then reorder the policies from an OpenStack perspective? Is my traffic now steered the way I expect? Does the BIG-IP look the way I expect it to? Then we moved on to conducting negative tests. I say 'conducting' because they may not be automated, but we at least need to verify the feature fails in informative ways. If a policy fails to deploy, can I validate that the agent log has the proper message, so a customer isn't pulling their hair out trying to figure out what went wrong? One good gauge for such a thing is how informative the failure messages are to me as a developer; if I can't figure out what went wrong at first glance, then a customer likely won't either. And what happens if an OpenStack user attempts to create a rule we don't currently implement? These are the types of tests where we begin to rub our hands together like Dr. Evil. What happens when we want to steer traffic based on whether a request header called "W_AND_P" is defined and the value of the header is the first chapter of War and Peace? We want to find those flaws in our code before the customer does. That's our main driver. We hope you have enjoyed our second installment on testing in OpenStack. Leave us some feedback on possible next topics of discussion, or if you're looking for further information.