cookies
Persistent and Persistence, What's the Difference?
The English language is one of the most expressive, and confusing, in existence. Words can have different meanings based not only on context, but on placement within a given sentence. Add in the twists that come from technical jargon and suddenly you've got words meaning completely different things. This is evident in the use of persistent and persistence. While the conceptual basis of persistence and persistent is essentially the same, in reality they refer to two different technical concepts.

Both persistent and persistence relate to the handling of connections. The former is often used as a general description of the behavior of HTTP and, necessarily, TCP connections, though it is also used in the context of database connections. The latter is most often related to TCP/HTTP connection handling, but almost exclusively in the context of load balancing.

Persistent

Persistent connections are connections that are kept open and reused. The most commonly implemented form of persistent connections is HTTP, with database connections a close second. Persistent HTTP connections were implemented as part of the HTTP 1.1 specification as a method of improving the efficiency of HTTP in general.

Related links: HTTP 1.1 RFC; Persistent Connection Behavior of Popular Browsers; Persistent Database Connections; Apache Keep-Alive Support; Cookies, Sessions, and Persistence.

Before HTTP 1.1, a browser would generally open one connection per object on a page in order to retrieve all the appropriate resources. As the number of objects in a page grew, this became increasingly inefficient and significantly reduced the capacity of web servers while causing browsers to appear slow to retrieve data. HTTP 1.1, and the Keep-Alive header in HTTP 1.0, were aimed at improving the performance of HTTP by reusing TCP connections to retrieve objects: they made the connections persistent, such that they could be reused to send multiple HTTP requests over the same TCP connection.

Similarly, this notion was implemented by proxy-based load-balancers as a way to improve the performance of web applications and increase capacity on web servers. Persistent connections between a load-balancer and web servers are usually referred to as TCP multiplexing. Just like browsers, the load-balancer opens a few TCP connections to the servers and then reuses them to send multiple HTTP requests.

Persistent connections, both in browsers and load-balancers, have several advantages:

- Less network traffic due to less TCP setup/teardown. It requires no less than 7 exchanges of data to set up and tear down a TCP connection, so each connection that can be reused reduces the number of exchanges required, resulting in less traffic.
- Improved performance. Because subsequent requests do not need to set up and tear down a TCP connection, requests arrive faster and responses are returned more quickly. TCP also has built-in mechanisms, for example window sizing, to address network congestion; persistent connections give TCP the time to adjust itself appropriately to current network conditions, improving overall performance. Non-persistent connections are not able to adjust because they are opened and almost immediately closed.
- Less server overhead. Servers are able to increase the number of concurrent users served because each user requires fewer connections through which to complete requests.
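Given the F5 context of this forum, persistent-connection behavior is easy to observe at the proxy itself. A minimal iRule sketch, assuming a virtual server with a standard HTTP profile attached, that simply logs how each client signals connection reuse:

    when HTTP_REQUEST {
        if { [HTTP::version] eq "1.0" } {
            # HTTP/1.0 connections close by default; the client must opt in
            if { [string tolower [HTTP::header "Connection"]] contains "keep-alive" } {
                log local0. "HTTP/1.0 client requested a persistent (keep-alive) connection"
            }
        } else {
            # HTTP/1.1 connections are persistent by default; "close" opts out
            if { [string tolower [HTTP::header "Connection"]] contains "close" } {
                log local0. "HTTP/1.1 client asked for this connection to be closed"
            }
        }
    }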
Persistence

Persistence, on the other hand, is related to the ability of a load-balancer or other traffic management solution to maintain a virtual connection between a client and a specific server. Persistence is often referred to in the application delivery networking world as "stickiness", while in the web and application server demesne it is called "server affinity". Persistence ensures that once a client has made a connection to a specific server, subsequent requests are sent to the same server. This is very important for maintaining state and session-specific information in some application architectures and for the handling of SSL-enabled applications.

Examples of persistence: Hash Load Balancing and Persistence; LTM Source Address Persistence; Enabling Session Persistence; 20 Lines or Less #7: JSessionID Persistence.

When the first request is seen by the load-balancer, it chooses a server. On subsequent requests the load-balancer will automatically choose the same server to ensure continuity of the application or, in the case of SSL, to avoid the compute-intensive process of renegotiation. This persistence is often implemented using cookies but can be based on other identifying attributes such as IP address. Load-balancers that have evolved into application delivery controllers are capable of implementing persistence based on any piece of data in the application message (payload) or headers, or in the transport protocol (TCP) and network protocol (IP) layers.

Some advantages of persistence are:

- Avoiding SSL renegotiation. By ensuring that SSL-enabled connections are directed to the same server throughout a session, it is possible to avoid renegotiating the keys associated with the session, which is compute and resource intensive. This improves performance and reduces overhead on servers.
- No need to rewrite applications. Applications developed without load balancing in mind may break when deployed in a load-balanced architecture because they depend on session data that is stored only on the original server on which the session was initiated. Load-balancers capable of session persistence ensure that those applications do not break, by always directing requests to the same server and preserving the session data without requiring that applications be rewritten.

Summary

So persistent connections are connections that are kept open so they can be reused to send multiple requests, while persistence is the process of ensuring that connections and subsequent requests are sent to the same server through a load-balancer or other proxy device. Both are important facets of communication between clients, servers, and mediators like load-balancers, and both increase the overall performance and efficiency of the infrastructure as well as improving the end-user experience.
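For readers on BIG-IP, a hedged tmsh sketch of what cookie-based persistence looks like in practice; the profile, cookie, and virtual server names here are assumptions:

    # create a cookie-insert persistence profile
    create ltm persistence cookie app_persist { method insert cookie-name app_persist_cookie }
    # make it the default persistence profile on a virtual server
    modify ltm virtual vs_app { persist replace-all-with { app_persist } }

With insert-mode cookie persistence the BIG-IP itself writes the cookie, so the application needs no changes, which is exactly the "no need to rewrite applications" advantage described above.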
Sessions and Cookies and Persistence, oh my!

At some point (you hope!) it becomes necessary to implement load balancing for your applications. So you went out and got one, either from a hardware vendor or maybe as a downloaded solution, and put it into place. Now you're ready to go, right? Maybe not just yet. Do your applications require persistence? Yes? You did remember to validate that your solution is capable of performing persistence-based load balancing, didn't you? If you're shaking your head wondering why this application thing is important to load balancing, read on.

Persistence is one of the best examples of why it's so very important to understand how the applications you will be load balancing work, because if an application needs persistence, you may break it without a persistence-capable load-balancing solution.

The Relationship between Sessions and Cookies

Sessions are not cookies, but they can (and do) work together to create the illusion of persistence in an otherwise stateless protocol. Sometimes persistence is referred to as "stickiness" or "sticky connections". That's because what persistence does is ensure that a client connects to the "real" server on which his/her current session is active.

When a user connects the first time to a site, a session is created on the server to which the user was directed. If the site is load balanced and the user is directed to a second server on the next request, a new session is created. Obviously this is not an optimal situation. What we need is some mechanism to ensure that a user is reconnected to the same server for the duration of a session. Persistence is the process of ensuring that a user is connected to the same server every time they make a request within the boundaries of a single session. Even though users could be persisted based on their IP address, this is rarely done due to the sharing of IP addresses. Persistence is most often implemented using a cookie containing the server session id, because that is the most accurate method of determining where a user's session is currently stored.

If you're a web developer or administrator, you might think "that sounds a lot like server affinity". You'd be right, of course; server affinity and persistence are two different terms that mean the same thing.

Sessions are stored on the server and are not reliant on cookies being enabled in the client's browser. Sessions are where web developers store bits of application-relevant data that they may wish to use across requests. Shopping carts are the most ubiquitous example of session data, but there are other uses for it, especially in complex web applications like CRM (customer relationship management) or SFA (sales force automation) applications. Cookies store bits of data on the client (the browser) and are passed to the server via the HTTP header Cookie.

Without persistence, users would be unknowingly creating sessions willy-nilly across multiple web servers in a load-balanced environment. That's a waste of resources, as sessions will remain in memory on the web server until they time out according to the web server's configuration. Additionally, a lack of persistence breaks web applications, because the data stored in the session on previous requests is no longer accessible on Server 2 because it's still sitting over on Server 1. This is why it's so important that a load-balancing or application delivery solution is capable of handling persistence-based load distribution.
If your load-balancing solution works based on an industry-standard algorithm like round-robin, least-connections, or a weighted version of either, then you're likely to break those applications which require persistence, because those algorithms aren't taking session persistence needs into consideration.

Persisting Connections

The most common data used to persist connections is the SSL session id. SSL connections without persistence are like crust without the bread. Yeah, it's that bad. Basically, load balancing SSL without persistence doesn't work.

The second most common data used to persist connections is an application or server session id, like JSESSIONID or PHPSESSIONID. These IDs are automatically generated by applications and web servers, are generally passed to the client as a cookie on the first response, and are then used by the load balancer to determine to which server it should direct subsequent requests. An example of HTTP headers storing a JSESSIONID in a cookie:

    Cookie: JSESSIONID=9597856473431
    Cache-Control: no-cache
    Host: 127.0.0.2:8080
    Connection: Keep-Alive

Your chosen load-balancing or application delivery solution needs to be able to take application session data into consideration when making routing decisions. It must be able to look at HTTP headers and extract the data the web application stored to determine which server it should direct the request to, or you risk breaking your web applications and wasting resources on your servers.

Imbibing: Coffee

ADDITIONAL RESOURCES

- Colin has a great entry in his 20LOL series that implements JSessionID-based persistence. The iRule should be easily modified to support other types of application ID based persistence, as long as the value is stored in the HTTP headers somewhere.
- Joe has a short article on enabling session persistence on BIG-IP.
- Wikipedia has a great discussion on web server session management here.
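In the spirit of the 20LOL entry mentioned above, a minimal sketch of JSESSIONID-based persistence via an iRule; it assumes the virtual server uses a Universal persistence profile that defers to this rule, and the one-hour timeout is an assumption:

    when HTTP_RESPONSE {
        # learn the session id when the server first hands it out
        if { [HTTP::cookie exists "JSESSIONID"] } {
            persist add uie [HTTP::cookie "JSESSIONID"] 3600
        }
    }
    when HTTP_REQUEST {
        # reuse the session id so the request reaches the same server
        if { [HTTP::cookie exists "JSESSIONID"] } {
            persist uie [HTTP::cookie "JSESSIONID"] 3600
        }
    }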
Cookies with Duplicate Names, but different values not getting Secure and HttpOnly attributes set

We had an ASV scan come back with one of our applications not setting the Secure and HttpOnly attributes. When they are set at the application layer, it seems to break the application's SSO functionality. We are digging into that, but in the meantime we are using the following iRule to add the Secure and HttpOnly attributes. It works; however, I noticed that the application sends two cookies with identical names but different values. For one reason or another, the first cookie with the shared name gets the attributes and the second is ignored. We are exploring whether the application team needs these and, if not, we can remove them; until then, I'm trying to see if anyone else has had this issue or has thoughts on a solution. https://support.f5.com/csp/article/K84048752

    when HTTP_RESPONSE {
        foreach mycookie [HTTP::cookie names] {
            set ck_value [HTTP::cookie value $mycookie]
            set ck_path [HTTP::cookie path $mycookie]
            HTTP::cookie remove $mycookie
            HTTP::cookie insert name $mycookie value $ck_value path $ck_path version 1
            HTTP::cookie secure $mycookie enable
            HTTP::cookie httponly $mycookie enable
        }
    }

/jeff
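One possible workaround, sketched here as an untested idea rather than a verified fix: since the HTTP::cookie commands key on the cookie name, duplicates can instead be handled by operating on the raw Set-Cookie headers. The simplistic substring checks are assumptions and would need tightening for production:

    when HTTP_RESPONSE {
        # capture every Set-Cookie header, including duplicate names
        set cookies [HTTP::header values "Set-Cookie"]
        # strip all existing instances of the header
        while { [HTTP::header exists "Set-Cookie"] } {
            HTTP::header remove "Set-Cookie"
        }
        # re-insert each cookie with the missing attributes appended
        foreach ck $cookies {
            if { !([string tolower $ck] contains "secure") } {
                append ck "; Secure"
            }
            if { !([string tolower $ck] contains "httponly") } {
                append ck "; HttpOnly"
            }
            HTTP::header insert "Set-Cookie" $ck
        }
    }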
iRule to rewrite cookie domain to toplevel domain

I have two domains:

1. xyz.abc.com
2. xyz.abbc.com

Now when I am trying to change the domain of the cookie at the top level, the below iRule isn't working.

    when HTTP_RESPONSE {
        set a ".abc.com"
        set b ".abbc.local"
        foreach mycookie [HTTP::cookie names] {
            set cookieDomain [HTTP::cookie domain $mycookie]
            if { $cookieDomain contains $a } {
                HTTP::cookie domain $mycookie $a
            } elseif { $cookieDomain contains $b } {
                HTTP::cookie domain $mycookie $b
            }
        }
    }

There is no response from the iRule. Expected: the cookie domain for xyz.abc.com should be .abc.com, and for xyz.abbc.com it should be .abbc.com. Help?
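A hypothetical debugging step, not from the original thread: if the server's Set-Cookie headers carry no Domain attribute at all, [HTTP::cookie domain] returns an empty string and neither contains branch can ever match. Logging the values would confirm or rule that out:

    when HTTP_RESPONSE {
        foreach mycookie [HTTP::cookie names] {
            # an empty domain here means the Set-Cookie had no Domain attribute
            log local0. "cookie=$mycookie domain=\"[HTTP::cookie domain $mycookie]\""
        }
    }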
Five questions you need to ask about load balancing and the cloud

Whether you are aware of it or not, if you're deploying applications in the cloud or building out your own "enterprise class" cloud, you're going to be using load balancing. Horizontal scaling of applications is a fairly well understood process that involves (old skool) server virtualization of the network kind: making many servers (instances) look like one to the outside world. When you start adding instances to increase capacity for your application, load balancing necessarily gets involved, as it's the way in which horizontal scalability is implemented today. The fact that you may have already deployed an application in the cloud and scaled it up without recognizing this basic fact may lead you to believe you don't need to care about load balancing options. But nothing could be further from the truth. If you haven't asked already, you should. More than that, you need to understand the importance of load balancing and its implications for the application. That's even more true if you're considering an enterprise cloud, because it will most assuredly be your problem in the long run. Do not be fooled; the options available for load balancing and assuring availability of your application in the cloud will affect your application, if not right now, then later. So let's start with the five most important things you need to ask about load balancing and cloud environments, regardless of where they may reside.

#5 DIRECT SERVER RETURN

If you're going to be serving up video or audio, real-time streaming media, you should definitely be interested in whether or not the load balancing solution is capable of supporting direct server return (DSR). While there are pros and cons to using DSR, for video and audio content it's nearly an untouchable axiom of application delivery that you should enable this capability. DSR basically allows the server to return content directly to the client without being processed by any intermediary (other than routers, switches, and the like, which of course need to process individual packets). In most load balancing situations the responses from the server are returned via the same path they took to get to the server, notably through the load balancer. DSR allows responses to return outside the path of the load balancer or, if still returning through it, to do so unmolested. In the latter scenario the load balancer basically acts as a simple packet forwarder and does no additional processing on the packets. The advantage of DSR is that it removes any additional latency imposed by processing by intermediaries. Because real-time streaming media is very sensitive to the effects of latency (jitter), DSR is often suggested as a best practice when load balancing servers responsible for serving such content.

Question: Is it supported?

#4 HEALTH CHECKING

One of the ways in which load balancers/application delivery controllers make decisions regarding which server should handle which request is to understand the current status of the application. It's part of being context-aware, and it provides information about the application that is invaluable not just to the load balancing decision but to the overall availability of the application. Health checking allows a load balancing solution to determine whether a server/instance is "available" based on a variety of factors. At the simplest level an ICMP ping can be used to determine whether the server is available, but that tells it nothing of the state of the application. A three-way TCP handshake is the next step up the ladder, and this will tell the load balancing solution whether an application is capable of accepting connections, but still tells it nothing of the state of the application. A simple HTTP GET takes it one step further, but what's really necessary is the ability of the load balancing solution to retrieve actual data and ensure it is valid in order to consider an application "available". As the availability of an application may be (and should be, if it is not) one way to determine whether new instances are necessary or not, the ability to determine whether the actual application is available and responding appropriately is important in keeping costs down in a cloud environment, lest instances be launched for no reason or, more dangerously, instances not be launched when necessary due to an outage or failure. In an external cloud environment it is important to understand how the infrastructure determines when an application is "available" or "not" based on such monitoring, as subtle differences in what is actually being monitored/tested can impact application availability.

Question: What determines when an application (instance) is available and responding as expected?
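To make "retrieve actual data and ensure it is valid" concrete, here is a hedged BIG-IP tmsh sketch of an HTTP monitor; the URI, Host, and expected response string are assumptions about the application being checked:

    # mark a member up only when real application data comes back
    create ltm monitor http app_health send "GET /health HTTP/1.1\r\nHost: app.example.com\r\nConnection: close\r\n\r\n" recv "status: ok" interval 5 timeout 16

A plain TCP or ICMP monitor would pass even when the application behind the port is returning errors; matching on response content is what catches that case.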
#3 PERSISTENCE

Persistence is one of the most important facets of load balancing that every application developer, architect, and network professional needs to understand. Nearly every application today makes heavy use of application sessions to maintain state, but not every application utilizes a shared database model for its session management. If you're using standard application or web server session features to manage state in your application, you will need to understand whether the load balancing solution available supports persistence and how that persistence is implemented. Persistence basically ensures that once a user has been "assigned" a server/instance, all subsequent requests go to that same server/instance in order to preserve access to the application session. Persistence can be based on just about anything, depending on the load balancing solution available, but most commonly takes the form of either source IP address or cookie-based persistence. In the case of the former there's very little for you to do, though you should be somewhat concerned about the use of such a rudimentary method of enabling persistence, as it is quite possible (probable, in fact) that many users will be sharing the same source IP address due to NAT and masquerading at the edge of corporate and shared networks. If the persistence is cookie-based, then you'll need to understand whether you have the ability to determine what data is used to enable that persistence. For example, many applications use PHPSESSIONID or ASPSESSIONID, as it is routine for those environments to ensure that these values are inserted into the HTTP header and are available for such use. But if you can't configure the option yourself, you'll need to understand what values are used for persistence and ensure your application can support those values in order to match up users with their application state.

Question: How is persistence implemented?

#2 QUIESCING (BLEEDING) CONNECTIONS

Part of the allure of a cloud architecture is the ability to provision resources on demand. With that comes the assumption that you can also de-provision resources when they are no longer needed. One would further hope this process is automated and based on a policy configurable by the user, but we are still in the early days of cloud, so that may be just a goal at this point. Load balancers and clustering solutions can usually be told to begin quiescing (bleeding off) connections. This means that they stop distributing requests to the specified servers/instances but allow existing users to continue using the application until they are finished. It basically takes a server/instance out of the "rotation" but keeps it online until all users have finished and the server/instance is no longer needed. At that point, either through a manual or automated process, the server/instance can be de-provisioned or taken offline. This is often used in traditional data centers to enable maintenance such as patching/upgrades to occur without interrupting application availability. By taking one server/instance at a time offline, the other servers/instances remain in service, serving up requests. In an on-demand environment this is of course used to keep costs controlled by only keeping the instances necessary for current capacity online. What you need to understand is whether that process is manual, i.e. you need to push a button to begin the process of bleeding off connections, or automated. If the latter, then you'll need to ask about what variables you can use to create a policy to trigger the process. Variables might be the number of total connections, requests, users, or bandwidth. It could also, if the load balancing solution is "smart enough", include application performance (response time) or even time-of-day variables.

Question: How do connections quiesce (bleed) off – manually or automatically based on thresholds?
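On a BIG-IP, for example, the manual version of this process is a one-liner; a hedged tmsh sketch in which the pool and member names are assumptions:

    # stop new sessions to one member; existing and persisted connections drain off
    modify ltm pool web_pool members modify { 10.0.0.11:80 { session user-disabled } }
    # once drained, the member can be patched, taken offline, or de-provisioned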
#1 FAILOVER

We talk a lot about the cloud as a means to scale applications, but we don't very often mention availability. Availability usually means there needs to be some sort of "failover" mechanism in place, in case an application or server fails. Applications crash, hardware fails; these things happen. What should not happen, however, is that the application becomes unavailable because of these types of inevitable problems. If one instance suddenly becomes unavailable, what happens? That's the question you need to ask. If there is more than one instance running at that time, then any load balancing solution worth its salt will direct subsequent requests to the remaining available instances. But if there are no other instances running, what happens? If the provisioning process is manual, you may need to push a button and wait for the new instance to come online. If the provisioning process is not manual, then you need to understand how long it will take for the automated system to bring a new instance online, and perhaps ask about the ability to serve up customized "apology" pages that reassure visitors that the site will return shortly.

Question: What kind of failover options are available (if any)?
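The "apology page" idea is straightforward to sketch as an iRule; a minimal example assuming an HTTP virtual server whose default pool has lost all of its members:

    when HTTP_REQUEST {
        # if no pool members are up, answer with a friendly static page
        if { [active_members [LB::server pool]] < 1 } {
            HTTP::respond 503 content "<html><body>We are restoring service. Please try again shortly.</body></html>" "Content-Type" "text/html" "Retry-After" "120"
        }
    }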
THERE ARE NO STUPID QUESTIONS

Folks seem to talk and write as if cloud computing relieves IT staff (customers) of the need to understand the infrastructure and architecture of the environments in which applications will be deployed. Because there is an increasingly symbiotic relationship between applications and their infrastructure – both network and application network – this fallacy needs to be exposed for the falsehood it is. It is more important today, with cloud computing, than it ever has been for all of IT – application, network, and security – to understand the infrastructure and how it works together to deliver applications. That means there are no stupid questions when it comes to cloud computing infrastructure. There are certainly other questions you can – and should – ask a potential provider or vendor in order to make the right decision about where to deploy your applications. Because when it comes down to it, it's your application, and your customers, partners, and users are not going to be calling/e-mailing/tweeting the cloud provider; they're going to be gunning for you if things don't work as expected. Getting the answers to these five questions will provide a better understanding of how your application will handle unexpected failures, allow you to plan appropriately for maintenance/upgrades/patches, and help you formulate the proper policies for dealing with the nuances of a load-balanced application environment. Don't just ask about product/vendor and hope that will answer your questions. Sure, your cloud provider may be using F5 or another advanced application delivery platform, but that doesn't mean they're utilizing the product in a way that offers the features you need to ensure your application is always available. So dig deeper and ask questions. It's your application, and it's your responsibility, no matter where it ends up running.

Related posts:
- And the Killer App for Private Cloud Computing Is…
- Not All Virtual Servers are Created Equal
- Infrastructure 2.0: The Feedback Loop Must Include Applications
- Cloud Computing: Is your cloud sticky? It should be
- The Disadvantages of DSR (Direct Server Return)
- Cloud Computing: Vertical Scalability is Still Your Problem
- Server Virtualization versus Server Virtualization
"Always Send Cookie" problems?

Is there a downside to choosing "Always Send Cookie" in an "HTTP Cookie Insert" persistence profile? I am troubleshooting an issue with Cloudflare and a potential issue with my current F5 settings. The text below is specifically called out by Cloudflare (re: the F5), but I am not 100% sure that it correlates to the "Always Send Cookie" setting. Per Cloudflare, from the session cookies section of https://support.cloudflare.com/hc/en-us/articles/212794707-General-Best-Practices-for-Load-Balancing-with-Cloudflare:

"If using HTTP cookies to track and bind user sessions to a specific application server at the load balancer, it is best to configure the load balancer to parse HTTP requests by cookie headers and direct each request to the correct application server even if HTTP requests share the same TCP connection due to keep-alive. For example: F5 BIG-IP load balancers will set a session cookie (if none exists) at the beginning of a TCP connection and then ignore all cookies passed on subsequent HTTP requests made on the same TCP socket. This tends to break session affinity because Cloudflare will send multiple different HTTP sessions on the same TCP connection. (HTTP cookie-based session affinity)"
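For what it's worth, the behavior Cloudflare describes (one persistence decision per TCP connection) is usually addressed on BIG-IP by adding a OneConnect profile, which causes load balancing and persistence to be re-evaluated for each HTTP request on a shared connection. A hedged tmsh sketch; the profile and virtual server names are assumptions, and how this interacts with "Always Send Cookie" in a given setup would need testing:

    # re-evaluate persistence per HTTP request, not once per TCP connection
    create ltm profile one-connect cf_oneconnect { source-mask 255.255.255.255 }
    modify ltm virtual vs_app { profiles add { cf_oneconnect } }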
list element in quotes followed by <something> instead of space

The following iRule occasionally produces errors:

    when HTTP_REQUEST {
        foreach cookie [HTTP::cookie names] {
            switch -glob -- $cookie {
                "xxxxx" -
                "xxxx" -
                "xxxx" -
                "xyxyxyx" -
                "xxyxyx" -
                "xyxyxy*" {
                    HTTP::cookie remove $cookie
                }
            }
        }
    }

I was scrolling through logs and noticed this:

    - list element in quotes followed by "de25df2-103438293-"" instead of space while executing "foreach cookie [HTTP::cookie names] { switch -glob -- $cookie { "xxxxx" - "xxxx" - "xxx" - "xxxx" - "xxx" - "xxxxxxx" { ..."

Initially I tried to use the catch command to prevent execution failures. Mostly I wanted to see the cookie name. I wasn't able to figure out where to place it, however, as the error seems to fire on the line that initiates the switch. I was able to prevent this error from occurring by doing the following, but it seems less efficient:

    when HTTP_REQUEST {
        set cookies [HTTP::cookie names]
        set cookie_list [split $cookies " "]
        foreach cookie $cookie_list {
            switch -glob -- $cookie {
                "xxxx" -
                "xxxxx" -
                "xxxxx" -
                "xxxx" -
                "xxxx*" -
                "xxxxxx*" {
                    HTTP::cookie remove $cookie
                }
            }
        }
    }

I would like to understand how to catch the error, or figure out why the native [HTTP::cookie names] list doesn't parse well unless I split it out specifying the " " character.
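On the "where does catch go" question: the error is raised when foreach tries to interpret the return value of [HTTP::cookie names] as a Tcl list, which can fail if a cookie name contains a quote character, so the catch has to wrap the whole loop rather than any single line inside it. A hedged sketch (the glob patterns are placeholders, as in the original):

    when HTTP_REQUEST {
        if { [catch {
            foreach cookie [HTTP::cookie names] {
                switch -glob -- $cookie {
                    "xxxx*" { HTTP::cookie remove $cookie }
                }
            }
        } err] } {
            # log the Tcl error plus the raw header that triggered it
            log local0. "cookie cleanup failed: $err (Cookie header: [HTTP::header "Cookie"])"
        }
    }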
How to maintain Cookie persistence across web application with multiple ports (i.e. 80, 8443)

We have an F5 LB. There are two back-end servers that sit behind it. SSL termination is at the LB. Mapping to the back-end servers is 443 to 80, 80 to 80, and 8443 to 8443. That is, on the back-end servers we have ports 80 and 8443 open.

The LB was first set up with source IP persistence. We just moved to cookie persistence to alleviate some issues with IP addresses switching mid-session and the like. The cookie persistence uses a session cookie, meaning no expiry; the cookie should expire when the user closes the browser. The cookie is also encrypted with a passphrase to comply with security practices (not sure why F5 would set the cookie value to some obfuscated value that maps to the back-end server IP and port, since that is apparently not very difficult to un-obfuscate).

In testing the web application behind the LB, everything seemed to be OK. Then we got a report from users about a particular piece of the web application, which loads pages over multiple ports (i.e. 80, 8443). What I see happening is that, whether the request first starts with 80 or 8443, the LB cookie value is generated again when the page is requested from the other port. It doesn't happen 100% of the time, but it happens frequently. Then the application reports that the application session is invalid. My guess is that even if the cookie value changes, if the request happens to hit the same back-end server there will be no issue. However, if the request happens to hit the other back-end server, the web application will see the request as a new application session and, thus, report that the application session is invalid.

What I think might fix it is maintaining the cookie persistence across the multiple ports we have configured (i.e. 80, 8443). The problem is I'm not quite sure how to do that. That's why I'm here: to ask people who are smarter and more experienced than me and may know some solutions that would work. I would prefer to keep using the HTTP Cookie Insert method, although I understand cookie hash does have options similar to source IP persistence, such as match across virtual servers and the like. I don't know how much that would help me here, if at all.

Can I use an iRule? If so, what might that look like? Maybe an iRule that checks if there is an existing LB cookie and, if the request is coming from 80 but going to 8443 or vice versa, inserts the cookie into said request such that no new cookie is generated and the same cookie is shared across port 80 and port 8443. Sounds good in theory, but I'm not even sure of the first place to start to even attempt it.

I hope what I'm asking is clear. If it isn't, please feel free to ask me to clarify. I want to maintain the LB cookie across multiple ports. Example: User goes to testsite.com/index.html. LB generates a cookie with value 1234. User is sent to back-end server 1. User then goes to testsite.com:8443/index2.html. LB will generate a new cookie with a different value, let's say 5678, and it may send the user to back-end server 1 or 2. If it goes to 1, the web app should be OK. If the request goes to 2, the web app will complain because the web app session is on 1. I want testsite.com and testsite.com:8443 to both have the same exact cookie, which, in this example, would be 1234. Does that make sense? Any help would be appreciated. Thank you in advance.
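One direction that might fit, offered as a hedged sketch rather than a tested answer: universal persistence keyed on the application's own session cookie, with Match Across Services enabled so that records created on one virtual server also match the other even though the back-end ports differ (80 vs. 8443). The profile name, rule name, and timeout are assumptions; the referenced iRule would be a JSESSIONID-style persist rule like the one shown earlier on this page, keyed on the application's session cookie:

    # universal persistence profile driven by an iRule, shared across services
    create ltm persistence universal app_session_persist { rule app_session_rule match-across-services enabled match-across-virtuals enabled timeout 3600 }

This sidesteps the per-port BIGipServer cookie entirely, because the node choice follows the application session id rather than a cookie minted separately by each virtual server.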
Priority Group Activation Failback with HTTP Cookie Insert

Hello All, Can someone help me with the below issue?

We have a pool with 3 members: 2 members have high priority (round robin) and 1 member has low priority. When both primary members go down, the low-priority member should take over the traffic. We have Cookie Insert persistence enabled on the virtual server, with "Expiration: Session Cookie" set in the persistence profile. What we observed:

- When both primary members were made down, the low-priority member took over the traffic.
- When both primary members came back up, the traffic continued to go to the low-priority member.
- When the browser tab was closed and the URL was accessed in a new tab, the traffic still went to the low-priority member.
- When the browser window was closed and the URL was accessed in a new tab, the traffic still went to the low-priority member.
- When the browser cookies were deleted and the URL was accessed in a new tab, the traffic was taken over by the high-priority members.

This behavior is not desired, and we need to force the LB to use the high-priority members as soon as they come back up. When a user tries the connection from a new browser or new tab, the traffic should go to the high-priority pool members. Please let me know how I can achieve the desired behavior. Regards
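This is expected behavior: an existing persistence cookie wins over priority group re-selection for as long as the browser keeps presenting it, and browsers often keep session cookies alive across tabs and even window restores. One possible approach, sketched and untested: split the priorities into two pools and strip the persistence cookie once the primary pool is healthy again. The pool names, and the insert-mode default cookie name of the form BIGipServer<pool name>, are assumptions about your configuration:

    when HTTP_REQUEST {
        if { [active_members primary_pool] > 0 } {
            # primary members are back: drop the cookie pinning clients
            # to the backup pool so a fresh LB decision is made
            if { [HTTP::cookie exists "BIGipServerbackup_pool"] } {
                HTTP::cookie remove "BIGipServerbackup_pool"
            }
            pool primary_pool
        } else {
            pool backup_pool
        }
    }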
What happens if the ASM sees a TS cookie it did not set?

I have a configuration and I am wondering if it is causing "Timestamp Expired" cookie violations. I have a configuration where a TS cookie can pass back through a policy that did not set it. In this config we have different policies for different URLs. In the response, the Set-Cookie is for a 'higher' point in the domain tree. For example, a policy mapped to xyz.abcdef.com sets the cookie for .abcdef.com. Another request is made to a policy mapped to 123.abcdef.com, and in that request the TS cookie from the previous request is included; this second policy did not set it, so it is not aware of it.

- Policy one sets cookie TSxxxxxx in domain abcdef.com from a request to xyz.abcdef.com.
- Policy two gets a request to 123.abcdef.com and receives the TS cookie for the parent domain abcdef.com.

Would this create a Cookie Violation - Expired Timestamp? I am thinking the ASM recognizes it as a TS cookie but also knows it was not set by the policy that is inspecting it. Any clues would be great. Graham