How to use Ansible with Cisco routers
Quick Intro

For those who don't know, there is an Ansible plugin called network_cli to retrieve network device configuration for backup, inspection and even execute commands. So, let's assume we have Ansible already installed and 2 routers:

I used Debian Linux here and I had to install Python 3:

I've also installed pip, as we can see above, because I wanted to install a specific version of Ansible (2.5+) that would allow me to use the network_cli plugin:

Note: we can list available Ansible versions by just typing pip install ansible==

I've also created a user named ansible:

Edited the Linux sudoers file with the visudo command:

And added the ansible user permission to run root commands without prompting for a password, so my file looked like this:

Quick Set Up

This is my directory structure:

These are the files I used for this lab test (a consolidated sketch of them appears at the end of this article):

Note: we can replace cisco1.rodrigo.example with an IP address too.

In Ansible, there is a default config file (ansible.cfg) where we store the global config, i.e. how we want Ansible to behave. We also keep the list of our hosts in an inventory file (inventory.yml here). There is a default folder (group_vars) where we can store variables that would apply to any router we run Ansible against, and in this case it makes sense as my router credentials are the same. Lastly, retrieve_backup.yml is my actual playbook, i.e. where I tell Ansible what to do.

Note: I manually logged in to cisco1.rodrigo.example and cisco2.rodrigo.example to populate the ssh known_hosts files, otherwise Ansible complains these hosts are untrusted.

Populating our Playbook file retrieve_backup.yml

Let's say we just want to retrieve the OSPF configuration from our Cisco routers. We can use ios_command to type in any command to the Cisco router and use register to store the output in a variable:

Note: be careful with the indentation. I used 2 spaces here.

We can then copy the content of the variable to a file in a given directory. In this case, we copied whatever is in the ospf_output variable to the ospf_config directory.

From the Playbook file above we can work out that variables are referenced between {{ }}, and we're probably wondering why we need to append stdout[0] to ospf_output, right? If you know Python, you might be interested in knowing a bit more about what's going on under the hood, so I'll clarify things a bit more here. The variable ospf_output is actually a dictionary and stdout is one of its keys. In reality, ospf_output.stdout could be represented as ospf_output['stdout']. We add the [0] because the object retrieved by the stdout key is not a string. It's a list! And [0] just represents the first object in the list.

Executing our Playbook

I'll create ospf_config first:

And we execute our playbook by issuing the ansible-playbook command:

Now let's check if our OSPF config was retrieved:

We can pretty much type in any IOS command we'd type in a real router, either to configure it or to retrieve its configuration. We could also append the date to the file name, but that's out of the scope of this article.

That's it for now.
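One last thing: to tie the lab together, here's a minimal sketch of what the files described above could look like. The hostnames match the lab, but the credentials, module options and paths are illustrative placeholders – double-check them against the Ansible 2.5 documentation for network_cli and ios_command before relying on them.

# ansible.cfg - global behaviour
[defaults]
inventory = inventory.yml

# inventory.yml - the two lab routers in a 'routers' group (one possible layout)
all:
  children:
    routers:
      hosts:
        cisco1.rodrigo.example:
        cisco2.rodrigo.example:

# group_vars/routers.yml - connection details shared by every router (values are placeholders)
ansible_connection: network_cli
ansible_network_os: ios
ansible_user: ansible
ansible_password: secret

# retrieve_backup.yml - run the show command, then write the output to ospf_config/
---
- name: Retrieve OSPF configuration
  hosts: routers
  gather_facts: no
  tasks:
    - name: Grab the OSPF section of the running config
      ios_command:
        commands:
          - show running-config | section ospf
      register: ospf_output

    - name: Copy ospf_output to the ospf_config directory
      copy:
        content: "{{ ospf_output.stdout[0] }}"
        dest: "ospf_config/{{ inventory_hostname }}.txt"

Running ansible-playbook -i inventory.yml retrieve_backup.yml should then leave one file per router in ospf_config/.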
Bidirectional Forwarding Detection (BFD) Protocol Cheat Sheet

Definition

This is a protocol initially described in RFC5880, with IPv4/IPv6 specifics in RFC5881. I would say this is an aggressive 'hello-like' protocol with shorter timers, but very lightweight on the wire and requiring very little processing, as it is designed to be implemented in the forwarding plane (although the RFC does not forbid implementing it in the control plane). It also contains a feature called Echo that brings CPU processing cycles down to roughly ZERO, as it literally just 'loops' BFD control packets sent from the peer back to them without even 'touching' (processing) them. BFD helps routing protocols detect peer failure at the sub-second level, and BIG-IP supports it on all its routing protocols. On BIG-IP it is control-plane independent, as it is TMM that takes care of sending/receiving the BFD unicast probes (yes, no multicast!), with BIG-IP's Advanced Routing Module® being responsible only for its configuration (of course!) and for receiving the state information from TMM that is displayed in show commands. The BIG-IP control-plane daemon that communicates with TMM is oamd. This daemon starts when BFD is enabled in the route domain, like any other routing protocol.

BFD Handshake Explained

218: BFD was configured on the Cisco Router but not on BIG-IP, so the neighbour signals BIG-IP the session state is Down, with no flags set
219: I had just enabled BFD on BIG-IP; session state is now Init and only the Control Plane Independent flag is set¹
220: The Poll flag is set to validate initial bidirectional connectivity (expecting the Final flag set in response)
221: BIG-IP sets the Final flag and the handshake is complete²

¹ The Control Plane Independent flag is set because BFD is not actively performed by BIG-IP's control plane. BIG-IP's BFD control-plane daemon (oamd) just signals TMM what BFD sessions are required, and TMM takes care of sending/receiving all BFD control traffic and reports session state back to the Advanced Routing Module's daemon.
² Packets 222-223 are just to show that after the handshake is finished all flags are cleared unless there is another event that triggers them.

Packet 218 is how the Cisco Router sees BIG-IP when BFD is not enabled. The Control Plane Independent flag on BIG-IP remains, though, for the reasons already explained above.

Protocol fields

Diagnostic codes

0 (No Diagnostic): Typically seen when the BFD session is UP and there are no errors.
1 (Control Detection Time Expired): The BFD Detect Timer expired and the session was marked down.
2 (Echo Function Failed): BFD Echo packet loop verification failed; the session is marked down with this diagnostic code.
3 (Neighbor Signaled Session Down): If either the neighbour's advertised state or our local state is down you will see this diagnostic code.
4 (Forwarding Plane Reset): When the forwarding plane (TMM) is reset and the peer should not rely on BFD, the session is marked down.
5 (Path Down): In Demand mode, an external application can signal BFD that the path is down, so we can set the session down using this code.
6 (Concatenated Path Down):
7 (Administratively Down): Session is marked down by an administrator.
8 (Reverse Concatenated Path Down):
9-31: Reserved for future use.

BFD verification 'show' commands

³ Type an IP address to see a specific session.

Modes

Asynchronous (default): hello-like mode where BIG-IP periodically sends (and receives) BFD control packets, and if the control detection timer expires, the session is marked as down. It uses UDP port 3784.
Demand: The BFD handshake is completed, but no periodic BFD control packets are exchanged, as this mode assumes the system has its own way of verifying connectivity with the peer and may trigger BFD verification on demand, i.e. when it needs to use it according to its implementation. BIG-IP currently does not support this mode.

Asynchronous + Echo Function: When enabled, TMM literally loops BFD echo-specific control packets on UDP port 3785 sent from peers back to them without processing them, as if it wasn't enough that this protocol is already lightweight. In this mode, a less aggressive timer (> 1 second) should be used for the regular BFD control packets over port 3784, and the more aggressive timer is used by the echo function. BIG-IP currently does not support this mode.

Header Fields

Protocol Version: BFD version used. The latest one is v1 (RFC5880).
Diagnostic Code: BFD error code for diagnostics purposes.
Session State: How the transmitting system sees the session state, which can be AdminDown, Down, Init or Up.
Message Flags: Additional session configuration or functionality (e.g. a flag that says authentication is enabled).
Detect Time Multiplier: Informs the remote peer the BFD session is supposed to be marked down if Desired Min TX Interval multiplied by this value is reached without a response.
Message Length (bytes): Length of the BFD Control packet.
My Discriminator: For each BFD session, each peer uses a unique discriminator to differentiate multiple sessions.
Your Discriminator: When BIG-IP receives a BFD control message back from its peer, we add the peer's My Discriminator to Your Discriminator in our header.
Desired Min TX Interval (microseconds): Fastest we can send BFD control packets to the remote peer (no less than the value configured here).
Required Min RX Interval (microseconds): Fastest we can receive BFD control packets from the remote peer (no less than the value configured here).
Required Min Echo Interval (microseconds): Fastest we can loop BFD echo packets back to the remote system (0 means the Echo function is disabled).

Session State

AdminDown: Administratively forced down by command.
Down: Either the control detection time expired in an already established BFD session or it never came up. If the probing time (min_tx) is set to 100ms, for example, and the multiplier is 3, then no response after 300ms makes the system go down.
Init: Signals a desire to bring the session up at the beginning of the BFD handshake.
Up: Indicates the session is Up.

Message Flags

Poll: The Poll flag is just a 'ping' that requires the peer box to respond with the Final flag set. In the BFD handshake, as well as in Demand mode, a Poll message is a request to validate bidirectional connectivity.
Final: Sent in response to a packet with the Poll bit set.
Control Plane Independent: Set if BFD can continue to function if the control plane is disrupted¹
Authentication Present: Only set if authentication is being used.
Demand: If set, it is implied that periodic BFD control packets are no longer sent and another mechanism (on demand) is used instead.
Multipoint: Reserved for future use of a point-to-multipoint extension. Should be 0 on both sides.

¹ This is the case for BIG-IP, as BFD is implemented in the forwarding plane (TMM).

BFD Configuration

Configure the desired transmit and receive intervals as well as the multiplier on BIG-IP, and on the Cisco Router (sketched below). You will typically configure the above regardless of the routing protocol used.
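As a rough sketch of what that interval/multiplier configuration typically looks like - interface names, timer values and the route domain are hypothetical, and the exact keywords are worth verifying on your versions - on BIG-IP the timers are set per interface from the dynamic routing shell, imish:

imish -r 0
enable
configure terminal
interface external
 bfd interval 100 minrx 100 multiplier 3

And the matching interface-level timers on the Cisco Router:

interface GigabitEthernet0/1
 bfd interval 100 min_rx 100 multiplier 3

With these values the detect time works out to 100ms × 3 = 300ms, matching the Down example in the Session State list above.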
BFD BGP Configuration

And on the Cisco Router:

BFD OSPFv2/v3 Configuration

BFD ISIS Configuration

BFD RIPv1/v2 Configuration

BFD Static Configuration

All interfaces no matter what:

Specific interface only:

Tie BFD configuration to a static route:
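Across all of the protocol sections above, the pattern is the same: the timers live on the interface, and each protocol simply opts in. A hedged sketch of the most common cases, using placeholder ASNs, process IDs and addresses (IOS-style syntax, which BIG-IP's ZebOS-based imish largely mirrors - verify per platform):

! BGP - tear the session down as soon as BFD declares the neighbour down
router bgp 65000
 neighbor 10.10.10.2 fall-over bfd

! OSPF - enable BFD on all OSPF-enabled interfaces
router ospf 1
 bfd all-interfaces

! Static route (IOS) - tie the route's validity to a BFD session
ip route static bfd GigabitEthernet0/1 10.10.10.2
ip route 0.0.0.0 0.0.0.0 10.10.10.2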
The Order of (Network) Operations

Thought those math rules you learned in 6th grade were useless? Think again… some are more applicable to the architecture of your data center than you might think.

Remember back when you were in the 6th grade, learning about the order of operations in math class? You might recall that you learned that the order in which mathematical operators are applied can have a significant impact on the result. That's why we learned there's an order of operations – a set of rules – that we need to follow in order to ensure that we always get the correct answer when performing mathematical equations.

Rule 1: First perform any calculations inside parentheses.
Rule 2: Next perform all multiplications and divisions, working from left to right.
Rule 3: Lastly, perform all additions and subtractions, working from left to right.

Similarly, the order in which network and application delivery operations are applied can dramatically impact the performance and efficiency of the delivery of applications – no matter where those applications reside.
What Does Mobile Mean, Anyway?

We tend to assume characteristics upon hearing the term #mobile. We probably shouldn't…

There are – according to about a bazillion studies – 4 billion mobile devices in use around the globe. It is interesting to note that nearly everyone who cites this statistic and then attempts to break it down into useful data (usually for marketing) almost always does so based on OS or device type – but never, ever, ever based on connectivity. Consider the breakdown offered by W3C for October 2011. Device type is the chosen taxonomy, with operating system being the alternative view. Unfortunately, aside from providing useful trending on device type for application developers and organizations, this data does not provide the full range of information necessary to actually make these devices, well, useful.

Consider that my Blackberry can either connect to the Internet via 3G or WiFi. When using WiFi my user experience is infinitely better than via 3G and, if one believes the hype, will be even better once 4G is fully deployed. Also not accounted for is the ability to pair my Blackberry Playbook to my Blackberry phone and connect to the Internet via that (admittedly convoluted) chain of connectivity: Bluetooth to 3G or WiFi (which in my house has an additional chain on the LAN and then back out through a fairly unimpressive so-called broadband connection). But I could also be using the Playbook's built-in WiFi (after trying both, this is the preferred method, but in a pinch…).

You also have to wonder how long it will be before "mobile" is the GPS in your car, integrated with services via Google Maps or Bing to "find nearby" while you're driving. Or, for some of us an even better option, find the nearest restroom off this highway because the four-year-old has to use it – NOW.

Trying to squash "mobile" into a little box is about as useful as trying to squash "cloud" into a bigger box. It doesn't work. The variations in actual implementation of communication channels across everything that is "mobile" require different approaches to mitigating operational risk, just as you approach SaaS differently than IaaS differently than PaaS. Defining "mobile" by its device characteristics is only helpful when you're designing applications or access management policies. In order to address real user-experience issues you have to know more about the type of connection over which the user is connecting – and more.

CONTEXT is the NEW BLACK in MOBILE

This is not to say that device type is not important. It is, and luckily device type (as well as browser and often operating system) is an integral part of the formula we all call "context." Context is the combined set of variables that make it possible to interpret any given connection with respect to its unique client, server, network, and application needs. It's what allows organizations to localize, to hyperlocalize, and to provide content based on location. It's what enables the ability to ensure performance whether over 3G, 4G, LAN, or congested WAN connections. It's the agility to route application requests to the best server-side location based on a combination of client location, connection type, and current capacity across multiple sites – whether cloud, managed hosting, or secondary data centers. Context is the 'secret sauce' to successful application delivery. It's the ingredient that makes it possible to make the right decisions at the right time based on current conditions that address operational risk – performance, security, and availability.
Context is what makes the application delivery tier of the modern data center able to adapt dynamically. It's the shared data that forms the foundation for the collaboration between application delivery network infrastructure and provisioning systems, both local and in the cloud, enabling on-demand scalability and, at some point, instant mobility in an inter-cloud architecture. Context is a key component of an agile data center, because it is only by inspecting all the variables that you can interpret them in a way that leads to optimal decisions with respect to the delivery of an application, which includes choosing the right application instance whether it's deployed remotely in a cloud computing environment or locally on an old-fashioned piece of hardware.

Knowing what device a given request is coming from is not enough, especially when the connection type and conditions cannot be assumed. The same user on the same device may connect via two completely different networking methods within the same day – or even the same hour. It is the network connection which becomes a critical decision point around which to apply proper security and performance-related policies, as different networks vary in their conditions.

So while we all like to believe that our love of our chosen mobile platform is vindicated by statistics, we need to dig deeper when we talk about mobile strategies within the walls of IT. The device type is only one small piece of a much larger puzzle called context. "Mobile" is as much about the means of connectivity as it is the actual physical characteristics of a small untethered device. We need to recognize that, and incorporate it into our mobile delivery strategies sooner rather than later.

[Updated: This post was updated 2/17/2012 – the graphic was updated to reflect the proper source of the statistics, w3schools]

Long-distance live migration moves within reach
HTML5 Web Sockets Changes the Scalability Game
At the Intersection of Cloud and Control…
F5 Friday: The Mobile Road is Uphill. Both Ways
More Users, More Access, More Clients, Less Control
Cloud Needs Context-Aware Provisioning
Call Me Crazy but Application-Awareness Should Be About the Application
The IP Address – Identity Disconnect
The Context-Aware Cloud
HTTP Pipelining: A security risk without real performance benefits

Everyone wants web sites and applications to load faster, and there's no shortage of folks out there looking for ways to do just that. But all that glitters is not gold, and not all acceleration techniques actually do all that much to accelerate the delivery of web sites and applications. Worse, some actually incur risk in the form of leaving servers open to exploitation.

A BRIEF HISTORY

Back in the day when HTTP was still evolving, someone came up with the concept of persistent connections. See, in ancient times – when administrators still wore togas in the data center – HTTP 1.0 required one TCP connection for every object on a page. That was okay, until pages started comprising ten, twenty, and more objects. So someone added an HTTP header, Keep-Alive, which basically told the server not to close the TCP connection until (a) the browser told it to or (b) it didn't hear from the browser for X number of seconds (a time out). This eventually became the default behavior when HTTP 1.1 was written and became a standard. I told you it was a brief history.

This capability is known as a persistent connection, because the connection persists across multiple requests. This is not the same as pipelining, though the two are closely related. Pipelining takes the concept of persistent connections, then ignores the traditional request–reply relationship inherent in HTTP and throws it out the window.

The general line of thought goes like this: "Whoa. What if we just shoved all the requests from a page at the server and then waited for them all to come back rather than doing it one at a time? We could make things even faster!" Tada! HTTP pipelining.

In technical terms, HTTP pipelining is initiated by the browser by opening a connection to the server and then sending multiple requests to the server without waiting for a response. Once the requests are all sent, the browser starts listening for responses. The reason this is considered an acceleration technique is that by shoving all the requests at the server at once you essentially save the RTT (Round Trip Time) on the connection waiting for a response after each request is sent.

WHY IT JUST DOESN'T MATTER ANYMORE (AND MAYBE NEVER DID)

Unfortunately, pipelining was conceived of and implemented before broadband connections were widely utilized as a method of accessing the Internet. Back then, the RTT was significant enough to have a negative impact on application and web site performance, and the overall user experience was improved by the use of pipelining. Today, however, most folks have a comfortable speed at which they access the Internet, and the RTT impact on most web applications' performance, despite the increasing number of objects per page, is relatively low. There is no arguing, however, that some reduction in time to load is better than none. Too, anyone who's had to access the Internet via high-latency links can tell you anything that makes that experience faster has got to be a Good Thing.

So what's the problem? The problem is that pipelining isn't actually treated any differently on the server than regular old persistent connections. In fact, the HTTP 1.1 specification requires that a "server MUST send its responses to those requests in the same order that the requests were received." In other words, the responses are returned serially, despite the fact that some web servers may actually process those requests in parallel.
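To picture what that means on the wire, consider a hypothetical pipelined connection where a slow dynamic request happens to be sent first. Both requests go out back-to-back, and the spec forces the responses back in the same order:

GET /app/report HTTP/1.1
Host: www.example.com

GET /logo.png HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK          <- response to /app/report; blocks the connection while the server builds it
...
HTTP/1.1 200 OK          <- response to /logo.png; ready immediately, but queued behind the report
...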
Because the server MUST return responses in the same order the requests were received, it has to do some extra processing to ensure compliance with this part of the HTTP 1.1 specification. It has to queue up the responses and make certain they are returned properly, which essentially negates the performance gained by reducing the number of round trips using pipelining. Depending on the order in which requests are sent, if a request requiring particularly lengthy processing – say a database query – were sent relatively early in the pipeline, this could actually cause a degradation in performance because all the other responses have to wait for the lengthy one to finish before they can be sent back.

Application intermediaries such as proxies, application delivery controllers, and general load balancers can and do support pipelining, but they, too, will adhere to the protocol specification and return responses in the proper order according to how the requests were received. This limitation on the server side actually inhibits a potentially significant boost in performance, because we know that processing dynamic requests takes longer than processing a request for static content. If this limitation were removed it is possible that the server would become more efficient and the user would experience non-trivial improvements in performance. Or, if intermediaries were smart enough to rearrange requests such that their execution was optimized (I seem to recall I was required to design and implement a solution to a similar example in graduate school), then we'd maintain the performance benefits gained by pipelining. But that would require an understanding of the application that goes far beyond what even today's most intelligent application delivery controllers are capable of providing.

THE SILVER LINING

At this point it may be fairly disappointing to learn that HTTP pipelining today does not result in as significant a performance gain as it might at first seem to offer (except over high-latency links like satellite or dial-up, which are rapidly dwindling in usage). But that may very well be a good thing. As miscreants have become smarter and more intelligent about exploiting protocols and not just application code, they've learned to take advantage of the protocol to "trick" servers into believing their requests are legitimate, even though the desired result is usually malicious. In the case of pipelining, it would be a simple thing to exploit the capability to enact a layer 7 DoS attack on the server in question. Because pipelining assumes that requests will be sent one after the other and that the client is not waiting for the response until the end, the server would have a difficult time distinguishing between someone attempting to consume resources and a legitimate request. Consider that the server has no understanding of a "page". It understands individual requests. It has no way of knowing that a "page" consists of only 50 objects, and therefore a client pipelining requests for the maximum allowed – by default 100 for Apache – may not be seen as out of the ordinary. Several clients opening connections and pipelining hundreds or thousands of requests every second, without caring if they receive any of the responses, could quickly consume the server's resources or available bandwidth and result in a denial of service to legitimate users.
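On Apache, for example, there's no dedicated pipelining switch; the practical lever is the keep-alive configuration that pipelining rides on. A hedged sketch (the 100-request ceiling is Apache's stock default mentioned above; the other values are illustrative, not prescriptive):

# httpd.conf
KeepAlive On
MaxKeepAliveRequests 100   # lower this to bound how many requests one connection may issue
KeepAliveTimeout 5         # reap idle persistent connections quickly
# Or, more bluntly: KeepAlive Off disables persistent connections - and pipelining with them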
So perhaps the fact that pipelining is not really all that useful to most folks is a good thing, as server administrators can disable the feature without too much concern and thereby mitigate the risk of the feature being leveraged as an attack method against them. Pipelining as it is specified and implemented today is more of a security risk than it is a performance enhancement. There are, however, tweaks to the specification that could be made in the future that might make it more useful. Those tweaks do not address the potential security risk, however, so perhaps, given that there are so many other optimizations and acceleration techniques that can be used to improve performance while incurring no measurable security risk, we simply let sleeping dogs lie.

IMAGES COURTESY WIKIPEDIA COMMONS
Big-IP and ADFS Part 2 - APM: An Alternative to the ADFS Proxy

So let's talk Application Delivery Controllers, (ADC). In part one of this series we deployed both an internal ADFS farm as well as a perimeter ADFS proxy farm using the Big-IP's exceptional load balancing capabilities to provide HA and scalability. But there's much more the Big-IP can provide to the application delivery experience. Here in part 2 we'll utilize the Access Policy Manager, (APM) module as a replacement for the ADFS Proxy layer. To illustrate this approach, we'll address one of the most common use cases: ADFS deployment to federate with and enable single sign-on to Microsoft Office 365 web-based applications.

The purpose of the ADFS Proxy server is to receive and forward requests to ADFS servers that are not accessible from the Internet. As noted in part one, for high availability this typically requires a minimum of two proxy servers as well as an additional load balancing solution, (F5 Big-IPs of course). By implementing APM on the F5 appliance(s) we not only eliminate the need for these additional servers but, by implementing pre-authentication at the perimeter and advanced features such as client-side checks, (antivirus validation, firewall verification, etc.), arguably provide for a more secure deployment.

Assumptions and Product Deployment Documentation - This deployment scenario assumes the reader has general administrative knowledge of the BIG-IP LTM module and a basic understanding of the APM module. If you want more information or guidance please check out F5's support site, ASKF5.

The following diagram shows a typical internal and external client access AD FS to Office 365 process flow, (used for passive-protocol, "web-based" access).

1. Both clients attempt to access the Office 365 resource;
2. Both clients are redirected to the resource's applicable federation service, (Note: This step may be skipped with active clients such as Microsoft Outlook);
3. Both clients are redirected to their organization's internal federation service;
4. The AD FS server authenticates the client to Active Directory;
   * Internal clients are load balanced directly to an ADFS server farm member; and
   * External clients are:
     * Pre-authenticated to Active Directory via APM's customizable sign-on page;
     * Authenticated users are directed to an AD FS server farm member.
5. The ADFS server provides the client with an authorization cookie containing the signed security token and set of claims for the resource partner;
6. The client connects to the Microsoft Federation Gateway where the token and claims are verified. The Microsoft Federation Gateway provides the client with a new service token; and
7. The client presents the new cookie with included service token to the Office 365 resource for access.

Virtual Servers and Member Pool – Although all users, (both internal and external) will access the ADFS server farm via the same Big-IP(s), the requirements and subsequent user experience differ. While internal authenticated users are load balanced directly to the ADFS farm, external users must first be pre-authenticated, (via APM) prior to being allowed access to an ADFS farm member. To accomplish this, two, (2) virtual servers are used: one for internal access and another dedicated for external access. Both the internal and external virtual servers are associated with the same internal ADFS server farm pool.

INTERNAL VIRTUAL SERVER – Refer to Part 1 of this guidance for configuration settings for the internal ADFS farm virtual server.
EXTERNAL VIRTUAL SERVER – The configuration for the external virtual server is similar to that of the virtual server described in Part 1 of this guidance. In addition, an APM Access Profile, (see highlighted section and settings below) is assigned to the virtual server.

APM Configuration – The following Access Policy Manager, (APM) configuration is created and associated with the external virtual server to provide for pre-authentication of external users prior to their being granted access to the internal ADFS farm. As I mentioned earlier, the APM module provides advanced features such as client-side checks and single sign-on, (SSO) in addition to pre-authentication. Of course this is just the tip of the iceberg. Take a deeper look at client-side checks at AskF5.

AAA SERVER - The ADFS access profile utilizes an Active Directory AAA server.

ACCESS POLICY - The following access policy is associated with the ADFS access profile.

* Prior to presenting the logon page, client machines are checked for the existence of updated antivirus. If the client either lacks antivirus software or does not have updated, (within 30 days) virus definitions, the user is redirected to a mitigation site.
* An AD query and a simple iRule are used to provide single-URL OWA access for both on-premise and Office 365 Exchange users.

SSO CONFIGURATION - The ADFS access portal uses an NTLM v1 SSO profile with multiple authentication domains, (see below). By utilizing multiple SSO domains, clients are required to authenticate only once to gain access to hosted applications such as Exchange Online and SharePoint Online as well as on-premise hosted applications. To facilitate this we deploy multiple virtual servers, (ADFS, Exchange, SharePoint) utilizing the same SSO configuration.

CONNECTIVITY PROFILE – A connectivity profile based upon the default connectivity profile is associated with the external virtual server.

Whoa! That's a lot to digest. But if nothing else, I hope this inspires you to further investigate APM and some of the cool things you can do with the Big-IP beyond load balancing.

Additional Links:
Big-IP and ADFS Part 1 – "Load balancing the ADFS Farm"
Big-IP and ADFS Part 3 - "ADFS, APM, and the Office 365 Thick Clients"
BIG-IP Access Policy Manager (APM) Wiki Home - DevCentral Wiki
Latest F5 Information
F5 News Articles
F5 Press Releases
F5 Events
F5 Web Media
F5 Technology Alliance Partners
F5 YouTube Feed
Big-IP and ADFS Part 4 – "What about Single Sign-Out?"

Why stop at 3 when you can go to 4? Over the past few posts on this ever-expanding topic, we've discussed using ADFS to provide single sign-on access to Office 365. But what about single sign-out? A customer turned me onto Tristan Watkins' blog post that discusses the challenges of single sign-out for browser-based, (WS-Federation) applications when fronting ADFS with a reverse proxy. It's a great blog post and covers the topic quite well, so I won't bother re-hashing it here. However, I would definitely recommend reading his post if you want a deeper dive.

Here's the sign-out process:

1. User selects 'Sign Out' or 'Sign in as Different User', (if using SharePoint Online);
2. The user is signed out of the application;
3. The user is redirected to the ADFS sign-out page; and
4. The user is redirected back to the Microsoft Federation Gateway and the user's tokens are invalidated.

In a nutshell, claims-unaware proxies, (Microsoft ISA and TMG servers, for example) are unable to determine when this process has occurred, and subsequently the proxy session remains active. This in turn will allow access to ADFS, (and subsequently Office 365) without being prompted for new credentials, (not good!). Here's where I come clean with you, dear readers. While the F5 Big-IP with APM is a recognized replacement for the AD FS 2.0 Federation Server Proxy, this particular topic was not even on my radar. But now that it is……

Single Sign-Out with Access Policy Manager

You may have noticed that although the Big-IP with APM is a claims-unaware proxy, I did not include it in the list above. Why, you ask? Well, although the Big-IP is currently "claims-unaware", it certainly is "aware" of traffic that passes through. With the ability to analyze traffic as it flows from both the client and the server side, the Big-IP can look for triggers and act upon them. In the case of the ADFS sign-out process, we'll use the MSISSignOut cookie as our trigger to terminate the proxy session accordingly. During the WS-Federation sign-out process, (used by browser-based applications) the MSISSignOut cookie is cleared out by the ADFS server, (refer to the HttpWatch example below). Once this has been completed, we need to terminate the proxy session. Fortunately, there's an iRule for that. The iRule below analyzes the HTTP response back from the ADFS server and keys off of the MSISSignOut cookie. If the cookie's value has been cleared, the APM session will be terminated. To allow for a clean sign-out process with the Microsoft Federation Gateway, the APM session termination is delayed long enough for the ADFS server to respond. Now, APM's termination can act in concert with the ADFS sign-out process.

when HTTP_RESPONSE {
    # Review server-side responses for reset of WS-Federation sign-out cookie - MSISSignOut.
    # If found assign ADFS sign-out session variable and close HTTP connection
    if {[HTTP::header "Set-Cookie"] contains "MSISSignOut=;"} {
        ACCESS::session data set session.user.adfssignout 1
        HTTP::close
    }
}

when CLIENT_CLOSED {
    # Remove APM session if ADFS sign-out variable exists
    if {[ACCESS::session data get session.user.adfssignout] eq 1} {
        after 5000
        ACCESS::session remove
    }
}

What? Another iRule? Actually, the above snippet can be combined with the iRule we implemented in Part 3, creating a single iRule addressing all the ADFS/Office 365 scenarios.
when HTTP_REQUEST {
    # For external Lync client access all external requests to the
    # /trust/mex URL must be routed to /trust/proxymex. Analyze and modify the URI
    # where appropriate
    HTTP::uri [string map {/trust/mex /trust/proxymex} [HTTP::uri]]

    # Analyze the HTTP request and disable access policy enforcement for WS-Trust calls
    if {[HTTP::uri] contains "/adfs/services/trust"} {
        ACCESS::disable
    }

    # OPTIONAL ---- To allow publishing of the federation service metadata
    if {[HTTP::uri] ends_with "FederationMetadata/2007-06/FederationMetadata.xml"} {
        ACCESS::disable
    }
}

when HTTP_RESPONSE {
    # Review server-side responses for reset of WS-Federation sign-out cookie - MSISSignOut.
    # If found assign ADFS sign-out session variable and close HTTP connection
    if {[HTTP::header "Set-Cookie"] contains "MSISSignOut=;"} {
        ACCESS::session data set session.user.adfssignout 1
        HTTP::close
    }
}

when CLIENT_CLOSED {
    # Remove APM session if ADFS sign-out variable exists
    if {[ACCESS::session data get session.user.adfssignout] eq 1} {
        after 5000
        ACCESS::session remove
    }
}

Gotta love them iRules! That's all for now.

Additional Links:
Big-IP and ADFS Part 1 – "Load balancing the ADFS Farm"
Big-IP and ADFS Part 2 – "APM–An Alternative to the ADFS Proxy"
Big-IP and ADFS Part 3 – "ADFS, APM, and the Office 365 Thick Clients"
BIG-IP Access Policy Manager (APM) Wiki Home - DevCentral Wiki
AD FS 2.0 - Interoperability with Non-Microsoft Products
MS TechNet - AD FS: How to Invoke a WS-Federation Sign-Out
Tristan Watkins - Office 365 Single Sign Out with ISA or TMG as the ADFS Proxy
SuperSizing the Data Center: Big Data and Jumbo Frames

#centaur #40GBE Data center transformation discussions too often overlook the impact on the network – and its necessary transformation.

One of the reasons IPv6 migration is moving slower than perhaps it should, given the urgent need for more IP addresses (to support all those cows connecting to the Internet), is the sheer magnitude of such an effort. Without the ability for IPv6-only nodes to talk to IPv4-only nodes, there's a lot of careful planning that has to happen around the globe to ensure success and continued communication between the two incompatible protocols.

In many ways, Jumbo Frames – despite performance advantages – suffer from the same technological incompatibility. Remember that Jumbo Frames – 9000 bytes – are incompatible with regular old-sized Ethernet frames (1500 bytes). It makes sense for much the same reasons – you simply can't stuff 9000 bytes into a frame designed to hold 1500. And one of the basic rules of Ethernet is that the smallest MTU (maximum transmission unit) used by any component in a network path determines the maximum MTU for all traffic that flows along that path.

And yet the benefits of Jumbo Frames have been demonstrated many times. They reduce fragmentation overhead (the process of splitting data into chunks small enough to fit into a 1500-byte frame), which translates into lower CPU overhead on hosts. They also allow for more aggressive TCP dynamics, which results in greater throughput and better responses to some kinds of loss. But even though Jumbo Frames can deliver an increase in throughput along with a simultaneous decrease in CPU utilization, they are rarely, if ever, used in a data center network. That, however, is changing.

You might recall some predictions with respect to 10GB adoption in the data center:

"We expect the Ethernet Switch market to experience two significant years of market growth in 2013 and 2014 from the migration of servers towards 10 Gigabit Ethernet," said Alan Weckel, Senior Director of Dell'Oro Group. "We believe that in 2013, most large enterprises will upgrade to 10 Gigabit Ethernet for server access through a mix of connectivity options ranging from blade servers, SFP+ direct attach and 10G Base-T." -- Data Center to Drive Ethernet Switch Revenue Growth through 2016, According to Dell'Oro Group Forecast

Historically in the switching market, the deployment of 10G in core networks and the use of Jumbo Frames went pretty much hand-in-hand. Until recently, however, 10GB just wasn't making its way into the data center (costs were too high), and the only place Jumbo Frames were really seen was within storage networks, particularly in conjunction with FCIP implementations. For the most part, a lack of support within the data center infrastructure and no real urgency for the efficiency gains that come from Jumbo Frames (and the fact that the Internet is not using Jumbo Frames end-to-end, which pretty much kills the value proposition) meant enterprise organizations looked at Jumbo Frames with a "someday, but not right now" attitude.

But with the increasing adoption of virtualization and the movement of 10G networks into data centers (in part driven by virtualization), Jumbo Frames are becoming more of a reality for a larger population of organizations. Consider the following support and recommendations for jumbo frames within VMware's documentation:

TCP Segmentation Offload and Jumbo Frames: Jumbo frames must be enabled at the host level using the command-line interface to configure the MTU size for each vSwitch.
TCP Segmentation Offload (TSO) is enabled on the VMkernel interface by default, but must be enabled at the virtual machine level. -- ESX 4.0 Config Guide, page 57

Optimizing vMotion Performance: Use of Jumbo Frames is recommended for best vMotion performance. -- Page 188, vSphere 4.0 System Admin Guide

vSphere 4 Performance: Jumbo Frames is one of the suggested means of improving CPU performance with respect to vSphere. -- CPU Performance Enhancement Advice (Table 22-6, page 278)

Add in cloud computing and a desire to more quickly move big data (virtual machines) over the WAN to cloud providers for a variety of business initiatives – a process in which the number of frames sent and low latency are key to success – and Jumbo Frames suddenly start looking a lot more like a requirement than a "Yeah, yeah, we'll get to that eventually. Maybe."

Virtualization and cloud computing are transformative technologies. As some have often – and loudly – reminded us, the network is part of the data center, and indeed an integral part of the data center. While we tend to focus on the management and provisioning and automation of the data center and its cultural impact, we should not overlook the impact that these technologies and the changes they bring are having – and will have – on the network. If cloud and virtualization and consumerization and emerging technologies like HTML5 are going to transform the data center, that's going to necessarily include the network. Ultimately, support for Jumbo Frames will be a requirement – a checkbox item – for every component in the data center.

Sometimes It Is About the Hardware
Live Migration versus Pre-Positioning in the Cloud
F5 and VMware–One Step Closer to the Cloud as a Seamless Data Center Extension
Cloud is an Exercise in Infrastructure Integration
Performance in the Cloud: Business Jitter is Bad Like Cars on a Highway
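Footnote: as a concrete illustration of that "host level" step the ESX 4.0 guide refers to, the MTU change is a one-liner per vSwitch. The vSwitch and interface names here are hypothetical, and the command set should be confirmed for your release:

esxcfg-vswitch -m 9000 vSwitch1   # set the vSwitch MTU to 9000
esxcfg-vswitch -l                 # list vSwitches and verify the new MTU

# And the per-interface equivalent on a Linux host or guest:
ip link set dev eth0 mtu 9000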
Big-IP and ADFS Part 1 – "Load balancing the ADFS Farm"

Just like the early settlers who migrated en masse across the country by wagon train along the Oregon Trail, enterprises are migrating up into the cloud. Well okay, maybe not exactly like the early settlers. But, although there may not be a mass migration to the cloud, it is true that more and more enterprises are moving to cloud-based services like Office 365. So how do you provide seamless, or at least relatively seamless, access to resources outside of the enterprise? Well, one answer is federation, and if you are a Microsoft shop then the current solution is ADFS, (Active Directory Federation Services).

The ADFS server role is a security token service that extends the single sign-on, (SSO) experience for directory-authenticated clients to resources outside of the organization's boundaries. As cloud-based application access and federation in general become more prevalent, the role of ADFS has become equally important. Below is a typical deployment scenario of the ADFS Server farm and the ADFS Proxy server farm, (recommended for external access to the internally hosted ADFS farm).

Warning…. If the ADFS server farm is unavailable then access to federated resources will be limited, if not completely unavailable. To ensure high availability, performance, and scalability, the F5 Big-IP with LTM, (Local Traffic Manager) can be deployed to load balance the ADFS and ADFS Proxy server farms. Yes! When it comes to load balancing and application delivery, F5's Big-IP is an excellent choice. Just had to get that out there. So let's get technical!

Part one of this blog series addresses deploying and configuring the Big-IP's LTM module for load balancing the ADFS Server farm and Proxy server farm. In part two I'm going to show how we can greatly simplify and improve this deployment by utilizing Big-IP's APM, (Access Policy Manager), so stay tuned.

Load Balancing the Internal ADFS Server Farm

Assumptions and Product Deployment Documentation - This deployment scenario assumes an ADFS server farm has been installed and configured per the deployment guide, including appropriate trust relationships with relevant claims providers and relying parties. In addition, the reader is assumed to have general administrative knowledge of the BIG-IP LTM module. If you want more information or guidance please check out F5's support site, ASKF5.

The following diagram shows a typical, (albeit simplified) process flow of the Big-IP load balanced ADFS farm.

1. Client attempts to access the ADFS-enabled external resource;
2. Client is redirected to the resource's applicable federation service;
3. Client is redirected to its organization's internal federation service, (assuming the resource's federation service is configured as a trusted partner);
4. The ADFS server authenticates the client to Active Directory;
5. The ADFS server provides the client with an authorization cookie containing the signed security token and set of claims for the resource partner;
6. The client connects to the resource partner federation service where the token and claims are verified. If appropriate, the resource partner provides the client with a new security token; and
7. The client presents the new authorization cookie with included security token to the resource for access.

VIRTUAL SERVER AND MEMBER POOL – A virtual server, (aka VIP) is configured to listen on port 443, (https).
In the event that the Big-IP will be used for SSL bridging, (decryption and re-encryption), the public-facing SSL certificate and associated private key must be installed on the BIG-IP and an associated client SSL profile created. However, as will be discussed later, SSL bridging is not the preferred method for this type of deployment. Rather, SSL tunneling, (pass-thru) will be utilized. ADFS requires Transport Layer Security and Secure Sockets Layer (TLS/SSL). Therefore pool members are configured to listen on port 443, (https).

LOAD BALANCING METHOD – The 'Least Connections (member)' method is utilized.

POOL MONITOR – To ensure the AD FS service is responding as well as the web site itself, a customized monitor can be used. The monitor ensures the AD FS federation service is responding. Additionally, the monitor utilizes increased interval and timeout settings. The custom https monitor requires domain credentials to validate the service status. A standard https monitor can be utilized as an alternative.

PERSISTENCE – In this AD FS scenario, clients establish a single TCP connection with the AD FS server to request and receive a security token. Therefore, specifying a persistence profile is not necessary.

SSL TUNNELING, (preferred method) – When SSL tunneling is utilized, encrypted traffic flows from the client directly to the endpoint farm member. Additionally, SSL profiles are not used, nor are SSL certificates required to be installed on the Big-IP. In this instance, Big-IP profiles requiring packet analysis and/or modification, (ex. compression, web acceleration) will not be relevant. To further boost performance, a Fast L4 virtual server will be used.

Load Balancing the ADFS Proxy Server Farm

Assumptions and Product Deployment Documentation - This deployment scenario assumes an ADFS Proxy server farm has been installed and configured per the deployment guide, including appropriate trust relationships with relevant claims providers and relying parties. In addition, the reader is assumed to have general administrative knowledge of the BIG-IP LTM module. If you want more information or guidance please check out F5's support site, ASKF5.

In the previous section we configured load balancing for an internal AD FS Server farm. That scenario works well for providing federated SSO access to internal users. However, it does not address the needs of the external end-user who is trying to access federated resources. This is where the AD FS proxy server comes into play. The AD FS proxy server provides external end-user SSO access to both internal federation-enabled resources as well as partner resources like Microsoft Office 365.

1. Client attempts to access the AD FS-enabled internal or external resource;
2. Client is redirected to the resource's applicable federation service;
3. Client is redirected to its organization's internal federation service, (assuming the resource's federation service is configured as a trusted partner);
4. The AD FS proxy server presents the client with a customizable sign-on page;
5. The AD FS proxy presents the end-user credentials to the AD FS server for authentication;
6. The AD FS server authenticates the client to Active Directory;
7. The AD FS server provides the client, (via the AD FS proxy server) with an authorization cookie containing the signed security token and set of claims for the resource partner;
8. The client connects to the resource partner federation service where the token and claims are verified.
If appropriate, the resource partner provides the client with a new security token; and
9. The client presents the new authorization cookie with included security token to the resource for access.

VIRTUAL SERVER AND MEMBER POOL – A virtual server is configured to listen on port 443, (https). In the event that the Big-IP will be used for SSL bridging, (decryption and re-encryption), the public-facing SSL certificate and associated private key must be installed on the BIG-IP and an associated client SSL profile created. ADFS requires Transport Layer Security and Secure Sockets Layer (TLS/SSL). Therefore pool members are configured to listen on port 443, (https).

LOAD BALANCING METHOD – The 'Least Connections (member)' method is utilized.

POOL MONITOR – To ensure the web servers are responding, a customized 'HTTPS' monitor is associated with the AD FS proxy pool. The monitor utilizes increased interval and timeout settings.

"To SSL Tunnel or Not to SSL Tunnel"

When SSL tunneling is utilized, encrypted traffic flows from the client directly to the endpoint farm member. Additionally, SSL profiles are not used, nor are SSL certificates required to be installed on the Big-IP. However, some advanced optimizations including HTTP compression and web acceleration are not possible when tunneling. Depending upon variables such as client connectivity and customization of ADFS sign-on pages, an ADFS proxy deployment may benefit from these HTTP optimization features. The following two options, (SSL Tunneling and SSL Bridging) are provided.

SSL TUNNELING - In this instance, Big-IP profiles requiring packet analysis and/or modification, (ex. compression, web acceleration) will not be relevant. To further boost performance, a Fast L4 virtual server will be used. Below is an example of the Fast L4 Big-IP virtual server configuration in SSL tunneling mode.

SSL BRIDGING – When SSL bridging is utilized, traffic is decrypted and then re-encrypted at the Big-IP device. This allows for additional features to be applied to the traffic on both the client-facing and pool member-facing sides of the connection. Below is an example of the standard Big-IP virtual server configuration in SSL bridging mode.

Standard Virtual Server Profiles - The following list of profiles is associated with the AD FS proxy virtual server.

Well, that's it for Part 1. Along with the F5 business development team for the Microsoft global partnership, I want to give a big thanks to the guys at Ensynch, an Insight Company - Kevin James, David Lundell, and Lutz Mueller Hipper - for reviewing and providing feedback. Stay tuned for Big-IP and ADFS Part 2 – "APM – An Alternative to the ADFS Proxy".

Additional Links:
Big-IP and ADFS Part 2 – "APM–An Alternative to the ADFS Proxy"
Big-IP and ADFS Part 3 - "ADFS, APM, and the Office 365 Thick Clients"
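Footnote: for those who prefer the command line to screenshots, a minimal tmsh sketch of the SSL-tunneling variant described above might look like this. The names and addresses are entirely hypothetical, and a production deployment would substitute the customized monitor discussed earlier for the stock https one:

# Pool of ADFS farm members on 443: least connections, stock https monitor
create ltm pool adfs_pool members add { 10.10.10.11:443 10.10.10.12:443 } monitor https load-balancing-mode least-connections-member
# Fast L4 (SSL pass-thru) virtual server in front of the pool
create ltm virtual adfs_vs destination 192.0.2.50:443 ip-protocol tcp profiles add { fastL4 } pool adfs_pool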
Verify, but Never Trust?

Much is being written lately about so-called "Zero Trust Model" security, which prompts me to ask, "Since when did we security folk trust anyone?" On the NIST site, you'll find a thorough report NIST commissioned from Forrester. A main theme of this report is that the old security axiom "trust, but verify" is now obsolete. Hardened perimeters, once successfully traversed, leave infrastructures that trust the user and traffic implicitly, to their unending peril.

What does all this mean for those of us tasked with security? Well, it's not a new concept, just a new label. We have known for years that the notion of a perimeter in a data center is evaporating, largely due to the increasingly browser-driven nature of all apps, and threats moving up the stack to the application. The network "perimeter" is largely intact, but with seemingly everything of importance transported via HTTP (and increasingly TLS-encrypted), our infrastructures may as well be open at the network level.

Let's consider the fundamental tenets set forth in the report linked above:

1. Zero Trust is applicable for every organization/industry.
2. Zero Trust is technology and vendor agnostic.
3. Zero Trust is scalable.
4. Zero Trust protects Civil Liberties by protecting personal/confidential data.

First, if we're in security, we should be considering how Zero Trust applies to, and can help improve, our organization's security posture. We should be evangelizing this new way of thinking internally, in an effort to educate all aspects of the organization - networking, platform, application development, and any other team that may have a vested stake. Since Zero Trust is vendor- and technology-agnostic, it's incumbent upon everyone to evaluate current technologies, solutions and architectures to determine whether current implementations adhere to a Zero Trust model. No one piece of technology or one vendor will bring you to Zero Trust nirvana.

Next, we must consider what is meant by "scalable" in this context. F5 has long been in the business of highly scalable solutions, whether for offloading encryption, web application security, access management, or good old-fashioned load balancing. However, that's only part of what is meant by scalable here. Does our implementation of a Zero Trust Model scale across the organization? Does it apply to both internal and external users and applications? Is access to data cumbersome and overwhelmed by security controls? Does it consider all paths to sensitive data?

On that last question, regarding paths to data, we hit upon the most important tenet above: the protection of data. In the end, "data wants to be free" and it is up to the security measures in place to ensure that it still travels freely, but only to those individuals who are properly authorized. This implies that web-based access paths (Internet and Intranet apps), along with other non-HTTP paths such as drive mounts or direct database access, must all be considered and properly secured. Protecting data then requires good access management, good input validation, and at-rest data encryption. In order to be scalable, these security measures must be more or less frictionless from a UX perspective. These are high bars, indeed.

The BIG-IP platform is uniquely instrumented to deliver business applications and facilitate a Zero Trust model.
Whether it is providing good input validation to prevent data exfiltration via CSRF or SQL injection with Application Security Manager (ASM), or integrating diverse access management mechanisms via Access Policy Manager (APM) without need of any special clients or portals, BIG-IP has a part to play in your Zero Trust implementation.

Zero Trust is nothing new; we have been working for years to improve our application-layer defenses through better coding, better frameworks, and new web technologies. Zero Trust does, however, provide a codified framework to measure our success in developing highly secure and scalable infrastructures.

Has your organization begun considering Zero Trust Model security? What challenges are you seeing, and how are F5 technologies factoring in (or not) along the way to overcoming those challenges? I look forward to your comments below.