RADIUS Load Balancing with iRules
What is RADIUS?

"Remote Authentication Dial In User Service," or RADIUS, is a very mature and widely implemented protocol for exchanging "Triple A" (Authentication, Authorization and Accounting) information. RADIUS is a relatively simple, transactional protocol. Clients, such as remote access servers, FirePass, BIG-IP, etc., originate RADIUS requests (for example, to authenticate a user based on a user/password combination) and then wait for a response from the RADIUS server. Information is exchanged between a RADIUS client and server in the form of attributes. User-Name, User-Password, IP address, port, and session state are all examples of attributes. Attributes can be in the format of text, string, IP address, integer or timestamp. Some attributes are variable in length, some are fixed.

Why is protocol-specific support valuable?

In typical UDP load balancing (not protocol-specific), there is one common challenge: if a client always sends requests with the same source port, packets will never be balanced across multiple servers. This behavior is the default for a UDP profile. To allow load balancing to work in this situation, the common recommendation is to use "Datagram LB" or an immediate session timeout. By using Datagram LB, every packet will be balanced. However, if a new request comes in before the reply for the previous request comes back from the server, BIG-IP LTM may change the source port of that new request before forwarding it to the server. This may result in an application not behaving properly. In this latter case, "immediate timeout" must then be used. An additional virtual server may be needed for outbound traffic in order to route traffic back to the client.

In short, to enable load balancing for RADIUS transaction-based traffic coming from the same source IP/source port, Datagram LB or immediate timeout should be employed. This configuration works in most cases. However, if the transaction requires more than 2 packets (1 request, 1 response), then further BIG-IP LTM work is needed. An example where this is important occurs in RADIUS challenge/response handshakes, which require 4 packets:

* Client ---- access-request ---> Server
* Client <-- access-challenge --- Server
* Client --- access-request ----> Server
* Client <--- access-accept ----- Server

For this traffic to succeed, all packets associated with the same transaction must be returned to the same server. In this case, custom layer 7 persistence is needed, and iRules can provide that persistence. With iRules that understand the RADIUS protocol, BIG-IP LTM can direct traffic based on any attribute sent by the client, or persist sessions based on any attribute sent by the client or server. Session management can then be moved to the BIG-IP, reducing server-side complexity. BIG-IP can provide almost unlimited intelligence in an iRule that can even re-calculate MD5 hashes, modify usernames, detect realms, etc. BIG-IP LTM can also provide security at the application level of the RADIUS protocol, rejecting malformed traffic, denial-of-service attacks, or similar threats using customized iRules.

Solution

A Datagram LB UDP profile or immediate timeout may be used if requests from the client always use the same source IP/port. If immediate timeout is used, there should be an additional VIP for outbound traffic originating from the server to the client, along with an appropriate SNAT (same IP as the VIP). The Identifier field or selected attributes can be used for Universal Inspection Engine (UIE) persistence.
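To make the profile side of this concrete, here is a minimal tmsh sketch (run from within tmsh) that creates the two UDP profile variants discussed above. The profile names are hypothetical and option names can vary slightly by TMOS version, so treat this as a starting point rather than a definitive configuration:

# Option 1: balance every datagram independently
create ltm profile udp udp_radius_datagram datagram-load-balancing enabled
# Option 2: keep flow-based balancing but expire each flow immediately
create ltm profile udp udp_radius_immediate idle-timeout immediate

Either profile can then be attached to the RADIUS virtual server in place of the default UDP profile.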
If the immediate timeout/two-sided-VIP technique is used, it should be used in conjunction with the session command and its "any" option.

iRules

1) Here is a sample iRule which does nothing except decode and log some attribute information. This is a good example of the depth of fluency you can achieve via an iRule dealing with RADIUS.

when RULE_INIT {
   array set ::attr_code2name {
      1 User-Name              2 User-Password           3 CHAP-Password         4 NAS-IP-Address
      5 NAS-Port               6 Service-Type            7 Framed-Protocol       8 Framed-IP-Address
      9 Framed-IP-Netmask     10 Framed-Routing         11 Filter-Id            12 Framed-MTU
     13 Framed-Compression    14 Login-IP-Host          15 Login-Service        16 Login-TCP-Port
     17 (unassigned)          18 Reply-Message          19 Callback-Number      20 Callback-Id
     21 (unassigned)          22 Framed-Route           23 Framed-IPX-Network   24 State
     25 Class                 26 Vendor-Specific        27 Session-Timeout      28 Idle-Timeout
     29 Termination-Action    30 Called-Station-Id      31 Calling-Station-Id   32 NAS-Identifier
     33 Proxy-State           34 Login-LAT-Service      35 Login-LAT-Node       36 Login-LAT-Group
     37 Framed-AppleTalk-Link 38 Framed-AppleTalk-Network 39 Framed-AppleTalk-Zone
     60 CHAP-Challenge        61 NAS-Port-Type          62 Port-Limit           63 Login-LAT-Port
   }
}

when CLIENT_ACCEPTED {
   binary scan [UDP::payload] cH2SH32cc code ident len auth attr_code1 attr_len1
   log local0. "code = $code"
   log local0. "ident = $ident"
   log local0. "len = $len"
   log local0. "auth = $auth"
   set index 22
   while { $index < $len } {
      set hsize [expr ( $attr_len1 - 2 ) * 2]
      switch $attr_code1 {
         11 - 1 {
            binary scan [UDP::payload] @${index}a[expr $attr_len1 - 2]cc attr_value attr_code2 attr_len2
            log local0. " $::attr_code2name($attr_code1) = $attr_value"
         }
         9 - 8 - 4 {
            binary scan [UDP::payload] @${index}a4cc rawip attr_code2 attr_len2
            log local0. " $::attr_code2name($attr_code1) = [IP::addr $rawip mask 255.255.255.255]"
         }
         13 - 12 - 10 - 7 - 6 - 5 {
            binary scan [UDP::payload] @${index}Icc attr_value attr_code2 attr_len2
            log local0. " $::attr_code2name($attr_code1) = $attr_value"
         }
         default {
            binary scan [UDP::payload] @${index}H${hsize}cc attr_value attr_code2 attr_len2
            log local0. " $::attr_code2name($attr_code1) = $attr_value"
         }
      }
      set index [ expr $index + $attr_len1 ]
      set attr_len1 $attr_len2
      set attr_code1 $attr_code2
   }
}

when SERVER_DATA {
   binary scan [UDP::payload] cH2SH32cc code ident len auth attr_code1 attr_len1
   log local0. "code = $code"
   log local0. "ident = $ident"
   log local0. "len = $len"
   log local0. "auth = $auth"
   set index 22
   while { $index < $len } {
      set hsize [expr ( $attr_len1 - 2 ) * 2]
      switch $attr_code1 {
         11 - 1 {
            binary scan [UDP::payload] @${index}a[expr $attr_len1 - 2]cc attr_value attr_code2 attr_len2
            log local0. " $::attr_code2name($attr_code1) = $attr_value"
         }
         9 - 8 - 4 {
            binary scan [UDP::payload] @${index}a4cc rawip attr_code2 attr_len2
            log local0. " $::attr_code2name($attr_code1) = [IP::addr $rawip mask 255.255.255.255]"
         }
         13 - 12 - 10 - 7 - 6 - 5 {
            binary scan [UDP::payload] @${index}Icc attr_value attr_code2 attr_len2
            log local0. " $::attr_code2name($attr_code1) = $attr_value"
         }
         default {
            binary scan [UDP::payload] @${index}H${hsize}cc attr_value attr_code2 attr_len2
            log local0. " $::attr_code2name($attr_code1) = $attr_value"
         }
      }
      set index [ expr $index + $attr_len1 ]
      set attr_len1 $attr_len2
      set attr_code1 $attr_code2
   }
}

This iRule could be applied to many areas of interest where a particular value needs to be extracted. For example, the iRule could detect the value of specific attributes or the realm and direct traffic based on that information.
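If you want to try the logging iRule above, it has to be attached to a UDP virtual server along with a pool and one of the UDP profiles discussed earlier. A minimal tmsh sketch, assuming hypothetical names (the iRule saved as radius_attr_log, a pool called pool_radius, and the datagram-LB profile from the previous sketch):

create ltm pool pool_radius members add { 10.10.10.11:1812 10.10.10.12:1812 }
create ltm virtual vs_radius_auth destination 10.10.10.100:1812 ip-protocol udp profiles add { udp_radius_datagram } pool pool_radius rules { radius_attr_log }

The decoded attribute values then show up in /var/log/ltm via the log local0. statements in the iRule.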
2) This second iRule allows UDP Datagram LB to work with two-factor authentication. Persistence in this iRule is based on the "State" attribute (value = 24). This is another great example of the kinds of things you can do with an iRule, and how deeply you can truly dig into a protocol.

when CLIENT_ACCEPTED {
   binary scan [UDP::payload] ccSH32cc code ident len auth attr_code1 attr_len1
   set index 22
   while { $index < $len } {
      set hsize [expr ( $attr_len1 - 2 ) * 2]
      binary scan [UDP::payload] @${index}H${hsize}cc attr_value attr_code2 attr_len2
      # If it is the State (24) attribute...
      if { $attr_code1 == 24 } {
         persist uie $attr_value 30
         return
      }
      set index [ expr $index + $attr_len1 ]
      set attr_len1 $attr_len2
      set attr_code1 $attr_code2
   }
}

when SERVER_DATA {
   binary scan [UDP::payload] ccSH32cc code ident len auth attr_code1 attr_len1
   # If it is an Access-Challenge (11)...
   if { $code == 11 } {
      set index 22
      while { $index < $len } {
         set hsize [expr ( $attr_len1 - 2 ) * 2]
         binary scan [UDP::payload] @${index}H${hsize}cc attr_value attr_code2 attr_len2
         if { $attr_code1 == 24 } {
            persist add uie $attr_value 30
            return
         }
         set index [ expr $index + $attr_len1 ]
         set attr_len1 $attr_len2
         set attr_code1 $attr_code2
      }
   }
}
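Because this iRule calls persist uie, it is typically paired with a Universal persistence profile on the virtual server so the persistence records it creates are actually honored. A hedged tmsh sketch with hypothetical names (the iRule above saved as radius_state_persist_rule, attached to the virtual server from the earlier sketch):

create ltm persistence universal radius_state_persist rule radius_state_persist_rule timeout 30 match-across-virtuals enabled
modify ltm virtual vs_radius_auth persist replace-all-with { radius_state_persist }

If you use the immediate-timeout/two-virtual-server approach, enabling match-across-virtuals (as above) lets the persistence entry be shared by both virtual servers, which lines up with the earlier note about using the session command with the "any" option.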
Conclusion

With iRules, BIG-IP can understand RADIUS packets and make intelligent decisions based on RADIUS protocol information. Additionally, it is also possible to manipulate RADIUS packets to meet nearly any application need.

Contributed by: Nat Thirasuttakorn

The (hopefully) definitive guide to load balancing Lync Edge Servers with a Hardware Load Balancer

Having worked on a few large Lync deployments recently, I have realized that there is still a lot of confusion around properly architecting the network for load balancing Lync Edge Servers. Guidance on this subject has changed from OCS 2007 to OCS 2007 R2 and now to Lync Server 2010, and it's important that care is taken while planning the design. It's also important to know that although a certain architecture may seem to work, it could be very far from best practice. I'll explain what I mean by that below.

The main purpose of Edge Services is to allow remote users (whether they are corporate, anonymous, federated, etc.) to communicate with other external/internal users and vice versa. If you're looking to extend your Lync deployment to support communication with federated partners, public IM services, remote users and such, then you'll want to make sure you deploy your Edge Servers properly. This post will discuss some requirements and best practices for deploying Edge Servers, and then we'll go into some suggested architectures. For this discussion, let's assume that there are 3 device types within your DMZ: your firewall, your BIG-IP LTM, and your Lync Edge Server farm.

Requirement 1: Your Edge Servers need at least 2 network interfaces; one or more dedicated to the external network, and one dedicated to the internal. The external and internal interfaces need to be on separate IP networks. The Edge Server will host 3 separate external services: Access, Web Conferencing, and Audio/Visual (A/V). If you plan on exposing all 3 services for remote users, you have a choice of using one IP for all 3 services on each server and differentiating them by TCP/UDP port value, or going with a separate IP for each service and using standard ports.

Best Practice: This is more preference than best practice, but I like to use 3 separate IPs for these services. With alternative ports/port mapping, you can consolidate to a single IP, but unless you have a very specific reason for doing so, it's best to stick with 3 separate IPs. You do burn more IPs by doing this, but you'll have to use non-standard ports for certain services if you use a single IP, and this could lead to issues with certain network devices that like certain traffic types on certain ports. Plus, troubleshooting, traffic statistics, and logging are all cleaner if you are using 3 separate IPs.

Requirement 2: Traffic that is load balanced to the Lync Edge servers needs to return through the load balancer. In other words, if the hardware load balancer sends traffic to an Edge Server, the return traffic from that Edge Server needs to flow back through the load balancer. There are 2 common ways to ensure that return traffic flows through the load balancer. You can…

1. Use routing, and have the Edge Servers point to the load balancer as their default gateway.
2. Enable SNAT on the load balancer, which rewrites the source IP of the connection to a local network address as the traffic passes through the load balancer. In this case, the Edge Servers will believe that a local client generated the connection and send the responses back to that local address.

So there are your two options, which I will refer to as Routing and SNATting. With Routing, your Edge Server will rely on its routing table to route the return traffic out through the load balancer. No obscuring of the source IP address will happen on the load balancer, but you will have to make sure your default gateway and routing tables are correct.
With SNATting, you can ensure return traffic goes back through the load balancer and not have to worry about the routing table. The drawback to SNATting is that the load balancer will obscure the source IP of the packet as it passes through. I will explain below why the SNAT idea is less than ideal, primarily for A/V traffic.

Best Practice: You can SNAT traffic to the Web Conferencing and Access services on the Edge Server, but do not SNAT traffic to the A/V Edge Services. By obscuring the client's IP address when using SNAT, you limit the ability of the A/V services to connect clients directly to each other, and this is important when clients try to set up peer-to-peer communication, such as a phone call. When using SNAT, the A/V services will not see the client's true IP, so the likelihood of the Edge Server being able to orchestrate the 2 clients to communicate directly with each other is reduced to nil. You'll force the A/V services to utilize their fallback method, in which the P2P traffic will actually have to use the A/V server as a proxy between the 2 clients. Now this 'proxy' fallback mode will still happen from time to time even when you're not SNATting at the BIG-IP (for example, multiparty calls will always use 'proxy'), but when you can, it's best to minimize the times that users have to leverage this fallback method. So even though SNATting connections to the A/V Edge Service will seem to work, it is far from desirable from a network perspective!

FYI - Every load balanced service in a Lync environment (including Lync FEs, Directors, etc.) can be SNATed except for the A/V Edge Service.

Requirement 3: Certain connections will need to be load balanced to the Edge Services, while certain connections will need to be made directly to those Edge Services.

Best Practice: Make sure clients can connect to the Virtual IP(s) that are load balancing the Edge Services, as well as make sure that clients can connect directly to the Edge Servers themselves. Typically users will hit the load balancer on their first incoming connection and get load balanced, but if a user gets invited to a media session that has started on an Edge Server, the invite they receive will point them directly to that server. NAT awareness was built into Lync 2010 to help in environments in which Edge Servers are deployed behind NATs. By enabling the NAT awareness, Edge Servers will refer clients to their respective NAT address in order to route the users in correctly.

Do I need to use routable IPs on the external interface of my Edge Servers? Microsoft says you do, and I would recommend doing so if you can. I have worked on deployments where non-routable IPs are being used (leveraging NATs to allow direct access) and not run into any issues. Just be sure that the Edge Servers are aware of their NAT address.

Best Practice: Suggested Deployment — "DNAT in, SNAT out" on the Load Balancer. "DNAT in, SNAT out" was derived from discussions with a certain MSFT engineer who helped me build this guidance. I'd love to give him credit (he knows Lync networking better than anyone I have ever talked to!!), but if I named this person, his/her phone would never stop ringing for architecture guidance!! Back to the subject: if you keep to "DNAT in, SNAT out" for external-side Lync Edge traffic, your deployment will work! It sums it up very well!

So you're ready to architect your Edge Server deployment. Let's take all the information from above and build a deployment. Keep these things in mind…
External Side of the Edge Servers
- Plan for VIPs on your BIG-IP to load balance the 3 external services that your Edge Server provides (Access, Web Conferencing, A/V)
- Plan for direct (non-load balanced) access to your Edge Servers by external clients
- Plan a method to allow Edge Servers to make outbound connections (forwarding VIP or SNAT on BIG-IP)
- Point the Edge Server's default gateway to the Self IP of the BIG-IP
- Point the BIG-IP's default gateway to the router
- Do not SNAT traffic to the A/V Services on the Edge Servers (see the tmsh sketch at the end of this post)

If you use non-routable IPs on the external interfaces of the Edge Servers, create a NAT on the BIG-IP for each Edge Server. Make sure the Edge Servers are aware of these NAT addresses so they can hand them out to clients who need to connect directly to an Edge Server.

Internal Side of the Edge Servers
- Plan for VIPs on your BIG-IP to load balance ports 443, 3478, 5061, and 5062 on the internal interfaces of your Edge Servers
- Plan for direct (non-load balanced) access to your Edge Servers
- Make sure your Edge Servers have routes to the internal network(s)
- You can SNAT traffic to the internal interface of the Edge Servers

I'll leave you with an example of a fully supported configuration (i.e. using routable IP addresses all around). Keep in mind, this is not the only way to architect this, but if you have the available public IP address space, this will work.

Wow… so much for a short post. I welcome any and all feedback, and I promise to update this post with new information as it comes in. I'll also augment this post with more details and deployments as I find time to write them up, so check back for updates. This may even end up as a guide some day!

Version 1.0 date 7/14/2011
Version 1.1 date 2/15/2011 - Fixed a few typos. Fixed some heinous formatting
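To make the "do not SNAT the A/V Edge service" guidance concrete, here is a hedged tmsh sketch (hypothetical names, example addresses, and simplified ports — adjust to your own design and TMOS version). The A/V virtual server explicitly disables source address translation, while a Web Conferencing virtual can safely use automap:

create ltm virtual vs_lync_edge_av_443 destination 192.0.2.50:443 ip-protocol tcp pool pool_edge_av source-address-translation { type none }
create ltm virtual vs_lync_edge_webconf_443 destination 192.0.2.51:443 ip-protocol tcp pool pool_edge_webconf source-address-translation { type automap }

Because the A/V virtual does not SNAT, the Edge Servers must route return traffic back through the BIG-IP, which is exactly why the default gateway guidance above matters.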
The Challenges of SQL Load Balancing

#infosec #iam Load balancing databases is fraught with many operational and business challenges.

While cloud computing has brought to the forefront of our attention the ability to scale through duplication, i.e. horizontal scaling or "scale out" strategies, this strategy tends to run into challenges the deeper into the application architecture you go. Working well at the web and application tiers, a duplicative strategy tends to fall on its face when applied to the database tier. Concerns over consistency abound, with many simply choosing to throw out the concept of consistency and adopting instead an "eventually consistent" stance in which it is assumed that data in a distributed database system will eventually become consistent and cause minimal disruption to application and business processes. Some argue that eventual consistency is not "good enough" and cite additional concerns with respect to the failure of such strategies to adequately address failures. Thus there are a number of vendors, open source groups, and pundits who spend time attempting to address both components. The result is database load balancing solutions.

For the most part such solutions are effective. They leverage master-slave deployments – typically used to address failure and which can automatically replicate data between instances (with varying levels of success when distributed across the Internet) – and attempt to intelligently distribute SQL-bound queries across two or more database systems. The most successful of these architectures is the read-write separation strategy, in which all SQL transactions deemed "read-only" are routed to one database while all "write"-focused transactions are distributed to another. Such foundational separation allows for higher-layer architectures to be implemented, such as geographic-based read distribution, in which read-only transactions are further distributed by geographically dispersed database instances, all of which act ultimately as "slaves" to the single master database which processes all write-focused transactions. This results in an eventually consistent architecture, but one which manages to mitigate the disruptive aspects of eventually consistent architectures by ensuring the most important transactions – write operations – are, in fact, consistent. Even so, there are issues, particularly with respect to security.

MEDIATION inside the APPLICATION TIERS

Generally speaking, mediating solutions are a good thing – when they're external to the application infrastructure itself, i.e. the traditional three tiers of an application. The problem with mediation inside the application tiers, particularly at the data layer, is the same for infrastructure as it is for software solutions: credential management. See, databases maintain their own set of users, roles, and permissions. Even as applications have been able to move toward a more shared set of identity stores, databases have not. This is in part due to the nature of data security and the need for granular permission structures down to the cell, in some cases, including transactional security that allows some to update, delete, or insert while others may be granted a different subset of permissions. But more difficult to overcome is the tight coupling of identity to connection for databases. With web protocols like HTTP, identity is carried along at the protocol level.
This means it can be transient across connections because it is often stuffed into an HTTP header via a cookie or stored server-side in a session – again, not tied to the connection but to identifying information. At the database layer, identity is tightly coupled to the connection. The connection itself carries along the credentials with which it was opened. This gives rise to problems for mediating solutions – not just load balancers, but software solutions such as ESB (enterprise service bus) and EII (enterprise information integration) styled solutions. Any device or software which attempts to aggregate database access for any purpose eventually runs into the same problem: credential management. This is particularly challenging for load balancing when applied to databases.

LOAD BALANCING SQL

To understand the challenges with load balancing SQL you need to remember that there are essentially two models of load balancing: transport and application layer. At the transport layer, i.e. TCP, connections are only temporarily managed by the load balancing device. The initial connection is "caught" by the load balancer and a decision is made, based on transport layer variables, about where it should be directed. Thereafter, for the most part, there is no interaction at the load balancer with the connection, other than to forward it on to the previously selected node. At the application layer the load balancing device terminates the connection and interacts with every exchange. This affords the load balancing device the opportunity to inspect the actual data or application layer protocol metadata in order to determine where the request should be sent.

Load balancing SQL at the transport layer is less problematic than at the application layer, yet it is at the application layer that the most value is derived from database load balancing implementations. That's because it is at the application layer where distribution based on "read" or "write" operations can be made. But to accomplish this requires that the SQL be inline, that is, that the SQL being executed is actually included in the code and then executed via a connection to the database. If your application uses stored procedures, then this method will not work for you. It is important to note that many packaged enterprise applications rely upon stored procedures, and are thus not able to leverage load balancing as a scaling option. Your app, or how your organization has agreed to protect your data, will determine which of these methods is used to access your databases. The use of inline SQL affords the developer greater freedom at the cost of security, increased programming (to prevent the inherent security risks), difficulty in optimizing data and indices to adapt to changes in volume of data, and deployment burdens. However, there is lively debate on the values of both access methods and how to overcome the inherent risks. The OWASP group has identified injection attacks as the easiest exploitation with the most damaging impact.

This also requires that the load balancing service parse MySQL or T-SQL (the Microsoft Transact Structured Query Language). Databases, of course, are designed to parse these string-based commands and are optimized to do so. Load balancing services are generally not designed to parse these languages and, depending on the implementation of their underlying parsing capabilities, may actually incur significant performance penalties to do so.
Regardless of those issues, there are still an increasing number of organizations who view SQL load balancing as a means to achieve a more scalable data tier. Which brings us back to the challenge of managing credentials.

MANAGING CREDENTIALS

Many solutions attempt to address the issue of credential management by simply duplicating credentials locally; that is, they create a local identity store that can be used to authenticate requests against the database. Ostensibly the credentials match those in the database (or the identity store used by the database, such as can be configured for MSSQL) and are kept in sync. This obviously poses an operational challenge similar to that of any distributed system: synchronization and replication. Such processes are not easily (if at all) automated, and rarely is the same level of security and permissions available on the local identity store as is available in the database. What you generally end up with is a very loose "allow/deny" set of permissions on the load balancing device that actually opens the door for exploitation, as well as caching of credentials that can lead to unauthorized access to the data source.

This also leads to potential security risks from attempting to apply some of the same optimization techniques to SQL connections as are offered by application delivery solutions for TCP connections. For example, TCP multiplexing (sharing connections) is a common means of reusing web and application server connections to reduce latency (by eliminating the overhead associated with opening and closing TCP connections). Similar techniques at the database layer have been used by application servers for many years; connection pooling is not uncommon and is essentially duplicated at the application delivery tier through features like SQL multiplexing. Both connection pooling and SQL multiplexing incur security risks, as shared connections require shared credentials. So either every access to the database uses the same credentials (a significant negative when considering the loss of an audit trail) or we return to managing duplicate sets of credentials – one set at the application delivery tier and another at the database – which, as noted earlier, incurs additional management and security risks.

YOU CAN'T WIN FOR LOSING

Ultimately the decision to load balance SQL must be a combination of business and operational requirements. Many organizations successfully leverage load balancing of SQL as a means to achieve very high scale. Generally speaking, the resulting solutions – such as those often touted by eBay – are based on sound architectural principles such as sharding and are designed as a strategic solution, not a tactical response to operational failures, and they rarely involve inspection of inline SQL commands. Rather, they are based on the ability to discern which database should be accessed given the function being invoked or type of data being accessed, and then use a traditional database connection to connect to the appropriate database. This does not preclude the use of application delivery solutions as part of such an architecture, but rather indicates a need to collaborate across the various application delivery and infrastructure tiers to determine a strategy most likely to maintain high availability, scalability, and security across the entire architecture.

Load balancing SQL can be an effective means of addressing database scalability, but it should be approached with an eye toward its potential impact on security and operational management.
Related reading:
* What are the pros and cons to keeping SQL in Stored Procs versus Code
* Mission Impossible: Stateful Cloud Failover
* Infrastructure Scalability Pattern: Sharding Streams
* The Real News is Not that Facebook Serves Up 1 Trillion Pages a Month…
* SQL injection – past, present and future
* True DDoS Stories: SSL Connection Flood
* Why Layer 7 Load Balancing Doesn’t Suck
* Web App Performance: Think 1990s.
Big-IP and ADFS (SCRATCH THAT! Big-IP and SAML) with Office 365 – Part 5

The BIG-IP with APM has now become SAML (claims) aware! "SAML", not "self-aware". No need to start worrying about Skynet and Arnold Schwarzenegger kicking in your door (except for you, Sarah Connor). This is a good thing! If you need to federate your organization with Office 365, this is a very good thing.

With the release of ver. 11.3, BIG-IP with APM (Access Policy Manager) now includes full SAML support on the box. What does that mean? Well, rather than relying upon an external resource such as ADFS to issue security tokens (used to present/consume claims with a federation partner), the BIG-IP becomes the federation endpoint for the organization. Check out here for more information on federation. When it comes to Office 365, not only has the infrastructure required to federate your organization been dramatically reduced, the configuration required has been simplified. Available in our community codeshare forum is an iApp as well as guidance specifically designed for deploying the BIG-IP as a federation IdP (identity provider) for Office 365. Now federating with Office 365 is as simple as answering a few questions and entering a few PowerShell commands to configure the Office 365 side.

To gain a better understanding of how we arrived here (replacing ADFS), as well as to illustrate the benefit, let's take a look at the evolution of the solution.

Saying Goodbye to ADFS

Ensuring a Highly Available Architecture

Throughout this series (links below), we've taken a look at how the F5 BIG-IP can add value to and enhance ADFS (Active Directory Federation Services). To get the ball rolling, we looked at how the BIG-IP was able to provide for a highly available and scalable ADFS infrastructure (refer to Figure 1). This included ensuring the ADFS proxy farm, located in the perimeter network, as well as the internal ADFS farm, was available and the traffic optimized.

BIG-IP enhancements to the ADFS federation process:
• Intelligent traffic management
• Advanced L7 health monitoring (ensures the ADFS service is responding)
• Cookie-based persistence

Enhancing Security and Streamlining ADFS

Building upon the previous solution (load balancing the ADFS and ADFS Proxy layers), we implemented APM (Access Policy Manager) (refer to Figure 2). By implementing APM on the F5 appliance(s), we not only eliminated the need for these additional servers but, by implementing pre-authentication at the perimeter and advanced features such as client-side checks (antivirus validation, firewall verification, etc.), arguably provided for a more secure deployment.

Additional BIG-IP enhancements to the ADFS federation process:
• Enhanced Security — variety of authentication methods, client endpoint inspection, multi-factor authentication
• Improved User Experience — SSO across on-premise and cloud-based applications, single-URL access for hybrid deployments
• Simplified Architecture — removes the ADFS proxy farm layer as well as the need to load balance the proxy farm

Eliminating the ADFS Infrastructure

Available with version 11.3, APM includes full SAML support. This allows the BIG-IP to not only authenticate the client connections with Active Directory, but to act as the IdP or SP in the federation process. No longer will an organization be required to deploy an ADFS infrastructure for federation. Rather, the BIG-IP's role as an application delivery controller is expanded to include cloud-based resources (including Office 365) as well as on-premise applications.
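As noted above, the Office 365 side of the federation is established with a few PowerShell commands. The sketch below is illustrative only — it assumes the MSOnline module, and the domain, URIs, and certificate value are placeholders whose real values come from the iApp/guidance mentioned earlier:

# Placeholder values — substitute the IdP endpoints and signing certificate from your BIG-IP APM configuration
$cert = "MIIC...base64EncodedSigningCert..."
Connect-MsolService
Set-MsolDomainAuthentication -DomainName "example.com" `
    -Authentication Federated `
    -PreferredAuthenticationProtocol SAMLP `
    -IssuerUri "https://sts.example.com/idp/f5" `
    -PassiveLogOnUri "https://sts.example.com/saml/idp/sso" `
    -ActiveLogOnUri "https://sts.example.com/saml/idp/ecp" `
    -LogOffUri "https://sts.example.com/saml/idp/slo" `
    -SigningCertificate $cert

Running Get-MsolDomainFederationSettings afterwards is a quick way to confirm the domain is now federated.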
Additional BIG-IP enhancements to the ADFS federation process:
• Ability to act as an IdP (identity provider) for access to external claims-based resources, including Office 365
• Ability to act as a service provider (SP) to facilitate federated access to on-premise applications
• Streamlined architecture (no need for the ADFS architecture)
• Simplified iApp deployment

Figure 3 shows a typical Office 365 client access process utilizing APM and SAML.

Additional Links:
* Big-IP APM as SAML 2.0 IdP from Microsoft Office 365
* SAML Federation with the BIG-IP
* Big-IP and ADFS Part 1 – “Load balancing the ADFS Farm”
* Big-IP and ADFS Part 2 – “APM–An Alternative to the ADFS Proxy”
* Big-IP and ADFS Part 3 – “ADFS, APM, and the Office 365 Thick Clients”
* Big-IP and ADFS Part 4 – “What about Single Sign-Out?”
* BIG-IP Access Policy Manager (APM) Wiki Home - DevCentral Wiki
One Time Passwords via an SMS Gateway with BIG-IP Access Policy Manager

One time passwords, or OTP, are used (as the name indicates) for a single session or transaction. The plus side is a more secure deployment; the downside is two-fold. First, most solutions involve a token system, which is costly in management, dollars, and complexity. Second, people are lousy at remembering things, so a delivery system for that OTP is necessary. The exercise in this tech tip is to employ BIG-IP APM to generate the OTP and pass it to the user via an SMS gateway, eliminating the need for a token-creating server/security appliance while reducing cost and complexity.

Getting Started

This guide was developed by F5er Per Boe utilizing the newly released BIG-IP version 10.2.1. The "-secure" option for the mcget command is new in this version and is required in one of the steps for this solution. Also, this solution uses the Clickatell SMS Gateway to deliver the OTPs. Their API is documented at http://www.clickatell.com/downloads/http/Clickatell_HTTP.pdf. Other gateway providers with a web-based API could easily be substituted. Also, there are steps at the tail end of this guide to utilize the BIG-IP's built-in mail capabilities to email the OTP during testing in lieu of SMS.

The process of delivering the OTP is shown in Figure 1. First a request is made to the BIG-IP APM. The policy is configured to authenticate the user's phone number in Active Directory and, if successful, generate an OTP and pass it along to the SMS gateway via the HTTP API. The user will then enter the OTP into the form presented by APM before being allowed through to the server resources.

BIG-IP APM Configuration

Before configuring the policy, an access profile needs to be created, as do a couple of authentication servers. First, let's look at the authentication servers.

Authentication Servers

To create servers used by BIG-IP APM, navigate to Access Policy->AAA Servers and then click create. This profile is simple: supply your domain server, domain name, and admin username and password as shown in Figure 2.

The other authentication server is for the SMS Gateway, and since it is an HTTP API we're using, we need the HTTP type server as shown in Figure 3. Note that the hidden form values highlighted in red will come from your Clickatell account information. Also note that the form method is GET, the form action references the Clickatell API interface, and that the match type is set to look for a specific string. The Clickatell SMS Gateway expects the following format:

https://api.clickatell.com/http/sendmsg?api_id=xxxx&user=xxxx&password=xxxx&to=xxxx&text=xxxx

Finally, the successful logon detection value highlighted in red at the bottom of Figure 3 should be modified to the response code returned from the SMS Gateway.

Now that the authentication servers are configured, let's take a look at the access profile and create the policy.

Access Profile & Policy

Before we can create the policy, we need an access profile, shown below in Figure 4 with all default settings. Once that is done, we click on Edit under the Access Policy column highlighted in red in Figure 5. The default policy is bare bones, or as some call it, empty. We'll work our way through the objects, taking screen captures as we go and making notes as necessary. To add an object, just click the "+" sign after the Start flag. The first object we'll add is a Logon Page as shown in Figure 6. No modifications are necessary here, so you can just click save. Next, we'll configure the Active Directory authentication, so we'll add an AD Auth object.
The only setting here in Figure 7 is selecting the server we created earlier. Following the AD Auth object, we need to add an AD Query object on the AD Auth successful branch as shown in Figures 8 and 9. The server is selected in the properties tab, and then we create an expression in the branch rules tab. To create the expression, click change, and then select the Advanced tab. The expression used in this AD Query branch rule:

expr { [mcget {session.ad.last.attr.mobile}] != "" }

Next we add an iRule Event object to the AD Query OK branch that will generate the one time password and provide logging. Figure 10 shows the iRule Event object configuration. The iRule referenced by this event is below. The logging is there for troubleshooting purposes, and should probably be disabled in production.

when ACCESS_POLICY_AGENT_EVENT {
   expr srand([clock clicks])
   set otp [string range [format "%08d" [expr int(rand() * 1e9)]] 1 6 ]
   set mail [ACCESS::session data get "session.ad.last.attr.mail"]
   set mobile [ACCESS::session data get "session.ad.last.attr.mobile"]
   set logstring mail,$mail,otp,$otp,mobile,$mobile
   ACCESS::session data set session.user.otp.pw $otp
   ACCESS::session data set session.user.otp.mobile $mobile
   ACCESS::session data set session.user.otp.username [ACCESS::session data get "session.logon.last.username"]
   log local0.alert "Event [ACCESS::policy agent_id] Log $logstring"
}

when ACCESS_POLICY_COMPLETED {
   log local0.alert "Result: [ACCESS::policy result]"
}

On the fallback path of the iRule Event object, add a Variable Assign object as shown in Figure 10b. Note that the first assignment should be set to secure, as indicated in the image with the [S]. The expressions in Figure 10b are:

session.logon.last.password = expr { [mcget {session.user.otp.pw}]}
session.logon.last.username = expr { [mcget {session.user.otp.mobile}]}

On the fallback path of the AD Query object, add a Message Box object as shown in Figure 11 to alert the user if no mobile number is configured in Active Directory.

On the fallback path of the Event OTP object, we need to add the HTTP Auth object. This is where the SMS Gateway we configured in the authentication server is referenced. It is shown in Figure 12. On the fallback path of the HTTP Auth object, we need to add a Message Box as shown in Figure 13 to communicate the error to the client. On the Successful branch of the HTTP Auth object, we need to add a Variable Assign object to store the username. A simple expression and a unique name for this variable object is all that is changed. This is shown in Figure 14.

On the fallback branch of the Username Variable Assign object, we'll configure the OTP Logon page, which requires a Logon Page object (shown in Figure 15). I haven't mentioned it yet, but the name field of all these objects isn't a required change; adding information specific to the object just helps with readability. On this form, only one entry field is required — the one time password — so the second password field (enabled by default) is set to none and the initial username field is changed to password. The Input field below is changed to reflect the type of logon to better cue the user.

Finally, we'll finish off with an Empty Action object where we'll insert an expression to verify the OTP. The name is configured in properties and the expression in the branch rules, as shown in Figures 16 and 17.
The expression used in the branch rules above is:

expr { [mcget {session.user.otp.pw}] == [mcget -secure {session.logon.last.otp}] }

Note again that the -secure option is only available in version 10.2.1 forward.

Now that we're done adding objects to the policy, one final step is to click on the Deny following the OK branch of the OTP Verify Empty Action object and change it from Deny to Allow. Figure 18 shows how it should look in the visual policy editor window. Now that the policy is completed, we can attach the access profile to the virtual server and test it out, as can be seen in Figures 19 and 20 below.

Email Option

If during testing you'd rather send emails than utilize the SMS Gateway, then configure your BIG-IP for mail support (Solution 3664), keep the Logging object, lose the HTTP Auth object, and configure the system with this script to listen for the messages sent to /var/log/ltm from the configured Logging object:

#!/bin/bash
while true
do
  tail -n0 -f /var/log/ltm | while read line
  do
    var2=`echo $line | grep otp | awk -F'[,]' '{ print $2 }'`
    var3=`echo $line | grep otp | awk -F'[,]' '{ print $3 }'`
    var4=`echo $line | grep otp | awk -F'[,]' '{ print $4 }'`
    if [ "$var3" = "otp" -a -n "$var4" ]; then
      echo Sending pin $var4 to $var2
      echo One Time Password is $var4 | mail -s $var4 $var2
    fi
  done
done

The log messages look like this:

Jan 26 13:37:24 local/bigip1 notice apd[4118]: 01490113:5: b94f603a: session.user.otp.log is mail,user1@home.local,otp,609819,mobile,12345678

The output from the script as configured looks like this:

[root@bigip1:Active] config # ./otp_mail.sh
Sending pin 239272 to user1@home.local

Conclusion

The BIG-IP APM is an incredibly powerful tool to add to the LTM toolbox. Whether using the mail system or an SMS gateway, you can take a bite out of your infrastructure complexity by using this solution to eliminate the need for a token management service. Many thanks again to F5er Per Boe for this excellent solution!
Single Sign-On with Kerberos Constrained Delegation, Part 2 – Debugging

Hello dear readers, in my last post I covered the Kerberos Constrained Delegation configuration and promised to make debugging the topic of this post. Because we have dependencies on other components, debugging certainly does not get any easier. In case of an error, you should first verify that the BIG-IP can successfully reach the Active Directory (AD) infrastructure and that the BIG-IP's time matches that of the Key Distribution Center (KDC). On the command line interface (CLI), the command "ntpq -pn" helps you check whether the time is correct.

Name resolution is extremely important for Kerberos, since the protocol depends on it. The same goes for reverse DNS resolution, because it is used to determine the SPN (Service Principal Name) for each server after the load balancing decision. If the names are not correctly entered in DNS, you can also define them statically on the BIG-IP. That is only a workaround, however, and should really be avoided. Here is an example of setting such an entry:

# tmsh modify sys global-settings remote-host add { server1 { addr 1.1.1.1 hostname sven } }

Of course this is also possible through the GUI. Here are a few examples of how to verify name resolution; the "nslookup" command helps with this:

[root@bigip1-ve:Active] config # nslookup www.example.com
Server: 10.0.0.25
Address: 10.0.0.25#53
Name: www.example.com
Address: 10.0.0.250

[root@bigip1-ve:Active] config # nslookup 10.0.0.250
Server: 10.0.0.25
Address: 10.0.0.25#53
250.0.0.10.in-addr.arpa name = www.example.com.

[root@bigip1-ve:Active] config # nslookup w2008r2.example.com
Server: 10.0.0.25
Address: 10.0.0.25#53
** server can't find w2008r2.example.com: NXDOMAIN

[root@bigip1-ve:Active] config # nslookup 10.0.0.26
Server: 10.0.0.25
Address: 10.0.0.25#53
26.0.0.10.in-addr.arpa name = win2008r2.example.com.

Since Kerberos SSO relies on DNS for KDC discovery (unless an address has been entered in the SSO configuration), the DNS server should also have SRV records that point to the KDC server for the domain's realm. Remember that I entered the KDC address in the configuration in the last post; the reason was speed, since it saves exactly this resolution step.

[root@bigip1-ve:Active] config # nslookup -type=srv _kerberos._tcp.example.com
Server: 10.0.0.25
Address: 10.0.0.25#53
_kerberos._tcp.example.com service = 0 100 88 win2008r2.example.com.

[root@bigip1-ve:Active] config # nslookup -type=srv _kerberos._udp.example.com
Server: 10.0.0.25
Address: 10.0.0.25#53
_kerberos._udp.example.com service = 0 100 88 win2008r2.example.com.

If these basic configurations have been verified and are correct but things still do not work, a look into the APM log file helps. Before that, the log level should be raised. Please do this only while debugging and set it back afterwards. In the menu you can set the level under System/Logs/Configuration/Options. "Informational" is a good start here. You can certainly use "Debug" as well, but that level is extremely chatty and "Informational" is usually more than enough. Let's look at an excerpt of a logon process.
Oct 28 12:06:46 bigip1-ve debug /usr/bin/websso[400]: 01490000:7: <0xf1c75b90>:Modules/HttpHeaderBased/Kerberos.cpp:862 Initialized UCC:user1@EXAMPLE.COM@EXAMPLE.COM, lifetime:36000 kcc:0x8ac52f0
Oct 28 12:06:46 bigip1-ve debug /usr/bin/websso[400]: 01490000:7: <0xf1c75b90>:Modules/HttpHeaderBased/Kerberos.cpp:867 UCCmap.size = 1, UCClist.size = 1
Oct 28 12:06:46 bigip1-ve debug /usr/bin/websso[400]: 01490000:7: <0xf1c75b90>:Modules/HttpHeaderBased/Kerberos.cpp:1058 S4U ======> - NO cached S4U2Proxy ticket for user: user1@EXAMPLE.COM server: HTTP/win2008r2.example.com@EXAMPLE.COM - trying to fetch
Oct 28 12:06:46 bigip1-ve debug /usr/bin/websso[400]: 01490000:7: <0xf1c75b90>:Modules/HttpHeaderBased/Kerberos.cpp:1072 S4U ======> - NO cached S4U2Self ticket for user: user1@EXAMPLE.COM - trying to fetch
Oct 28 12:06:46 bigip1-ve debug /usr/bin/websso[400]: 01490000:7: <0xf1c75b90>:Modules/HttpHeaderBased/Kerberos.cpp:1082 S4U ======> - fetched S4U2Self ticket for user: user1@EXAMPLE.COM
Oct 28 12:06:46 bigip1-ve debug /usr/bin/websso[400]: 01490000:7: <0xf1c75b90>:Modules/HttpHeaderBased/Kerberos.cpp:1104 S4U ======> trying to fetch S4U2Proxy ticket for user: user1@EXAMPLE.COM server: HTTP/win2008r2.example.com@EXAMPLE.COM
Oct 28 12:06:46 bigip1-ve debug /usr/bin/websso[400]: 01490000:7: <0xf1c75b90>:Modules/HttpHeaderBased/Kerberos.cpp:1112 S4U ======> fetched S4U2Proxy ticket for user: user1@EXAMPLE.COM server: HTTP/win2008r2.example.com@EXAMPLE.COM
Oct 28 12:06:46 bigip1-ve debug /usr/bin/websso[400]: 01490000:7: <0xf1c75b90>:Modules/HttpHeaderBased/Kerberos.cpp:1215 S4U ======> OK!
Oct 28 12:06:46 bigip1-ve debug /usr/bin/websso[400]: 01490000:7: <0xf1c75b90>:Modules/HttpHeaderBased/Kerberos.cpp:236 GSSAPI: Server: HTTP/win2008r2.example.com@EXAMPLE.COM, User: user1@EXAMPLE.COM
Oct 28 12:06:46 bigip1-ve debug /usr/bin/websso[400]: 01490000:7: <0xf1c75b90>:Modules/HttpHeaderBased/Kerberos.cpp:278 GSSAPI Init_sec_context returned code 0
Oct 28 12:06:46 bigip1-ve debug /usr/bin/websso[400]: 01490000:7: <0xf1c75b90>:Modules/HttpHeaderBased/Kerberos.cpp:316 GSSAPI token of length 1451 bytes will be sent back

It is of course also helpful to look at the IIS log files. There you can also see the user name (C:\inetpub\logs\LogFiles\W3SVC1):

#Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status time-taken
2011-10-27 21:29:23 10.0.0.25 GET / - 80 EXAMPLE\user2 10.0.0.28 Mozilla/5.0+(compatible;+MSIE+9.0;+Windows+NT+6.1;+Trident/5.0) 200 0 0 15

Another way to analyze a problem is, of course, to look at what data actually goes over the wire. Tcpdump and Wireshark are our friends here:

# tcpdump -i internal -s0 -n -w /tmp/kerberos.pcap host 10.0.0.25

-i: the VLAN to listen on. If you choose "0.0", all interfaces are sniffed and you will see packets multiple times accordingly, but also the changes a packet may undergo while passing through the BIG-IP.
-s0: capture the entire packet
-n: no name resolution
-w /tmp/kerberos.pcap: write the capture to the file /tmp/kerberos.pcap
host 10.0.0.25: capture only packets with source or destination IP 10.0.0.25

In Wireshark, a capture then looks something like this: Wireshark understands the Kerberos protocol and therefore gives good hints about possible problems.

Since APM caches the Kerberos tickets, this can of course lead to unwanted effects during testing.
This means it definitely makes sense to lower the ticket lifetime from the default of 600 minutes while testing. You can also clear the cache with the following command:

# bigstart restart websso

Finally, I would like to explain a few error messages that you may find in the log when problems occur:

Kerberos: can't get TGT for host/apm.realm.com@REALM.COM - Cannot find KDC for requested realm (-1765328230) – Check whether DNS works and whether the DNS server has all SRV/A/PTR records of the realms involved. Also verify in /etc/krb5.conf that dns_lookup_kdc is set to True; you will find that entry in the [libdefaults] section.

Kerberos: can't get TGT for host/apm.realm.com@REALM.COM - Preauthentication failed (-1765328360) – The delegation account password is wrong.

Kerberos: can't get TGT for host/apm.realm.com@REALM.COM - Client not found in Kerberos database (-1765328378) – The delegation account does not exist in AD.

Kerberos: can't get TGT for host/apm.realm.com@REALM.COM - Clients credentials have been revoked (-1765328366) – The delegation account is disabled in AD.

Kerberos: can't get TGT for host/apm.realm.com@realm.com - KDC reply did not match expectations (-1765328237) – The realm name in the SSO configuration must be written in upper case.

Kerberos: can't get TGT for host/apm.realm.com@REALM.COM - A service is not available that is required to process the request (-1765328355) – The KDC cannot be reached. Possible reasons include a firewall, the TGS service on the KDC not running, or the wrong KDC being configured…

Kerberos: can't get S4U2Self ticket for user a@REALM.COM - Client not found in Kerberos database (-1765328378) – The user 'a' does not exist in the domain REALM.COM.

Kerberos: can't get S4U2Self ticket for user qq@REALM.COM - Ticket/authenticator don't match (-1765328348) – The delegation account has the form UPN/SPN and does not match the REALM domain used. Be careful when working with cross-realms!

Kerberos: can't get S4U2Self ticket for user aa@realm.com - Realm not local to KDC (-1765328316) – The KDC field must be left empty in the SSO configuration when working with cross-realm SSO, i.e. when a user needs to reach a server in another realm.

Kerberos: can't get S4U2Proxy ticket for server HTTP/webserver.realm.com@REALM.COM - Requesting ticket can't get forwardable tickets (-1765328163) – The delegation account is not configured for constrained delegation and/or protocol transition.

Kerberos: can't get S4U2Proxy ticket for server HTTP/webserver.realm.com@REALM.COM - KDC can't fulfill requested option (-1765328371) – The delegation account in REALM.COM does not have a service for webserver.realm.com in its configuration.

To debug Kerberos on the Windows side, this article may help:
• http://www.microsoft.com/downloads/details.aspx?FamilyID=7DFEB015-6043-47DB-8238-DC7AF89C93F1&displaylang=en

I hope, of course, that you will not run into any problems configuring SSO with constrained delegation, but if you do, this blog post will hopefully help you track down the issue faster.

Your F5 blogger, Sven Müller
Introducing PoshTweet - The PowerShell Twitter Script Library

It's probably no surprise from those of you that follow my blog and tech tips here on DevCentral that I'm a fan of Windows PowerShell. I've written a set of Cmdlets that allow you to manage and control your BIG-IP application delivery controllers from within PowerShell and a whole set of articles around those Cmdlets. I've been a Twitter user for a few years now and over the holidays, I've noticed that Jeffrey Snover from the PowerShell team has hopped aboard the Twitter bandwagon and that got me to thinking... Since I live so much of my time in the PowerShell command prompt, wouldn't it be great to be able to tweet from there too? Of course it would!

HTTP Requests

So, last night I went ahead and whipped up a first draft of a set of PowerShell functions that allow access to the Twitter services. I implemented the functions based on Twitter's REST based methods so all that was really needed to get things going was to implement the HTTP GET and POST requests needed for the different API methods. Here's what I came up with.

function Execute-HTTPGetCommand() {
  param([string] $url = $null);
  if ( $url ) {
    [System.Net.WebClient]$webClient = New-Object System.Net.WebClient
    $webClient.Credentials = Get-TwitterCredentials
    [System.IO.Stream]$stream = $webClient.OpenRead($url);
    [System.IO.StreamReader]$sr = New-Object System.IO.StreamReader -argumentList $stream;
    [string]$results = $sr.ReadToEnd();
    $results;
  }
}

function Execute-HTTPPostCommand() {
  param([string] $url = $null, [string] $data = $null);
  if ( $url -and $data ) {
    [System.Net.WebRequest]$webRequest = [System.Net.WebRequest]::Create($url);
    $webRequest.Credentials = Get-TwitterCredentials
    $webRequest.PreAuthenticate = $true;
    $webRequest.ContentType = "application/x-www-form-urlencoded";
    $webRequest.Method = "POST";
    $webRequest.Headers.Add("X-Twitter-Client", "PoshTweet");
    $webRequest.Headers.Add("X-Twitter-Version", "1.0");
    $webRequest.Headers.Add("X-Twitter-URL", "http://devcentral.f5.com/s/poshtweet");
    [byte[]]$bytes = [System.Text.Encoding]::UTF8.GetBytes($data);
    $webRequest.ContentLength = $bytes.Length;
    [System.IO.Stream]$reqStream = $webRequest.GetRequestStream();
    $reqStream.Write($bytes, 0, $bytes.Length);
    $reqStream.Flush();
    [System.Net.WebResponse]$resp = $webRequest.GetResponse();
    $rs = $resp.GetResponseStream();
    [System.IO.StreamReader]$sr = New-Object System.IO.StreamReader -argumentList $rs;
    [string]$results = $sr.ReadToEnd();
    $results;
  }
}

Credentials

Once those were completed, it was relatively simple to get the Status methods for public_timeline, friends_timeline, user_timeline, show, update, replies, and destroy going. But, for several of those services, user credentials were required. I opted to store them in a script scoped variable and provided a few functions to get/set the username/password for Twitter.
$script:g_creds = $null;

function Set-TwitterCredentials() {
  param([string]$user = $null, [string]$pass = $null);
  if ( $user -and $pass ) {
    $script:g_creds = New-Object System.Net.NetworkCredential -argumentList ($user, $pass);
  } else {
    $creds = Get-TwitterCredentials;
  }
}

function Get-TwitterCredentials() {
  if ( $null -eq $g_creds ) {
    trap {
      Write-Error "ERROR: You must enter your Twitter credentials for PoshTweet to work!";
      continue;
    }
    $c = Get-Credential
    if ( $c ) {
      $user = $c.GetNetworkCredential().Username;
      $pass = $c.GetNetworkCredential().Password;
      $script:g_creds = New-Object System.Net.NetworkCredential -argumentList ($user, $pass);
    }
  }
  $script:g_creds;
}

The Status functions

Now that the credentials were out of the way, it was time to tackle the Status methods. These methods are a combination of HTTP GETs and POSTs that return an array of status entries. For those interested in the raw underlying XML that's returned, I've included the $raw parameter that, when set to $true, will not do a user-friendly display, but will dump the full XML response. This would be handy if you want to customize the output beyond what I've done.

#----------------------------------------------------------------------------
# public_timeline
#----------------------------------------------------------------------------
function Get-TwitterPublicTimeline() {
  param([bool]$raw = $false);
  $results = Execute-HTTPGetCommand "http://twitter.com/statuses/public_timeline.xml";
  Process-TwitterStatus $results $raw;
}

#----------------------------------------------------------------------------
# friends_timeline
#----------------------------------------------------------------------------
function Get-TwitterFriendsTimeline() {
  param([bool]$raw = $false);
  $results = Execute-HTTPGetCommand "http://twitter.com/statuses/friends_timeline.xml";
  Process-TwitterStatus $results $raw
}

#----------------------------------------------------------------------------
# user_timeline
#----------------------------------------------------------------------------
function Get-TwitterUserTimeline() {
  param([string]$username = $null, [bool]$raw = $false);
  if ( $username ) { $username = "/$username"; }
  $results = Execute-HTTPGetCommand "http://twitter.com/statuses/user_timeline$username.xml";
  Process-TwitterStatus $results $raw
}

#----------------------------------------------------------------------------
# show
#----------------------------------------------------------------------------
function Get-TwitterStatus() {
  param([string]$id, [bool]$raw = $false);
  if ( $id ) {
    $results = Execute-HTTPGetCommand ("http://twitter.com/statuses/show/" + $id + ".xml");
    Process-TwitterStatus $results $raw;
  }
}

#----------------------------------------------------------------------------
# update
#----------------------------------------------------------------------------
function Set-TwitterStatus() {
  param([string]$status);
  $encstatus = [System.Web.HttpUtility]::UrlEncode("$status");
  $results = Execute-HTTPPostCommand "http://twitter.com/statuses/update.xml" "status=$encstatus";
  Process-TwitterStatus $results $raw;
}

#----------------------------------------------------------------------------
# replies
#----------------------------------------------------------------------------
function Get-TwitterReplies() {
  param([bool]$raw = $false);
  $results = Execute-HTTPGetCommand "http://twitter.com/statuses/replies.xml";
  Process-TwitterStatus $results $raw;
}

#----------------------------------------------------------------------------
# destroy
#---------------------------------------------------------------------------- function Destroy-TwitterStatus() { param([string]$id = $null); if ( $id ) { Execute-HTTPPostCommand "http://twitter.com/statuses/destroy/$id.xml" "id=$id"; } } You may notice the Process-TwitterStatus function. Since there was a lot of duplicate code in each of these functions, I went ahead and implemented it in its own function below: function Process-TwitterStatus() { param([string]$sxml = $null, [bool]$raw = $false); if ( $sxml ) { if ( $raw ) { $sxml; } else { [xml]$xml = $sxml; if ( $xml.statuses.status ) { $stats = $xml.statuses.status; } elseif ($xml.status ) { $stats = $xml.status; } $stats | Foreach-Object -process { $info = "by " + $_.user.screen_name + ", " + $_.created_at; if ( $_.source ) { $info = $info + " via " + $_.source; } if ( $_.in_reply_to_screen_name ) { $info = $info + " in reply to " + $_.in_reply_to_screen_name; } "-------------------------"; $_.text; $info; }; "-------------------------"; } } } A few hurdles Nothing goes without a hitch, and I found myself pounding my head over why my POST commands were all getting HTTP 417 errors back from Twitter. A quick search brought up this post on Phil Haack's website as well as this Google Group discussing a change in Twitter's services in how they handle the Expect: 100-continue HTTP header. A simple setting on the ServicePointManager at the top of the script was all that was needed to get things working again. [System.Net.ServicePointManager]::Expect100Continue = $false; PoshTweet in Action So, now it's time to try it out. First you'll need to dot-source the script and then set your Twitter credentials. This can be done in your PowerShell $profile if you wish. Then you can access all of the included functions. Below, I'll call Set-TwitterStatus to update my current status and then Get-TwitterUserTimeline and Get-TwitterFriendsTimeline to get my current timeline as well as that of my friends. PS> . .\PoshTweet.ps1 PS> Set-TwitterCredentials PS> Set-TwitterStatus "Hacking away with PoshTweet" PS> Get-TwitterUserTimeline ------------------------- Hacking away with PoshTweet by joepruitt, Tue Dec 30, 12:33:04 +0000 2008 via web ------------------------- PS> Get-TwitterFriendsTimeline ------------------------- @astrout Yay, thanks! by mediaphyter, Tue Dec 30 20:37:15 +0000 2008 via web in reply to astrout ------------------------- RT @robconery: Headed to a Portland Nerd Dinner tonite - should be fun! http://bit.ly/EUFC by shanselman, Tue Dec 30 20:37:07 +0000 2008 via TweetDeck ------------------------- ... Things Left Todo As I said, this was implemented in an hour or so last night so it definitely needs some more work, but I believe I've got the Status methods pretty much covered. Next I'll move on to the other services of User, Direct Message, Friendship, Account, Favorite, Notification, Block, and Help when I've got time. I'd also like to add support for the "source" field. I'll need to set up a public-facing landing page for this library so the folks at Twitter will add it to their system. Once I get all the services implemented, I'll move forward with formalizing this as an application and submit it for consideration. Collaboration I've posted the source to this set of functions on the DevCentral wiki under PsTwitterApi. You'll need to create an account to get to it, but I promise it will be worth it! Feel free to contribute and add to it if you have the time. Everyone is welcome and encouraged to tear my code apart, optimize it, enhance it.
Just as long as it gets better in the process. B-).
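One nice side effect of the $raw switch described above is that it gives you a hook for your own formatting. The snippet below is not part of PoshTweet itself; it is just a quick sketch (assuming the functions above have been dot-sourced and credentials are set) that turns the raw friends_timeline XML into objects you can sort, filter, or export like anything else in PowerShell.

# Hypothetical example: build custom output from the raw XML instead of the
# canned display. Assumes PoshTweet.ps1 is dot-sourced and credentials are set.
[xml]$xml = Get-TwitterFriendsTimeline $true

# Project each status element into a few calculated properties
$xml.statuses.status |
    Select-Object @{n='User';e={$_.user.screen_name}},
                  @{n='Created';e={$_.created_at}},
                  @{n='Text';e={$_.text}} |
    Format-Table -AutoSize

From there it is a short hop to Sort-Object, Where-Object, or Export-Csv, which is exactly the kind of flexibility the raw XML option is meant to preserve.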
Creating An iControl PowerShell Monitoring Dashboard With Google Charts
PowerShell is a very extensible scripting language and the fact that it integrates so nicely with iControl means you can do all sorts of fun things with it. In this tech tip, I'll illustrate how to use just a couple of iControl method calls (3 to be exact) to create a load distribution dashboard for your desktop (with a little help from the Google Chart API). Usage The arguments for this application are the address, username, and password for your BIG-IP. param ( $g_bigip = $null, $g_uid = $null, $g_pwd = $null ); The main control flow then looks for the input parameters and, if they are not present, a usage message is displayed to the console indicating the required inputs. If the connection info is specified, then the standard Do-Initialize function is called, which checks whether the iControl snap-in is installed, and the Initialize-F5.iControl cmdlet is called to initialize the connection to the BIG-IP. If an error occurs during the connection, an error is logged and the application exits. function Write-Usage() { Write-Host "Usage: iControlDashboard.ps1 host uid pwd"; exit; } function Do-Initialize() { if ( (Get-PSSnapin | Where-Object { $_.Name -eq "iControlSnapIn"}) -eq $null ) { Add-PSSnapIn iControlSnapIn } $success = Initialize-F5.iControl -HostName $g_bigip -Username $g_uid -Password $g_pwd; return $success; } #------------------------------------------------------------------------- # Main Application Logic #------------------------------------------------------------------------- if ( ($g_bigip -eq $null) -or ($g_uid -eq $null) -or ($g_pwd -eq $null) ) { Write-Usage; } if ( Do-Initialize ) { Run-Dashboard } else { Write-Error "ERROR: iControl subsystem not initialized" Kill-Browser } Global Variables This application will make use of the Google Chart API to generate graphs, and as such we need a browser to render them in. Since we will be interacting with another process (in this case Internet Explorer), it is probably a good idea to shut down gracefully if an error occurs. A generic exception Trap is created to log the error and shut down the application properly. Trap [Exception] { Write-Host $("TRAPPED: " + $_.Exception.GetType().FullName); Write-Host $("TRAPPED: " + $_.Exception.Message); Kill-Browser Exit; } A few global variables are used to make the app more configurable. You can specify the title that comes up in the browser's header as well as the graph size for each report graph along with the chart type and polling interval. I opted for a pie chart, but other options are available that may or may not be to your liking. At this point I go ahead and create an empty browser window and point it to the about:blank page, giving us a context to manipulate the contents of the browser window. I make the window visible and set it to full-screen theater mode. $g_title = "iControl PowerShell Dashboard"; $g_graphsize = "300x150"; $g_charttype = "p"; $g_interval = 5; $g_browser = New-Object -com InternetExplorer.Application; $g_browser.Navigate2("About:blank"); $g_browser.Visible = $true; $g_browser.TheaterMode = $true; Browser Control The following functions control the browser and the data going into it. The Refresh-Browser function takes the HTML to display as input. The Document object is accessed from the InternetExplorer.Application object, and from there we can access the DocumentElement. Then we set the InnerHTML to the input parameter $html_data and that is displayed in the browser window.
#------------------------------------------------------------------------- # function Refresh-Browser #------------------------------------------------------------------------- function Refresh-Browser() { param($html_data); if ( $null -eq $g_browser ) { Write-Host "Creating new Browser" $g_browser = New-Object -com InternetExplorer.Application; $g_browser.Navigate2("About:blank"); $g_browser.Visible = $true; $g_browser.TheaterMode = $true; } $docBody = $g_browser.Document.DocumentElement.lastChild; $docBody.InnerHTML = $html_data; } #------------------------------------------------------------------------- # function Kill-Browser #------------------------------------------------------------------------- function Kill-Browser() { if ( $null -ne $g_browser ) { $g_browser.TheaterMode = $false; $g_browser.Quit(); $g_browser = $null; } } Main Application Loop The main logic for this application is a little infinite loop where we call the Get-Data function, refresh the browser with the newly acquired report, and sleep for the configured interval until the next poll occurs. function Run-Dashboard() { while($true) { #Write-Host "Requesting data..." $html_data = Get-Data; Refresh-Browser $html_data; Start-Sleep $g_interval; } } Generating the Report Here's where all the good stuff happens. The Get-Data function will make a few iControl calls (LocalLB.Pool.get_list(), LocalLB.PoolMember.get_all_statistics(), and LocalLB.PoolMember.get_object_status()) and from that generate an HTML report with charts built with the Google Chart API. The local variable $html_data is used to store the resulting HTML data that will be sent to Internet Explorer for display, and we start off the function by filling in the title and the start of the report table. Then the three previously mentioned iControl calls are made and the resulting values are stored in local variables for later reference. The main loop goes over each of the pools in the $MemberStatisticsA local array variable. A few hash tables and counters are created and then we loop over each pool member for the current pool we are processing. Entries are added to the local hash tables for total connections, current connections, bytes in, and status for later reference. Sums of the values in those hash tables are also kept so we can calculate percentages later on. At this point we use the hash tables to generate the report. Each numeric value is converted into a percentage, and chart variables are created to contain the data as well as the labels for the generated pie charts. Once all the number crunching has been performed, the actual chart images are specified in the $chart_total, $chart_current, and $chart_bytes variables, and the row in the report for the given pool is added to the $html_data variable.
function Get-Data() { # TODO - get connection statistics $now = [DateTime]::Now; $html_data = "<html> <head> <title>$g_title</title> </head> <body> <center><h1>$g_title</h1><br/><h2>$now</h2></center> <center><table border='0' bgcolor='#C0C0C0'><tr><td><table border='0' cellspacing='0' bgcolor='#FFFFFF'>"; $html_data += " <tr bgcolor='#C0C0C0'><th>Pool</th><th>Total Connections</th><th>Current Connections</th><th>Bytes In</th></tr>"; $charts_total = ""; $charts_current = ""; $charts_bytes = ""; $PoolList = (Get-F5.iControl).LocalLBPool.get_list() | Sort-Object; $MemberStatisticsA = (Get-F5.iControl).LocalLBPoolMember.get_all_statistics($PoolList) $MemberObjectStatusAofA = (Get-F5.iControl).LocalLBPoolMember.get_object_status($PoolList); # loop over each pool $i = 0; foreach($MemberStatistics in $MemberStatisticsA) { $hash_total = @{}; $hash_current = @{}; $hash_bytes = @{}; $hash_status = @{}; $sum_total = 0; $sum_current = 0; $sum_bytes = 0; $PoolName = $PoolList[$i]; # loop over each member $MemberStatisticEntryA = $MemberStatistics.statistics; foreach($MemberStatisticEntry in $MemberStatisticEntryA) { $member = $MemberStatisticEntry.member; $addr = $member.address; $port = $member.port; $addrport = "${addr}:${port}"; $StatisticA = $MemberStatisticEntry.statistics; $total = Extract-Statistic $StatisticA "STATISTIC_SERVER_SIDE_TOTAL_CONNECTIONS" [long]$sum_total += $total; $hash_total.Add($addrport, $total); $current = Extract-Statistic $StatisticA "STATISTIC_SERVER_SIDE_CURRENT_CONNECTIONS" $sum_current += $current; $hash_current.Add($addrport, $current); $bytes = Extract-Statistic $StatisticA "STATISTIC_SERVER_SIDE_BYTES_IN" [long]$sum_bytes += $bytes; $hash_bytes.Add($addrport, $bytes); $color = Extract-Status $MemberObjectStatusAofA[$i] $member; $hash_status.Add($addrport, $color); } $chd_t = ""; $chd_c = ""; $chd_b = ""; $chl_t = ""; $chl_c = ""; $chl_b = ""; $chdl_t = ""; $chdl_c = ""; $chdl_b = ""; $tbl_t = ""; $tbl_c = ""; $tbl_b = ""; # enumerate the total connections foreach($k in $hash_total.Keys) { $member = $k; $v_t = $hash_total[$k]; $v_c = $hash_current[$k]; $v_b = $hash_bytes[$k]; $color = $hash_status[$k]; $div = $sum_total; if ($div -eq 0 ) { $div = 1; } $p_t = ($v_t/$div)*100; $div = $sum_current; if ($div -eq 0 ) { $div = 1; } $p_c = ($v_c/$div)*100; $div = $sum_bytes; if ($div -eq 0 ) { $div = 1; } $p_b = ($v_b/$div)*100; if ( $chd_t.Length -gt 0 ) { $chd_t += ","; $chd_c += ","; $chd_b += ","; } $chd_t += $p_t; $chd_c += $p_c; $chd_b += $p_b; if ( $chl_t.Length -gt 0 ) { $chl_t += "|"; $chl_c += "|"; $chl_b += "|"; $chdl_t += "|"; $chdl_c += "|"; $chdl_b += "|"; } $chl_t += "$member"; $chl_c += "$member"; $chl_b += "$member"; $chdl_t += "$member - $v_t"; $chdl_c += "$member - $v_c"; $chdl_b += "$member - $v_b"; #$alt_t += "($member,$v_t)"; #$alt_c += "($member,$v_c)"; #$alt_b += "($member,$v_b)"; $tbl_t += "<tr><td bgcolor='$color'>$member</td><td align='right'>$v_t</td></tr>"; $tbl_c += "<tr><td bgcolor='$color'>$member</td><td align='right'>$v_c</td></tr>"; $tbl_b += "<tr><td bgcolor='$color'>$member</td><td align='right'>$v_b</td></tr>"; } if ( $sum_total -gt 0 ) { $chart_total = "<img src='http://chart.apis.google.com/chart? chs=$g_graphsize &chd=t:$chd_t &cht=$g_charttype &chdl=$chl_t' alt='Total Connections for pool $PoolName' />"; } else { $chart_total = ""; } if ( $sum_current -gt 0 ) { $chart_current = "<img src='http://chart.apis.google.com/chart? 
chs=$g_graphsize &chd=t:$chd_c &cht=$g_charttype &chdl=$chl_c' alt='Current Connections for pool $PoolName' />"; } else { $chart_current = ""; } if ( $sum_bytes -gt 0 ) { $chart_bytes = "<img src='http://chart.apis.google.com/chart? chs=$g_graphsize &chd=t:$chd_b &cht=$g_charttype &chdl=$chl_b' alt='Incoming Bytes for pool $PoolName' />"; } else { $chart_bytes = ""; } if ( $i -gt 0 ) { $html_data += "<tr><td colspan='4'><hr/></td></tr>"; } $html_data += " <tr><th nowrap='nowrap'>$PoolName</th> <td valign='bottom'>$chart_total<br/> <center><table border='1'><tr><th>Member</th><th>Value</th></tr>$tbl_t</table> </td> <td valign='bottom'>$chart_current<br/> <center><table border='1'><tr><th>Member</th><th>Value</th></tr>$tbl_c</table> </td> <td valign='bottom'>$chart_bytes<br/> <center><table border='1'><tr><th>Member</th><th>Value</th></tr>$tbl_b</table> </td> </tr>"; $i++; } $html_data += "</table></td></tr></table></body></html>"; return $html_data; } Utility Functions It's always useful to extract common code into utility functions, and this application is no exception. In here I've got a Convert-To64Bit function that takes the high and low 32 bits of a 64-bit number and does the math to convert them into a native 64-bit value. The Extract-Statistic function takes as input a Common.Statistic array along with a type to look for in that array. It loops over the array of Statistic values and returns the 64-bit value of the match, if one is found. And finally the Extract-Status function is used to look through the returned value from the LocalLB.PoolMember.get_object_status iControl method for a specific pool member. This function returns a color to display in the generated HTML table: green for good, red for bad. The only way a green will show up is if both its availability_status and enabled_status values are AVAILABILITY_STATUS_GREEN and ENABLED_STATUS_ENABLED respectively. function Convert-To64Bit() { param($high, $low); $low = [Convert]::ToString($low,2).PadLeft(32,'0') if($low.length -eq "64") { $low = $low.substring(32,32) } return [Convert]::ToUint64([Convert]::ToString($high,2).PadLeft(32,'0')+$low,2); } function Extract-Statistic() { param($StatisticA, $type); $value = -1; foreach($Statistic in $StatisticA) { if ( $Statistic.type -eq $type ) { $value = Convert-To64Bit $Statistic.value.high $Statistic.value.low; break; } } return $value; } function Extract-Status() { param($MemberObjectStatusA, $IPPortDefinition); $color = "#FF0000"; foreach($MemberObjectStatus in $MemberObjectStatusA) { if ( ($MemberObjectStatus.member.address -eq $IPPortDefinition.address) -and ($MemberObjectStatus.member.port -eq $IPPortDefinition.port) ) { $availability_status = $MemberObjectStatus.object_status.availability_status; $enabled_status = $MemberObjectStatus.object_status.enabled_status; if ( ($availability_status -eq "AVAILABILITY_STATUS_GREEN") -and ($enabled_status -eq "ENABLED_STATUS_ENABLED" ) ) { $color = "#00FF00"; } } } return $color; } Running The Application After running the application from the console, Internet Explorer will be launched in Theater Mode (full screen) and will look something like this. My system is somewhat inactive, so you'll see that some of the charts are missing. This was by design in that charts with no data are not very informative. Assuming you have traffic across all your pools, charts will be created. Extending This Application This application merely looks at load distribution and state for members within the pools.
It would be trivial to change or extend the types of charts presented. iControl provides you with all the data you need to build your own monitoring dashboard, regardless of the types of metrics you would like to keep an eye on. For the full application, check out the PsiControlDashboard entry in the iControl CodeShare.
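As a concrete illustration of that point, here is a rough sketch (mine, not part of the original script) of what adding an outbound-bytes chart might look like. STATISTIC_SERVER_SIDE_BYTES_OUT is one of the standard iControl statistic types; the hypothetical $hash_bytes_out, $sum_bytes_out, $chd_bo, and $chl_bo variables would be declared and accumulated the same way the existing total, current, and bytes-in counterparts are inside Get-Data.

# Inside the per-member loop of Get-Data, next to the existing counters
$bytes_out = Extract-Statistic $StatisticA "STATISTIC_SERVER_SIDE_BYTES_OUT"
[long]$sum_bytes_out += $bytes_out;
$hash_bytes_out.Add($addrport, $bytes_out);

# Later, once $chd_bo (data) and $chl_bo (labels) have been built up like the
# other chart strings, emit a fourth chart image for the pool's table row
if ( $sum_bytes_out -gt 0 ) {
    $chart_bytes_out = "<img src='http://chart.apis.google.com/chart?chs=$g_graphsize&chd=t:$chd_bo&cht=$g_charttype&chdl=$chl_bo' alt='Outgoing Bytes for pool $PoolName' />";
} else {
    $chart_bytes_out = "";
}

The same pattern works for any other Common.Statistic type the pool member statistics expose; only the hash tables, the percentage math, and one more table column in the HTML change.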
Single Sign-On with Kerberos Constrained Delegation
Hello dear readers, today I would like to talk about another single sign-on method: Kerberos Constrained Delegation. What does that mean? What happens here is that the user logs on to the Access Policy Manager (APM). Various methods can be used for this, such as client-certificate-based logon, username/password, a combination of the two, or other methods. The decisive point is that the user authenticates and the Access Policy Manager (APM) then obtains a ticket for that user from the Key Distribution Center (KDC). This ticket is added to the user's request so that Kerberos authentication on the backend server succeeds. This approach is frequently used when users access from the "external" network and therefore have no way of obtaining a ticket from the KDC themselves. For the BIG-IP to obtain a ticket on the user's behalf at all, however, a corresponding delegation account must be created on the Domain Controller (DC). Windows administrators will probably feel right at home here; for everyone else, here are a few screenshots that will hopefully help with this configuration: The "Advanced View" in "Active Directory Users and Computers" makes the Attribute Editor available under User Properties. Now we create a user account: User logon name = host/apm.example.com User logon name (pre-Windows 2000) = apm.example The servicePrincipalName must be added on the DC before switching to the user's Delegation tab. This can be done with the "setspn" tool or with "adsiedit": run adsiedit.msc, select the user, choose Properties, select the servicePrincipalName entry, and add "host/apm.example.com". Alternatively, you can add the principal name via the Attribute Editor in the user properties. If you now close the property window and open it again, the Delegation tab becomes available. Here you can grant the user delegation rights to a specific service. In my example that is "WIN2008R2.example.com", which, by the way, is the name of my backend server. If you have several, they must all be added. Under the Account Options, double-check that no account options are set. The delegation user only needs to be a member of "Domain Users". To verify that there are no conflicts in the Kerberos configuration, you can run "setspn -x". An important point with Kerberos is that all DNS entries are set correctly: please make sure every address and name used in this context resolves in both directions; that can save you a lot of debugging. Also very important: time! NTP should definitely be configured on the BIG-IP and be in sync with the DC, and DNS must of course work as well. Let's briefly summarize: we have now configured a user on the DC that is allowed to request a Kerberos ticket for a specific service on behalf of another user. That is not much yet, so let's move on to the BIG-IP configuration. By the way, I'll leave the Kerberos configuration of IIS entirely to the Windows admin for now ;-). First we set up a Kerberos SSO configuration on the BIG-IP: the matching option can be found under "SSO Configurations". What do we need to keep in mind here? I entered the KDC in my setup; that is not strictly required.
In practice, though, it is a bit faster because the KDC does not have to be discovered first. Names can of course be used here as well. The SPN pattern is always the interesting part. By default, that is, if nothing is entered, HTTP/%s@REALM is used, where %s stands for the name of the selected backend server. In short: the request arrives at the BIG-IP, authentication is performed, and before the request is passed on to the backend server, that server has to be selected by the load balancing method. The IP address of the target server is resolved via a DNS reverse lookup, and the resulting name is used as the SPN. That of course also means that the corresponding names have to be added to the delegation account (see above). Alternatively, instead of "host-based" access you can work with "domain account-based" access and would enter something of the form "http/www.example.com" under the SPN pattern. Naturally, the configuration of the IIS servers has to be adjusted accordingly. As the credential source, you must use the variables that will later actually contain the corresponding information. Let's move on to the next step: creating an access policy. Here we follow the usual procedure: under Access Profiles we click the plus sign, give the policy a name, add a language, and do not forget to select our newly created Kerberos SSO object under "SSO Configuration". For a first test that should be enough; timeout values and the like can be adjusted later if necessary. Now we open the Visual Policy Editor (VPE). I already showed user authentication against Active Directory in a previous blog post, so this time let's have the user identify himself with a client certificate. For this to work, we have to add one more property to our client SSL profile: a client certificate is either required or, as shown in this screenshot, requested. It is then validated against the corresponding CA. You are surely asking why I did not set the presence of a valid client certificate to "require". Well, nothing speaks against that, but I want to handle that check in the access policy, and that is exactly where we are heading now. The first thing I add to my policy is the "On-Demand Cert Auth" object. At this point the client certificate must be present; by the way, an SSL rehandshake is performed here. If the client certificate is valid, the user is authorized to access the backend server. Before that, however, we still need a Kerberos ticket for him, which means we have to extract the username from the certificate. It can be located in different fields of the certificate, but with the help of the "Variable Assign" object and a little TCL we can handle pretty much anything flexibly. So first we add the "Variable Assign" object to our "Successful" branch. With Variable Assign we can create new variables or modify existing ones. Remember our Kerberos SSO configuration: there we specified that the username should be taken from the variable session.logon.last.username and the domain from session.logon.last.domain. That means we now have to make sure these variables are populated accordingly.
A look at the session variables that already exist after a first login reveals that, in my case, the username is stored in the variable session.ssl.cert.x509extension and now has to be extracted from that rather long line. Conveniently, my username is contained in my mail address, which is delimited by the characters UPN<sven.mueller@example.com>, so I can quickly filter out the mail address with a little regular expression. To explain: set f1 [mcget {session.ssl.cert.x509extension}] stores the content of the session variable in the temporary variable f1, which is used in the following steps. regexp {(.*)UPN<(.*)>(.*)} $f1 matched sub1 sub2 sub3 applies a regular expression and stores everything that matches inside the first pair of parentheses in the variable sub1; that is all characters (.*) up to the string "UPN<". Everything that follows, up to the character ">" (which is the mail address), is stored in the variable sub2, and the rest goes into the variable sub3. All of this is applied to the content of the variable f1, i.e. session.ssl.cert.x509extension. return $sub2 assigns the content of the variable sub2 (the mail address) to the session variable session.logon.custom.username. So far so good. Now the username, i.e. the part before the "@" character, still has to be extracted, and the domain part also has to be written into a session variable. So we add two more entries to the Variable Assign: session.logon.last.username expr {[lindex [split [mcget {session.logon.custom.username}] "@"] 0]} Here we split the content of session.logon.custom.username into a list, using the "@" character as the separator, so our username ends up in element 0 of the list. session.logon.last.domain expr {[lindex [split [mcget {session.logon.custom.username}] "@"] 1]} Element 1 of the list contains the domain, and we assign that to a session variable as well. Perfect, everything is done: apply the policy, attach it to the appropriate virtual server, and test. :-) If it does not work right away, I will cover debugging in my next post. Until then, good luck and have fun with the configuration. Your F5 blogger, Sven Müller
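As a footnote to the walkthrough above: the delegation account does not have to be created through the GUI and adsiedit. The sketch below is my own rough PowerShell equivalent, assuming the ActiveDirectory module (RSAT) is available and reusing the example names apm.example, host/apm.example.com, and WIN2008R2.example.com. Adapt it to your own environment and double-check the Delegation tab afterwards; depending on your setup you may also need to allow protocol transition ("use any authentication protocol").

# Rough equivalent of the GUI/adsiedit steps above (assumes the ActiveDirectory
# module and sufficient AD permissions; names follow the article's example)
Import-Module ActiveDirectory

# Create the delegation account
New-ADUser -Name "apm.example" -SamAccountName "apm.example" `
    -UserPrincipalName "host/apm.example.com" `
    -AccountPassword (Read-Host -AsSecureString "Password") -Enabled $true

# Add the SPN (the setspn/adsiedit step)
Set-ADUser "apm.example" -Add @{servicePrincipalName = "host/apm.example.com"}

# Allow constrained delegation to the backend server's HTTP service
Set-ADUser "apm.example" -Add @{'msDS-AllowedToDelegateTo' = @("HTTP/WIN2008R2.example.com", "HTTP/WIN2008R2")}

# Verify what was written before testing
Get-ADUser "apm.example" -Properties servicePrincipalName, 'msDS-AllowedToDelegateTo' |
    Format-List Name, servicePrincipalName, msDS-AllowedToDelegateTo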
Health Monitors for Exchange 2010
I was recently asked to develop health monitors on my 10.2.4HF3 LTMs for our Exchange 2010 environment. Two of the three monitors were a bit challenging, so I wanted to share what I developed. The monitors have been sanitized to protect our environment - modify them to fit yours. *** Active Sync *** This monitor was challenging until I stumbled across this statement for versions 10.2.x and 11.x in SOL2167: "When Basic Authentication is enabled by configuring a User Name and Password in the monitor definition, the system inserts the Authorization header and a terminating double CR/LF sequence (0x0d 0x0a 0x0d 0x0a) after the last character in the Send String." Once I read that, I removed my standard trailing "\r\n\r\n" sequence at the end of the Send String, and the monitor immediately started working. It saved me from having to use an External monitor. You will need to insert a username, password, and host header value in the send string to fit your environment:
ltm monitor https active-sync {
    cipherlist "DEFAULT:+SHA:+3DES:+kEDH"
    compatibility "enabled"
    defaults-from https
    interval 10
    password "<password>"
    recv "MS-ASProtocolCommands: Sync,SendMail,SmartForward,SmartReply,GetAttachment,GetHierarchy,CreateCollection,DeleteCollection,MoveCollection,FolderSync"
    send "OPTIONS /Microsoft-Server-ActiveSync/ HTTP/1.1\r\nHost: <host header value>"
    time-until-up 0
    timeout 31
    username "<domain>\<username>"
}
*** Outlook Web Access *** This monitor was straightforward. You will need to replace the FQDN in both the URI and the Host: header in the Send String:
ltm monitor https outlook-web-access {
    cipherlist "DEFAULT:+SHA:+3DES:+kEDH"
    compatibility "enabled"
    defaults-from https
    interval 10
    recv "OutlookSession="
    send "GET /owa/auth/logon.aspx?url=https://<hostname>/owa/&reason=0 HTTP/1.1\r\nUser-Agent: Mozilla/4.0\r\nHost: <hostname>\r\n\r\n"
    time-until-up 0
    timeout 31
}
*** Outlook Anywhere *** This monitor was complicated enough that it could not be done by any of the built-in monitor types, so I had to run it with curl as an External monitor. Here is the LTM monitor definition:
ltm monitor external outlook-anywhere {
    defaults-from external
    interval 10
    run "outlook-anywhere.sh"
    time-until-up 0
    timeout 31
    user-defined debug "0"
    user-defined debugfile "/shared/tmp/outlook-anywhere_debug.log"
    user-defined receivestring "200 Success"
    user-defined username "<domain>\<username>"
}
In order to get this monitor to run in your environment you must update the username variable. In our environment, I had to precede the username with our domain name in order for it to return successfully. The outlook-anywhere.sh script looks like this. It needs to be placed in /config/monitors with execute permission:
#!/bin/bash
#
# Exchange 2010 Outlook Anywhere external health monitor
#
# Syntax: /config/monitors/outlook-anywhere.sh
#
# Author:
# Date:
#
# Important Notes:
#  * Username must be preceded by the "<domain>\" string
#  * There should be four Variables configured in the monitor properties:
#    - USERNAME: the user credentials used to perform the check
#    - RECEIVESTRING: The string to look for in a successful response
#    - DEBUG: Enables output debugging
#    - DEBUGFILE: File which stores debug output (if DEBUG is enabled)
#  * The password for USERNAME should only be stored in this file.  This way,
#    it is not accessible via TMSH commands or by viewing the LTM config file
#  * To execute this script manually, uncomment the variables and execute
#    the shell script.  The output will be stored in DEBUGFILE.
#  * CAUTION: If you execute this script manually, make sure you comment
#    the USERNAME, RECEIVESTRING, DEBUG, and DEBUGFILE variables (do not
#    comment the PASSWORD variable).  If you do not comment these variables,
#    they will override the variables configured in the Monitor properties
#
# Revisions:
#

# Do not comment out the PASSWORD variable
PASSWORD='<password>'

# Uncomment these variables temporarily if you want to execute the
# script manually
#USERNAME='<domain>\<username>'
#RECEIVESTRING='200 Success'
#DEBUG=1
#DEBUGFILE=/shared/tmp/outlook-anywhere_debug.log

# Remove IPv6/IPv4 compatibility prefix (LTM passes addresses in IPv6 format)
IP=`echo ${1} | sed 's/::ffff://'`
PORT=${2}

# Create a PID file
PIDFILE="/var/run/`basename ${0}`.${IP}_${PORT}.pid"

# Kill off the last instance of this monitor if hung and log current pid
if [ -f $PIDFILE ]
then
   kill -9 `cat $PIDFILE` > /dev/null 2>&1
fi
echo "$$" > $PIDFILE


if [ $DEBUG -eq 0 ]
then

   curl -v -k --request RPC_IN_DATA -A MSRPC -u "${USERNAME}:${PASSWORD}" --ntlm -H "Host: <hostname>" "https://${IP}/rpc/rpcproxy.dll?:6001" 2>&1 | grep -ic "${RECEIVESTRING}" > /dev/null 2>&1

else

   echo "*******************" 2>&1 > "${DEBUGFILE}"
   echo -e "curl -v -k --request RPC_IN_DATA -A MSRPC -u '${USERNAME}:${PASSWORD}' --ntlm -H \"Host: <hostname>\" \"https://${IP}/rpc/rpcproxy.dll?:6001\" 2>&1 | grep '200 Success'\n" 2>&1 >> "${DEBUGFILE}"
   echo "USERNAME: ${USERNAME}" 2>&1 >> "${DEBUGFILE}"
   echo "PASSWORD: ${PASSWORD}" 2>&1 >> "${DEBUGFILE}"
   echo "RECEIVESTRING: ${RECEIVESTRING}" 2>&1 >> "${DEBUGFILE}"
   echo "DEBUG: ${DEBUG}" 2>&1 >> "${DEBUGFILE}"
   echo "DEBUGFILE: ${DEBUGFILE}" 2>&1 >> "${DEBUGFILE}"
   echo -e "*******************\n" 2>&1 >> "${DEBUGFILE}"

   curl -v -k --request RPC_IN_DATA -A MSRPC -u "${USERNAME}:${PASSWORD}" --ntlm -H "Host: <hostname>" "https://${IP}/rpc/rpcproxy.dll?:6001" 2>&1 | tee -a "${DEBUGFILE}" | grep -ic "${RECEIVESTRING}" > /dev/null 2>&1
fi

# Retain the return code of the grep command.
EXITCODE=$?

# Need to remove PIDFILE here because the LTM terminates the script after
# receiving anything in STDOUT
rm -f $PIDFILE

if [ $EXITCODE -eq 0 ]
then
   echo "UP"
fi
In order to get this script to run in your environment you must update the PASSWORD variable and insert your <hostname> into the Host: header and the URL in all three curl command references. To execute the script manually, you must define the variables that are normally passed by the LTM by uncommenting them near the top of the script. If you do uncomment these variables, make sure you comment them out when you are finished or they will override the variables in the Monitor definition: USERNAME, RECEIVESTRING, DEBUG (if desired), and DEBUGFILE (if DEBUG desired).
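When one of these monitors marks a member down and the reason is not obvious, it can help to reproduce the request from a workstation and compare the response against the receive string. The following is not part of the monitors themselves, just a hedged PowerShell sketch (hostname, account, and password are placeholders) that sends the same ActiveSync OPTIONS request the first monitor uses and prints the MS-ASProtocolCommands header the receive string keys off of.

# Sketch: reproduce the ActiveSync monitor request from a client machine.
# <hostname>, <domain>\<username>, and <password> are placeholders.
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }  # lab use only: skip cert validation

$user = "<domain>\<username>"
$pass = "<password>"

$req = [System.Net.WebRequest]::Create("https://<hostname>/Microsoft-Server-ActiveSync/")
$req.Method = "OPTIONS"

# Send Basic auth up front, mirroring the Authorization header the LTM inserts
$basic = [Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes("${user}:${pass}"))
$req.Headers.Add("Authorization", "Basic $basic")

$resp = $req.GetResponse()
"MS-ASProtocolCommands: " + $resp.Headers["MS-ASProtocolCommands"]
$resp.Close()

If the header comes back with the command list from the recv string, the send/recv pair is sound and the problem is more likely the credentials, the host header value, or the pool member itself.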