service provider
F5 SAML SP for a portion of a website
Let's say I have the following setup: a website called test.example.com, and an access policy called test_apol with SAML Auth. If I assign the test_apol access policy to the test.example.com VIP, all of test.example.com becomes the Service Provider (SP) and is protected by SAML Auth. Can I, and if so how, place only a portion of the website, i.e. a selected list of HTTP paths/URIs, behind SAML Auth instead of the entire website? For example, test.example.com/private would require SAML Auth, while everything else would have no restrictions. Just off the top of my head, I was thinking about placing an iRule event in front of SAML Auth and, inside the iRule, filtering which HTTP paths/URIs are sent to SAML for authentication and which ones go straight to the back-end servers without any authentication. However, I don't know whether this is the best approach, or whether there is a better, more elegant solution. Any ideas, suggestions, or recommendations are much appreciated.
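(For reference, one way to implement the filtering idea described above is an iRule that turns the access policy off for unprotected paths and leaves it active for the protected ones. The sketch below is illustrative only: it assumes the test_apol policy stays attached to the virtual server, that /private is the only protected path, and that the APM ACCESS::enable/ACCESS::disable iRule commands are available on your version.)

when HTTP_REQUEST {
    # Enforce the SAML-protected access policy only for /private;
    # everything else goes straight to the pool with no authentication.
    if { [HTTP::path] starts_with "/private" } {
        ACCESS::enable
    } else {
        ACCESS::disable
    }
}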
The Drive Towards NFV: Creating Technologies to Meet Demand

It is interesting to see what is happening in the petroleum industry over time. I won't get into the political and social aspects of the industry or this will become a 200-page dissertation. What is interesting to me is how the petroleum industry has developed new technologies and uses these technologies in creative ways to gain more value from the resources that are available. Drilling has gone from the simple act of poking a hole in the ground using a tool, to the use of drilling fluids, 'mud', to optimize drilling performance for specific situations, to non-vertical directional drilling, where they can actually drill horizontally, and to the use of fluids and gases to maximize the extraction of resources through a process called 'fracking'. What the petroleum industry has done is looked for and created technologies to extract further value from known resources that would not have been available with the tools previously at hand.

We see a similar evolution of the technologies used and value extracted by the communications service providers (CSPs). Looking back, CSPs usually delivered a single service, such as voice, over a dedicated physical infrastructure. Then it became important to deliver data services, and they added a parallel infrastructure to deliver the video content. As costs started to become prohibitive to continue to support parallel content delivery models, the CSPs started looking for ways to use the physical infrastructure as the foundation and use other technologies to drive both voice and data to the customer. Frame relay (FR) and asynchronous transfer mode (ATM) technologies were created to allow for the separation of the traffic from a layer 2 (data link) perspective. The CSP is extracting more value from their physical infrastructure by delivering multiple services over it.

Then the Internet came and things changed again. Customers wanted Internet access in addition to the voice and video services they already received. The CSPs evolved, yet again, and started looking at layer 3 (IP) differentiation and laid this technology on their existing FR and ATM networks. Today, mobile and fixed service providers are discovering that managing the network at the layer 3 level is no longer enough to deliver services to their customers, differentiate their offerings, and, most importantly, support the revenue cost model as they continue to build and evolve their networks to new models such as 4G LTE wireless and as customer usage patterns change. Voice services are not growing while data services are increasing at an explosive rate. Also, the CSPs are finding that much of their legacy revenue streams are being diverted to over-the-top providers that deliver content from the Internet and do not deliver any revenue or value to the CSP.

There is Value in that Content

The CSPs are moving up the OSI network stack and looking to find value in the layer 4 through 7 content, delivering services that enhance specific types of content and allow subscribers to gain additional value through value added services (VAS) that can be targeted towards the subscribers and the content. This means that new technologies such as content inspection and traffic steering are necessary to leverage this function. Unfortunately, there is a non-trivial cost to giving the CSP the capability to deliver content- and subscriber-aware services. These services require significant memory and computing resources.
To offset these costs, as well as to introduce a more flexible, dynamic network infrastructure that is able to adapt to new services and evolving technologies, a consortium of CSPs has formed the Network Functions Virtualization (NFV) technology working group. As I mentioned in a previous blog, NFV is designed to virtualize network functions such as the MME, SBC, SGSN/GGSN, and DPI onto an open hardware infrastructure using commercial, off-the-shelf (COTS) hardware. In addition, VAS solutions can leverage this architecture to enhance the customer experience. By using COTS hardware and virtual/software versions of these functions, the CSP gains a cost benefit and the network becomes more flexible and dynamic.

It is also important to remember that one of the key components of the NFV standard is to deliver a mechanism to manage and orchestrate all of these virtualized elements while tying the network elements more closely to the business needs of the operator. Since the services are deployed in a flexible and dynamic way, it becomes possible to deliver a mechanism to orchestrate the addition or removal of resources and services based on network analytics and policies. This flexibility allows operators to add and remove services and adjust capacity as needed, without the need for additional personnel and time for coordination. An agile infrastructure enables operators to roll out new services more quickly to meet evolving market demands, and also to remove services which are not contributing to the company's bottom line or delivering a measurable benefit to the customer quality of experience, with minimal impact to the infrastructure or investment.

Technology to Extract Content and Value

Operators need to consider the four key elements to making the necessary application defined network (ADN) successful in an NFV-based architecture: Virtualization, Abstraction, Programmability, and Orchestration. Virtualization provides the foundation for that flexible infrastructure, which allows for the standardization of the hardware layer as well as being one of the key enablers for dynamic service provisioning. Abstraction is a key element because operators need to be able to tie their network services up into the application and business services they are offering to their customers, enabling their processes and the necessary orchestration. Programmability of the network elements and the NFV infrastructure ensures that the components being deployed can not only be customized and successfully integrated into the network ecosystem, but adapted as the business needs and technology change. Orchestration is the last key element. Orchestration is where operators will get some of their largest savings, by being able to introduce and remove services more quickly and more broadly through automating service enablement on their network. This enables operators to adjust more quickly to changing market needs while "doing more with less".

As these CSPs look to introduce NFV into their architectures, they need to consider these elements and look for vendors which can deliver these attributes. I will discuss each of these features in more detail in upcoming blog posts. We will look at how these features are necessary to deliver the NFV vision and what this means to the CSPs who are looking to leverage the technologies and architectures surrounding the drive towards NFV.
Ultimately, CSPs want an NFV orchestration system enabling the network to add and remove service capacity, on demand and without human intervention, as traffic ebbs and flows to those services. This allows the operator to reduce their overall service footprint by re-using infrastructure for different services based upon their needs. F5 is combining these attributes in innovative ways to deliver solutions that enable operators to leverage the NFV design. Demo of F5 utilizing NFV technologies to deliver an agile network architecture: Dynamic Service Availability through VAS bursting.
Creating a proper RADIUS Accounting-Response packet in iRules

If you do a lot of work with RADIUS messages being sent to your BIG-IP so that you can get information from another network node in the system, you are going to need to respond back to that node in a correct and proper way, so that you don't confuse it or otherwise fill up its error log with 'improper response' messages. I was working on just that kind of iRule, and got it working. Here's the code:

 1:
 2: # RADIUS Acct-Resp packet
 3: #
 4: # input variables:
 5: #   $rId -> RADIUS Request ID field
 6: #   $rAuth -> RADIUS Request Authorization field
 7: #   $static::SHARED_SECRET -> RADIUS shared secret of the 2 nodes.
 8:
 9: UDP::drop ;# kill the incoming packet, don't need it.
10: set RADcode 5 ;# RADIUS Accounting-Response Type code.
11: set md5hash [ format "05%02x00%02x" $rId 20 ]
12: set md5hash $md5hash$rAuth$static::SHARED_SECRET
13: set hash [ binary format H* $md5hash ]
14: set respAuth [md5 $hash]
15: set payldhdr [ binary format ccS $RADcode $rId 20 ]
16: set payload $payldhdr$respAuth
17: UDP::respond $payload

So the RADIUS Accounting-Response packet has, at a minimum, four fields: RADIUS Type code (1 byte), ID number (1 byte), Packet Length (2-byte integer), and the Authenticator (16 bytes). Getting the response authenticator correct is really the only tricky part.

Line 5: The RADIUS Request ID field comes from the incoming packet. You can retrieve this value earlier in the iRule using a line of code like this: set rId [RADIUS::id]. Of course, this value can also be pulled out at the same time some other important fields are decoded, using the sample code for Line 6 below.

Line 6: The RADIUS Request Authorization field in the incoming packet is an MD5 hash of most of the incoming packet. To get this value, you will need to do a binary scan of the UDP packet:

set payload [UDP::payload]
binary scan $payload ccSH32a* rType rId rLen rAuth rPkt

In this example, we also pull out the Type Code (rType), the packet ID (rId), the length of the overall packet, the Authorization field that we are looking for, and the rest of the packet, which should be all the AVP fields.

Line 9: We don't need (and usually don't want) the RADIUS Request packet going on from the BIG-IP, so we just drop the packet to prevent this.

Line 10: 0x05 is the RADIUS Type Code for an Accounting-Response packet.

Line 11: Here's where we start working on our response authenticator. We are going to set up a hex string (which is hex characters displayed as a string; e.g., "05140016" is the four bytes 0x05, 0x14, 0x00, 0x16), pack all of our values into it, convert it to real binary, run the MD5 hash on the result, and use the MD5 checksum result in the response packet. This line sets the Type Code, ID, and Length (hard-coded to 20 bytes).

Line 12: Using the hex string from the last line, this adds the original Request Authenticator and the shared secret (which should already be in hex-string format). We now have our complete hex string that needs to be run through the MD5 checksum function.

Line 13: This line simply converts the hex string to a binary value.

Line 14: We get our valid Authenticator for the response.

Line 15: Now we create the actual response packet. This line sets the first three fields: Type Code, ID, and Length.

Line 16: We add the header and the response Authenticator, and we have our RADIUS Accounting-Response packet.

Line 17: Send the response back.
The RADIUS Accounting-Response protocol is documented in RFC 2866, Section 4.2.
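Going the other direction, RFC 2866 defines the Response Authenticator as MD5(Code + ID + Length + Request Authenticator + Attributes + Shared Secret), which is exactly what lines 11 through 14 above construct. If your BIG-IP were instead acting as the RADIUS client and receiving Accounting-Response packets from a pool of servers, the same construction could be used to validate them. The fragment below is only a rough sketch under the same assumptions as above ($rAuth holds the original Request Authenticator as a hex string, $static::SHARED_SECRET is the hex-encoded shared secret) and assumes the response carries no attributes:

when SERVER_DATA {
    # Pull the fixed header fields out of the Accounting-Response.
    binary scan [UDP::payload] cH2SH32 code ident len respAuth
    # Rebuild MD5(Code + ID + Length + RequestAuth + Secret) and compare.
    set hashinput [format "%02x%s%04x" [expr { $code & 0xff }] $ident [expr { $len & 0xffff }]]
    append hashinput $rAuth$static::SHARED_SECRET
    binary scan [md5 [binary format H* $hashinput]] H* calcAuth
    if { $calcAuth ne $respAuth } {
        # Authenticator mismatch - treat the packet as spoofed or corrupted.
        drop
    }
}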
DNSSEC: Is Your Infrastructure Ready?

A few months ago, we teamed with Infoblox for a DNSSEC webinar. Jonathan George, F5 Product Marketing Manager, leads with myself and Cricket Liu of Infoblox as background noise. He's a blast as always and certainly knows his DNS. So, learn how F5 enables you to deploy DNSSEC quickly and easily into an existing GSLB environment with BIG-IP Global Traffic Manager (GTM). BIG-IP GTM streamlines encryption key generation and distribution by dynamically signing DNS responses in real-time. Running time: 49:20

ps

Resources:
F5 Web Media (http://www.f5.com/news-press-events/web-media/)
F5 YouTube Channel (http://www.youtube.com/user/f5networksinc)
BIG-IP GTM (http://www.f5.com/products/big-ip/global-traffic-manager.html)
DNSSEC: The Antidote to DNS Cache Poisoning and Other DNS Attacks (whitepaper) (http://www.f5.com/pdf/white-papers/dnssec-wp.pdf) | Audio
Cricket on DNS (http://www.cricketondns.com)
Infoblox YouTube Channel (http://www.youtube.com/user/InfobloxInc)

How is SDN disrupting the way businesses develop technology?
You must have read so much about software-defined networking (SDN) by now that you probably think you know it inside and out. However, such a nascent industry is constantly evolving and there are always new aspects to discover and learn about. While much of the focus on SDN has focused on the technological benefits it brings, potential challenges are beginning to trouble some SDN watchers. While many businesses acknowledge that the benefits of SDN are too big to ignore, there are challenges to overcome, particularly with the cultural changes that it brings. In fact, according to attendees at the Open Networking Summit (ONS) recently the cultural changes required to embrace SDN outweigh the technological challenges. One example, outlined in this TechTarget piece, is that the (metaphorical) wall separating network operators and software developers needs to be torn down; network operators need coding skills and software developers will need to be able to program networking services into their applications. That’s because SDN represents a huge disruption to how organisations develop technology. With SDN, the speed of service provisioning is dramatically increased; provisioning networks becomes like setting up a VM... a few clicks of the button and you’re done. This centralised network provision means the networking element of development is no longer a bottleneck; it’s ready and available right when it’s needed. There’s another element to consider when it comes to SDN, tech development and its culture. Much of what drives software-defined networking is open source, and dealing with that is something many businesses may not have a lot of experience with. Using open source SDN technologies means a company will have to contribute something back to the community - that’s how open source works. But for some that may prove to be a bit of an issue: some SDN users such as banks or telecoms companies may feel protective of their technology and not want is source code to be released to the world. But that is the reality of the open source SDN market, so it is something companies will have to think carefully about. Are the benefits of SDN for tech development worth going down the open source route? That’s a question only the companies themselves can answer. Software-defined networking represents a huge disruption to the way businesses develop technology. It makes things faster, easier and more convenient during the process and from a management and scalability point of view going forward. There will be challenges - there always are when disruption is on the agenda - but if they can be overcome SDN could well usher in a new era of technological development.1KViews0likes6CommentsRADIUS Load Balancing with iRules
RADIUS Load Balancing with iRules

What is RADIUS? "Remote Authentication Dial In User Service", or RADIUS, is a very mature and widely implemented protocol for exchanging "Triple A" or "Authentication, Authorization and Accounting" information. RADIUS is a relatively simple, transactional protocol. Clients, such as a remote access server, FirePass, BIG-IP, etc., originate RADIUS requests (for example, to authenticate a user based on a user/password combination) and then wait for a response from the RADIUS server. Information is exchanged between a RADIUS client and server in the form of attributes. User-name, user-password, IP address, port, and session state are all examples of attributes. Attributes can be in the format of text, string, IP address, integer or timestamp. Some attributes are variable in length, some are fixed.

Why is protocol-specific support valuable? In typical UDP load balancing (not protocol-specific), there is one common challenge: if a client always sends requests with the same source port, packets will never be balanced across multiple servers. This behavior is the default for a UDP profile. To allow load balancing to work in this situation, using "Datagram LB" is the common recommendation, or the use of an immediate session timeout. By using Datagram LB, every packet will be balanced. However, if a new request comes in before the reply for the previous request comes back from the server, BIG-IP LTM may change the source port of that new request before it forwards it to the server. This may result in an application not acting properly. In this latter case, "immediate timeout" must then be used. An additional virtual server may be needed for outbound traffic in order to route traffic back to the client. In short, to enable load balancing for RADIUS transaction-based traffic coming from the same source IP/source port, Datagram LB or immediate timeout should be employed. This configuration works in most cases. However, if the transaction requires more than 2 packets (1 request, 1 response), then further BIG-IP LTM work is needed. An example where this is important occurs in RADIUS challenge/response handshakes, which require 4 packets:

* Client ---- access-request ---> Server
* Client <-- access-challenge --- Server
* Client --- access-request ----> Server
* Client <--- access-accept ----- Server

For this traffic to succeed, all packets associated with the same transaction must be returned to the same server. In this case, custom layer 7 persistence is needed, and iRules can provide it. With iRules that understand the RADIUS protocol, BIG-IP LTM can direct traffic based on any attribute sent by the client, or persist sessions based on any attribute sent by client or server. Session management can then be moved to the BIG-IP, reducing server-side complexity. BIG-IP can provide almost unlimited intelligence in an iRule that can even re-calculate md5, modify usernames, detect realms, etc. BIG-IP LTM can also provide security at the application level of the RADIUS protocol, rejecting malformed traffic, denial of service attacks, or similar threats using customized iRules.

Solution

A Datagram LB UDP profile or immediate timeout may be used if requests from the client always use the same source IP/port. If immediate timeout is used, there should be an additional VIP for outbound traffic originated from the server to the client, and also an appropriate SNAT (same IP as the VIP). The Identifier or some attributes can be used for Universal Inspection Engine (UIE) persistence.
If immediate timeout/2-side-VIP technique are used, these should be used in conjunction with the session command with the "any" option.

iRules

1) Here is a sample iRule which does nothing except decode and log some attribute information. This is a good example of the depth of fluency you can achieve via an iRule dealing with RADIUS.

when RULE_INIT {
  array set ::attr_code2name {
    1 User-Name
    2 User-Password
    3 CHAP-Password
    4 NAS-IP-Address
    5 NAS-Port
    6 Service-Type
    7 Framed-Protocol
    8 Framed-IP-Address
    9 Framed-IP-Netmask
    10 Framed-Routing
    11 Filter-Id
    12 Framed-MTU
    13 Framed-Compression
    14 Login-IP-Host
    15 Login-Service
    16 Login-TCP-Port
    17 (unassigned)
    18 Reply-Message
    19 Callback-Number
    20 Callback-Id
    21 (unassigned)
    22 Framed-Route
    23 Framed-IPX-Network
    24 State
    25 Class
    26 Vendor-Specific
    27 Session-Timeout
    28 Idle-Timeout
    29 Termination-Action
    30 Called-Station-Id
    31 Calling-Station-Id
    32 NAS-Identifier
    33 Proxy-State
    34 Login-LAT-Service
    35 Login-LAT-Node
    36 Login-LAT-Group
    37 Framed-AppleTalk-Link
    38 Framed-AppleTalk-Network
    39 Framed-AppleTalk-Zone
    60 CHAP-Challenge
    61 NAS-Port-Type
    62 Port-Limit
    63 Login-LAT-Port
  }
}
when CLIENT_ACCEPTED {
  binary scan [UDP::payload] cH2SH32cc code ident len auth \
    attr_code1 attr_len1
  log local0. "code = $code"
  log local0. "ident = $ident"
  log local0. "len = $len"
  log local0. "auth = $auth"
  set index 22
  while { $index < $len } {
    set hsize [expr ( $attr_len1 - 2 ) * 2]
    switch $attr_code1 {
      11 -
      1 {
        binary scan [UDP::payload] @${index}a[expr $attr_len1 - 2]cc \
          attr_value attr_code2 attr_len2
        log local0. " $::attr_code2name($attr_code1) = $attr_value"
      }
      9 -
      8 -
      4 {
        binary scan [UDP::payload] @${index}a4cc rawip \
          attr_code2 attr_len2
        log local0. " $::attr_code2name($attr_code1) =\
          [IP::addr $rawip mask 255.255.255.255]"
      }
      13 -
      12 -
      10 -
      7 -
      6 -
      5 {
        binary scan [UDP::payload] @${index}Icc attr_value \
          attr_code2 attr_len2
        log local0. " $::attr_code2name($attr_code1) = $attr_value"
      }
      default {
        binary scan [UDP::payload] @${index}H${hsize}cc \
          attr_value attr_code2 attr_len2
        log local0. " $::attr_code2name($attr_code1) = $attr_value"
      }
    }
    set index [ expr $index + $attr_len1 ]
    set attr_len1 $attr_len2
    set attr_code1 $attr_code2
  }
}
when SERVER_DATA {
  binary scan [UDP::payload] cH2SH32cc code ident len auth \
    attr_code1 attr_len1
  log local0. "code = $code"
  log local0. "ident = $ident"
  log local0. "len = $len"
  log local0. "auth = $auth"
  set index 22
  while { $index < $len } {
    set hsize [expr ( $attr_len1 - 2 ) * 2]
    switch $attr_code1 {
      11 -
      1 {
        binary scan [UDP::payload] @${index}a[expr $attr_len1 - 2]cc \
          attr_value attr_code2 attr_len2
        log local0. " $::attr_code2name($attr_code1) = $attr_value"
      }
      9 -
      8 -
      4 {
        binary scan [UDP::payload] @${index}a4cc rawip \
          attr_code2 attr_len2
        log local0. " $::attr_code2name($attr_code1) =\
          [IP::addr $rawip mask 255.255.255.255]"
      }
      13 -
      12 -
      10 -
      7 -
      6 -
      5 {
        binary scan [UDP::payload] @${index}Icc attr_value \
          attr_code2 attr_len2
        log local0. " $::attr_code2name($attr_code1) = $attr_value"
      }
      default {
        binary scan [UDP::payload] @${index}H${hsize}cc \
          attr_value attr_code2 attr_len2
        log local0. " $::attr_code2name($attr_code1) = $attr_value"
      }
    }
    set index [ expr $index + $attr_len1 ]
    set attr_len1 $attr_len2
    set attr_code1 $attr_code2
  }
}

This iRule could be applied to many areas of interest where a particular value needs to be extracted. For example, the iRule could detect the value of specific attributes or realm and direct traffic based on that information.
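As a concrete illustration of that last point, the User-Name branch of the switch above could drive a load-balancing decision instead of just a log line. The fragment below is a sketch only - the pool names are invented and it assumes usernames arrive in the usual user@realm form:

# Inside the "11 - 1" branch, once $attr_value holds the decoded attribute:
if { $attr_code1 == 1 } {
    # Split user@realm and pick a pool per realm.
    set realm [string tolower [getfield $attr_value "@" 2]]
    switch $realm {
        "example.com" { pool radius_pool_example }
        "partner.net" { pool radius_pool_partner }
        default       { pool radius_pool_default }
    }
}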
2) This second iRule allows UDP Datagram LB to work with 2 factor authentication. Persistence in this iRule is based on the "State" attribute (value = 24). Another great example of the kinds of things you can do with an iRule, and how deep you can truly dig into a protocol.

when CLIENT_ACCEPTED {
  binary scan [UDP::payload] ccSH32cc code ident len auth \
    attr_code1 attr_len1
  set index 22
  while { $index < $len } {
    set hsize [expr ( $attr_len1 - 2 ) * 2]
    binary scan [UDP::payload] @${index}H${hsize}cc attr_value \
      attr_code2 attr_len2
    # If it is State(24) attribute...
    if { $attr_code1 == 24 } {
      persist uie $attr_value 30
      return
    }
    set index [ expr $index + $attr_len1 ]
    set attr_len1 $attr_len2
    set attr_code1 $attr_code2
  }
}
when SERVER_DATA {
  binary scan [UDP::payload] ccSH32cc code ident len auth \
    attr_code1 attr_len1
  # If it is Access-Challenge(11)...
  if { $code == 11 } {
    set index 22
    while { $index < $len } {
      set hsize [expr ( $attr_len1 - 2 ) * 2]
      binary scan [UDP::payload] @${index}H${hsize}cc attr_value \
        attr_code2 attr_len2
      if { $attr_code1 == 24 } {
        persist add uie $attr_value 30
        return
      }
      set index [ expr $index + $attr_len1 ]
      set attr_len1 $attr_len2
      set attr_code1 $attr_code2
    }
  }
}

Conclusion

With iRules, BIG-IP can understand RADIUS packets and make intelligent decisions based on RADIUS protocol information. Additionally, it is also possible to manipulate RADIUS packets to meet nearly any application need. Contributed by: Nat Thirasuttakorn

IT as a Service: A Stateless Infrastructure Architecture Model
The dynamic data center of the future, enabled by IT as a Service, is stateless. One of the core concepts associated with SOA – and one that failed to really take hold, unfortunately – was the ability to bind, i.e. invoke, a service at run-time. WSDL was designed to loosely couple services to clients, whether they were systems, applications or users, in a way that was dynamic. The information contained in the WSDL provided everything necessary to interface with a service on-demand without requiring hard-coded integration techniques used in the past. The theory was you’d find an appropriate service, hopefully in a registry (UDDI-based), grab the WSDL, set up the call, and then invoke the service. In this way, the service could “migrate” because its location and invocation specific meta-data was in the WSDL, not hard-coded in the client, and the client could “reconfigure”, as it were, on the fly. There are myriad reasons why this failed to really take hold (notably that IT culture inhibited the enforcement of a strong and consistent governance strategy) but the idea was and remains sound. The goal of a “stateless” architecture, as it were, remains a key characteristic of what is increasingly being called IT as a Service – or “private” cloud computing . TODAY: STATEFUL INFRASTRUCTURE ARCHITECTURE The reason the concept of a “stateless” infrastructure architecture is so vital to a successful IT as a Service initiative is the volatility inherent in both the application and network infrastructure needed to support such an agile ecosystem. IP addresses, often used to bypass the latency induced by resolution of host names at run-time from DNS calls, tightly couple systems together – including network services. Routing and layer 3 switching use IP addresses to create a virtual topology of the architecture and ensure the flow of data from one component to the next, based on policy or pre-determine routes as meets the needs of the IT organization. It is those policies that in many cases can be eliminated; replaced with a more service-oriented approach that provisions resources on-demand, in real-time. This eliminates the “state” of an application architecture by removing delivery dependencies on myriad policies hard-coded throughout the network. Policies are inexorably tied to configurations, which are the infrastructure equivalent of state in the infrastructure architecture. Because of the reliance on IP addresses imposed by the very nature of network and Internet architectural design, we’ll likely never reach full independence from IP addresses. But we can move closer to a “stateless” run-time infrastructure architecture inside the data center by considering those policies that can be eliminated and instead invoked at run-time. Not only would such an architecture remove the tight coupling between policies and infrastructure, but also between applications and the infrastructure tasked with delivering them. In this way, applications could more easily be migrated across environments, because they are not tightly bound to the networking and security policies deployed on infrastructure components across the data center. The pre-positioning of policies across the infrastructure requires codifying topological and architectural meta-data in a configuration. That configuration requires management; it requires resources on the infrastructure – storage and memory – while the device is active. It is an extra step in the operational process of deploying, migrating and generally managing an application. 
It is “state” and it can be reduced – though not eliminated – in such a way as to make the run-time environment, at least, stateless and thus more motile. TOMORROW: STATELESS INFRASTRUCTURE ARCHITECTURE What’s needed to move from a state-dependent infrastructure architecture to one that is more stateless is to start viewing infrastructure functions as services. Services can be invoked, they are loosely coupled, they are independent of solution and product. Much in the same way that stateless application architectures address the problems associated with persistence and impede real-time migration of applications across disparate environments, so too does stateless infrastructure architectures address the same issues inherent in policy-based networking – policy persistence. While standardized APIs and common meta-data models can alleviate much of the pain associated with migration of architectures between environments, they still assume the existence of specific types of components (unless, of course, a truly service-oriented model in which services, not product functions, are encapsulated). Such a model extends the coupling between components and in fact can “break” if said service does not exist. Conversely, a stateless architecture assumes nothing; it does not assume the existence of any specific component but merely indicates the need for a particular service as part of the application session flow that can be fulfilled by any appropriate infrastructure providing such a service. This allows the provider more flexibility as they can implement the service without exposing the underlying implementation – exactly as a service-oriented architecture intended. It further allows providers – and customers – to move fluidly between implementations without concern as only the service need exist. The difficulty is determining what services can be de-coupled from infrastructure components and invoked on-demand, at run-time. This is not just an application concern, it becomes an infrastructure component concern, as well, as each component in the flow might invoke an upstream – or downstream – service depending on the context of the request or response being processed. Assuming that such services exist and can be invoked dynamically through a component and implementation-agnostic mechanism, it is then possible to eliminate many of the pre-positioned, hard-coded policies across the infrastructure and instead invoke them dynamically. Doing so reduces the configuration management required to maintain such policies, as well as eliminating complexity in the provisioning process which must, necessarily, include policy configuration across the infrastructure in a well-established and integrated enterprise-class architecture. Assuming as well that providers have implemented support for similar services, one can begin to see the migratory issues are more easily redressed and the complications caused by needed to pre-provision services and address policy persistence during migration mostly eliminated. SERVICE-ORIENTED THINKING One way of accomplishing such a major transformation in the data center – from policy to service-oriented architecture – is to shift our thinking from functions to services. It is not necessarily efficient to simply transplant a software service-oriented approach to infrastructure because the demands on performance and aversion to latency makes a dynamic, run-time binding to services unappealing. 
It also requires a radical change in infrastructure architecture by adding the components and services necessary to support such a model – registries and the ability of infrastructure components to take advantage of them. An in-line, transparent invocation method for infrastructure services offers the same flexibility and motility for applications and infrastructure without imposing performance or additional dependency constraints on implementers. But to achieve a stateless infrastructure architectural model, one must first shift their thinking from functions to services and begin to visualize a data center in which application requests and responses carry with them the need for particular downstream and upstream services, rather than having that need expressed completely in hard-coded policies stored in component configurations. It is unlikely that in the near term we can completely eliminate the need for hard-coded configuration; we're just nowhere near that level of dynamism and may never be. But for many services – particularly those associated with run-time delivery of applications – we can achieve the stateless architecture necessary to realize a more mobile and dynamic data center.

Related posts:
Now Witness the Power of this Fully Operational Feedback Loop
Cloud is the How not the What
Challenging the Firewall Data Center Dogma
Cloud-Tiered Architectural Models are Bad Except When They Aren't
Cloud Chemistry 101
You Can't Have IT as a Service Until IT Has Infrastructure as a Service
Let's Face It: PaaS is Just SOA for Platforms Without the Baggage
The New Distribution of The 3-Tiered Architecture Changes Everything

DNS The F5 Way: A Paradigm Shift
This is the second in a series of DNS articles that I'm writing. The first is: Let's Talk DNS on DevCentral. Internet users rely heavily on DNS, and when DNS breaks, applications break. It's extremely important to implement an architecture that provides for DNS availability at all times. It's important because the number of Internet users continues to grow. In fact, a recent study conducted by the International Telecommunications Union claims that mobile devices will outnumber the people living on this planet at some point this year (2014). I'm certainly contributing to those stats as I have a smartphone and a tablet! In addition, the sophistication and complexity of websites are increasing. Many sites today require hundreds of DNS requests just to load a single page. So, when you combine the number of Internet users with the complexity of modern sites, you can imagine that the number of DNS requests traversing your network is extremely large. Verisign's average daily DNS query load during the fourth quarter of 2012 was 77 billion with a peak of 123 billion. Wow...that's a lot of DNS requests...every day! The point is this...Internet use is growing, and the need for reliable DNS is more important than ever. par·a·digm noun \ˈper-ə-ˌdīm\: a group of ideas about how something should be done, made, or thought about Conventional DNS design goes something like this... Front end (secondary) DNS servers are load balanced behind a firewall, and these servers answer all the DNS queries from the outside world. The master (primary) DNS server is located in the datacenter and is hidden from the outside world behind an internal firewall. This architecture was adequate for a smaller Internet, but in today's complex network world, this design has significant limitations. Typical DNS servers can only handle up to 200,000 DNS queries per second per server. Using the conventional design, the only way to handle more requests is to add more servers. Let's say your organization is preparing for a major event (holiday shopping, for example) and you want to make sure all DNS requests are handled. You might be forced to purchase more DNS servers in order to handle the added load. These servers are expensive and take critical manpower to operate and maintain. You can start to see the scalability and cost issues that add up with this design. From a security perspective, there is often weak DDoS protection with a conventional design. Typically, DDoS protection relies on the network firewall, and this firewall can be a huge traffic bottleneck. Check out the following diagram that shows a representation of a conventional DNS deployment. It's time for a DNS architecture paradigm shift. Your organization requires it, and today's Internet demands it. F5 Introduces A New Way... The F5 Intelligent DNS Scale Reference Architecture is leaner, faster, and more secure than any conventional DNS architecture. Instead of adding more DNS servers to handle increased DNS request load, you can simply install the BIG-IP Global Traffic Manager (GTM) in your network’s DMZ and allow it to handle all external requests. The following diagram shows the simplicity and effectiveness of the F5 design. Notice that the infrastructure footprint of this design is significantly smaller. This smaller footprint reduces costs associated with additional servers, manpower, HVAC, facility space, etc. I mentioned the external request benefit of the BIG-IP GTM...here's how it works. 
The BIG-IP GTM uses F5's specifically designed DNS Express zone transfer feature and cluster multiprocessing (CMP) for exponential performance of query responses. DNS Express manages authoritative DNS queries by transferring zones to its own RAM, so it significantly improves query performance and response time. With DNS Express zone transfer and the high performance processing realized with CMP, the BIG-IP GTM can scale up to more than 10 million DNS query responses per second, which means that even large surges of DNS requests (including malicious ones) will not likely disrupt your DNS infrastructure or affect the availability of your critical applications.

The BIG-IP GTM is much more than an authoritative DNS server, though. Here are some of the key features and capabilities included in the BIG-IP GTM:

- ICSA certified network firewall -- you don't have to deploy DMZ firewalls any more...it IS your firewall!
- Monitors the health of app servers and intelligently routes traffic to the nearest data center using IP Geolocation
- Protects from DNS DDoS attacks using the integrated firewall services, scaling capabilities, and IP address intelligence
- Allows you to utilize benefits of a cloud environment by flexibly deploying BIG-IP GTM Virtual Edition (VE)
- Supports DNSSEC with real-time signing and validates DNSSEC responses

As you can see, the BIG-IP GTM is a workhorse that literally has no rival in today's market. It's time to change the way we think about DNS architecture deployments. So, utilize the F5 Intelligent DNS Scale Reference Architecture to improve web performance by reducing DNS latency, protect web properties and brand reputation by mitigating DNS DDoS attacks, reduce data center costs by consolidating DNS infrastructure, and route customers to the best performing components for optimal application and service delivery. Learn more about F5 Intelligent DNS Scale by visiting https://f5.com/solutions/architectures/intelligent-dns-scale

Getting Around the Logon/Legal Banner Issues when using APM PCoIP Proxy and Horizon
If you're using APM's PCoIP Proxy and require a logon banner, you've probably figured out that the PCoIP Proxy integration stops working when you turn on the integrated logon banner from within the Horizon Administrator. Adding to the pain, internal users can't get any logon banner since you had to turn it off in order for your external access to work! Well, the wait is over! With the use of a nifty iRule that you can attach to your internal Horizon Connection Servers virtual server, you can now present a banner to BOTH internal users and external users who access Horizon resources using APM PCoIP Proxy.

Here's how it works:
1. Disable the logon banner through Horizon Administrator - the BIG-IP will handle presenting the banners for internal users (through the iRule) and external users (through the View iApp) instead of Horizon.
2. Modify the text in the iRule with the text you want to show in the logon banner.
3. Apply the iRule to your LTM virtual server that services internal Horizon users (either manually to the LTM virtual server or through the View iApp).
You're done!

A couple of things to think about when you implement this:
- If you need to present a legal disclaimer to your external users using the PCoIP Proxy, you can still do that through the Horizon View iApp.
- Do not apply this to any virtual server running the APM PCoIP Proxy - it's only for providing the logon banner to internal Horizon users. The banner for PCoIP Proxy can be easily enabled through the iApp.
- It's important to ensure the PCoIP Proxy's Connection Server settings are pointing to the individual connection server(s) and NOT the LTM virtual server that has the Logon Banner iRule applied.

The iRule source is below.

# Attach iRule to iApp created virtual server named "<iapp_name>_internal_https"
# Replace the section "This is a XXX computer system that is FOR OFFICIAL USE ONLY. This
# system is subject to monitoring. Therefore, no expectation of privacy is to be assumed.
# Individuals found performing unauthorized activities are subject to disciplinary action
# including criminal prosecution." with your desired text.
when RULE_INIT {
    # Debug Level 0=off, 1=on, 2=verbose
    set static::internal_disclaimer_debug 0
}
when CLIENT_ACCEPTED {
    set log_prefix_cs "[IP::remote_addr]:[TCP::remote_port clientside] <-> [IP::local_addr]:[TCP::local_port clientside]"
    if { $static::internal_disclaimer_debug > 1 } { log local0. "<$log_prefix_cs>: CLIENT_ACCEPTED" }
}
when HTTP_REQUEST {
    set bypass 0
    if {[HTTP::uri] starts_with "/portal/info.jsp"} {
        if { $static::internal_disclaimer_debug > 0 } { log local0. "<$log_prefix_cs>: Portal Info request, bypassing further processing"}
        set bypass 1
    } else {
        if {[HTTP::header exists "Content-Length"]} {
            set content_length [HTTP::header "Content-Length"]
        } else {
            # If the header is missing, use a sufficiently large number
            set content_length 5000
        }
        if { $static::internal_disclaimer_debug > 1 } { log local0. "<$log_prefix_cs>: Set content-length to $content_length"}
        HTTP::collect $content_length
        if { [HTTP::path] == "/broker/xml" && [HTTP::header Expect] == "100-continue" } {
            SSL::respond "HTTP/1.0 100 Continue\r\n\r\n"
            if { $static::internal_disclaimer_debug > 1 } { log local0. "<$log_prefix_cs>: Application requested: client requires 100 continue response, sending 100-continue"}
        }
    }
}
when HTTP_REQUEST_DATA {
    if { [HTTP::payload] contains "set-locale" and ( not ($bypass)) } {
        HTTP::respond 200 content {<?xml version="1.0"?><broker version="9.0"><configuration><result>ok</result><broker-guid>aaaaaaaa-bbbb-cccc-ddddddddddddddddd</broker-guid><authentication><screen><name>disclaimer</name><params><param><name>text</name><values><value>This is a XXX computer system that is FOR OFFICIAL USE ONLY. This system is subject to monitoring. Therefore, no expectation of privacy is to be assumed. Individuals found performing unauthorized activities are subject to disciplinary action including criminal prosecution.</value></values></param></params></screen></authentication></configuration><set-locale><result>ok</result></set-locale></broker>} noserver "Connection" "close" "Content-Type" "text/xml;charset=UTF-8"
        if { $static::internal_disclaimer_debug > 1 } { log local0. "<$log_prefix_cs>: Sending Disclaimer Message"}
    }
    if { [HTTP::payload] contains "disclaimer" } {
        if { $static::internal_disclaimer_debug > 1 } { log local0. "<$log_prefix_cs>: Disclaimer Message Accepted - waiting for credentials."}
    }
}

This solution has been tested using Horizon 6.0 (and later) as well as the Horizon 3.0 (and later) Client. Earlier versions of the client and/or Horizon Connection Server could produce unexpected results. Big shout-out to Greg Crosby for his work on the iRule!

DNS Profile Benefits in iRules
I released an article a while back on the DNS services architecture now built in to BIG-IP, as well as a solution article that showed some fancy DNS tricks utilizing the architecture to black hole malicious DNS requests. What might be lost in those articles is the difference the dns profile makes when using iRules to return DNS responses. I was working on a little project earlier this week and the VM I am hosting requires a single DNS response to a single question. The problem is that I don't have the particular fqdn defined in an external or internal name server. Adding the fqdn to either is problematic:

* Adding the FQDN to the external name server would require adding an internal view to bind, which adds risk and complexity.
* Adding the FQDN to the internal name server would require adding external zones to my internal server, which adds unnecessary complexity.

So as I wasn't going down either of those roads...I had to find an alternate solution. Thankfully, I have BIG-IP VE at my disposal, and therefore, iRules. The DNS profile exposes the DNS:: namespace in iRules, and with it, native decodes for all the fields in requests/responses. The iRule, with the DNS namespace, is trivial:

when DNS_REQUEST {
  if { [IP::addr [IP::remote_addr] equals 192.168.1.0/24] && ([DNS::question name] equals "www.mytest.com") } {
    DNS::answer insert "[DNS::question name]. 111 [DNS::question class] [DNS::question type] 192.168.1.200"
    DNS::return
  } else {
    discard
  }
}

However, after trying to save the iRule, I realized I'm not licensed for dns services on my BIG-IP VE, so that path wouldn't work. So I took a packet capture of some local dns traffic on my desktop and started mapping the fields, preparing to settle in for some serious binary scan/format work, but then remembered there were already some iRules out in the codeshare that I thought might get me started. Natty76's Fast DNS 2 seemed to fit the bill. So with just a little customization, I was up and running with no issues. But notice the amount of work required (both by author and by system resources) to make this happen when compared with the above iRule.
when RULE_INIT priority 1 {
  # Domain Name = www mytest com
  set static::domain "www.mytest.com"
  # IP address in answer section (type A)
  set static::answer_string "192.168.1.200"
}
when RULE_INIT {
  # Header generation (in hexadecimal)
  # qr(1) opcode(0000) AA(1) TC(0) RD(1) RA(1) Z(000) RCODE(0000)
  set static::header "8580"
  # 1 question, X answer, 0 NS, 0 Addition
  set static::answer_record [format %04x [llength $static::answer_string]]
  set static::header "${static::header}0001${static::answer_record}00000000"
  # generate domain binary string
  set static::domainhex ""
  foreach static::d [split $static::domain "."] {
    set static::l [string length $static::d]
    scan $static::l %d static::h
    append static::domainhex [format %02x $static::h]
    foreach static::n [split $static::d ""] {
      scan $static::n %c static::h
      append static::domainhex [format %02x $static::h]
    }
  }
  set static::domainbin [binary format H* $static::domainhex]
  append static::domainhex 00
  set static::answerhead $static::domainhex
  # Type = A
  set static::answerhead "${static::answerhead}0001"
  # Class = IN
  set static::answerhead "${static::answerhead}0001"
  # TTL = 1 day
  set static::answerhead "${static::answerhead}00015180"
  # Data length = 4
  set static::answerhead "${static::answerhead}0004"
  set static::answer ""
  foreach static::a $static::answer_string {
    scan $static::a "%d.%d.%d.%d" a b c d
    append static::answer "${static::answerhead}[format %02x%02x%02x%02x $a $b $c $d]"
  }
}
when CLIENT_DATA {
  if { [IP::addr [IP::client_addr] equals 192.168.1.0/22] } {
    binary scan [UDP::payload] H4@12A*@12H* id dname question
    set dname [string tolower [getfield $dname \x00 1 ] ]
    switch -glob $dname \
    $static::domainbin {
      #log local0. "match"
      set hex ${id}${static::header}${question}${static::answer}
      set payload [binary format H* $hex ]
      # to drop only a packet and keep UDP connection, use UDP::drop
      drop
      UDP::respond $payload
    } \
    default {
      #log local0. "does not match"
    }
  } else {
    discard
  }
}

No native decode means you have to do all the decoding work of the protocol yourself. I don't get to share "from the trenches" as much as I used to, but this was too good a demonstration to pass up.
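And for completeness, here is roughly what the black-hole trick mentioned at the top of this article looks like once the dns profile (and license) is in place. This is a sketch, not the solution article's exact rule - the data group name and the walled-garden address are placeholders:

when DNS_REQUEST {
    # Assumes a string data group named "malicious_domains" listing known-bad FQDNs.
    set qname [string tolower [DNS::question name]]
    if { ([DNS::question type] equals "A") && [class match $qname equals malicious_domains] } {
        # Answer directly from the BIG-IP with a walled-garden address instead of resolving.
        DNS::answer insert "$qname. 300 [DNS::question class] A 10.0.0.254"
        DNS::return
    }
}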