web services
Form Based Authentication with external SOAP web services
Problem this snippet solves:

1. You need to authenticate users against an external authentication system relying on SOAP calls.
2. The session identifier must be provided by an external third-party system.

How to use this snippet:

Installation

Files: upload an HTML login page using iFiles, and upload the SOAP body of the external web service using iFiles.

iRule: install the iRule on the virtual server you need to protect.

Variables
- static::holdtime 3600 (session timeout)
- static::login_url "/login" (login URL)
- static::sideband_vs "VS_EXTERNAL_AUTH_PROVIDER" (virtual server that publishes the web service)

Features (version 1.0)
- Form-based login (provided by a custom iFile)
- Authentication against an external SOAP web service
- Session timeout management

Backlog
- Improve logging
- Allow two-factor authentication (challenge)
- Encrypt the session cookie
- Provide an internal mechanism to generate the session cookie
- Accept Basic Authentication

External links
Github: https://github.com/e-XpertSolutions/f5

Code:

when RULE_INIT {
    set static::holdtime 3600
    set static::login_url "/login"
    set static::sideband_vs "VS_EXTERNAL_AUTH_PROVIDER"
}

when HTTP_REQUEST {
    if { [HTTP::cookie exists SessionCook] and [table lookup -subtable "active_sessions" [HTTP::cookie SessionCook]] != "" } {
        return
    } else {
        if { [HTTP::path] eq $static::login_url } {
            if { [HTTP::method] eq "POST" } {
                if { [HTTP::header "Content-Length"] ne "" && [HTTP::header "Content-Length"] <= 1048576 } {
                    set content_length [HTTP::header "Content-Length"]
                } else {
                    set content_length 1048576
                }
                if { $content_length > 0 } {
                    HTTP::collect $content_length
                }
            } else {
                HTTP::respond 200 content [ifile get login.html] "Cache-Control" "no-cache, must-revalidate" "Content-Type" "text/html"
            }
        } else {
            HTTP::respond 302 noserver "Location" $static::login_url "Cache-Control" "no-cache, must-revalidate"
        }
    }
}

when HTTP_REQUEST_DATA {
    set payload [HTTP::payload]
    set username ""
    set password ""
    regexp {Login1\%3AtxtUserName\=(.*)\&Login1\%3AtxtPassword\=(.*)\&Login1\%3AbtnSubmit\=(.*)} $payload -> username password garbage

    if { [catch {connect -timeout 1000 -idle 30 -status conn_status $static::sideband_vs} conn_id] == 0 && $conn_id ne "" } {
        log local0. "Connect returns: $conn_id and conn status: $conn_status"
    } else {
        log local0. "Connection could not be established to sideband_virtual_server"
    }

    set content [subst -nocommands -nobackslashes [ifile get soap_body]]
    set length [string length $content]
    set data "POST /apppath/webservicename.asmx HTTP/1.1\r\nHost: www.hostname.com\r\nContent-Type: text/xml; charset=utf-8\r\nContent-Length: $length\r\nSOAPAction: http://schemas.microsoft.com/sqlserver/2004/SOAP\r\n\r\n$content"
    set send_bytes [send -timeout 1000 -status send_status $conn_id $data]
    set recv_data [recv -timeout 1000 $conn_id]

    # parse the response to retrieve the authentication result: 0 if authentication failed, a session id if it succeeded
    regexp { (.*) (.*)} $recv_data -> result garbage

    unset content
    unset length
    unset data
    unset recv_data
    close $conn_id

    # add a custom alert notification to the login page
    if { $result == 0 } {
        set alert "Invalid credentials."
" HTTP::respond 200 content [subst -nocommands -nobackslashes [ifile get login.html]] "Cache-Control" "no-cache, must-revalidate" "Content-Type" "text/html" Set-Cookie "SessionCook=deleted;expires=Thu, 01-Jan-1970 00:00:10 GMT;domain=[HTTP::host];path=/" } else { HTTP::respond 302 noserver "Location" "/" "Cache-Control" "no-cache, must-revalidate" Set-Cookie "SessionCook=$result;domain=[HTTP::host];path=/" # save the cookie value in a cache for fast checking table add -subtable "active_sessions" $result $username indef $static::holdtime } } Tested this on version: 11.5450Views0likes1CommentClientless mode failing to interact with AD
Clientless mode failing to interact with AD

Scenario: I have a web service that is being called by some clients. When they hit the web service, they should enter a username / password combo for basic authentication. Those credentials should be taken by the APM and processed in Active Directory.

Per this conversation, I am creating this iRule to prompt for username/password credentials and allow the APM to perform the work.

when HTTP_REQUEST {
    set apmsessionid [HTTP::cookie value MRHSession]
    if { [HTTP::cookie exists "MRHSession"] } {
        set apmstatus [ACCESS::session exists -state_allow $apmsessionid]
    } else {
        set apmstatus 0
    }
    if { !($apmstatus) } {
        # Insert clientless-mode header to start APM in clientless mode
        if { [catch {HTTP::header insert "clientless-mode" 1}] } {
            log local0. "[IP::client_addr]:[TCP::client_port] : TCL error on HTTP header insert clientless-mode : URL : [HTTP::host][HTTP::path] - Headers : [HTTP::request]"
        }
    }
}

when ACCESS_POLICY_COMPLETED {
    # Authentication request for non-browser user-agent session denied
    if { ([ACCESS::policy result] equals "deny") } {
        ACCESS::respond 401 noserver WWW-Authenticate "Basic realm=\"My Web Services Authentication\"" Connection close
        ACCESS::session remove
        return
    }
}

However, following that post and using that code always leads me to the Deny portion. If I use the original solution here, I am able to authenticate successfully. Am I missing something to add?
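Not an answer, but when a clientless-mode policy always ends in Deny it is sometimes useful to log what the policy actually decided and which logon name (if any) made it as far as AD. A minimal debugging sketch, meant to be merged into the existing ACCESS_POLICY_COMPLETED event above (the log text is only an example):

when ACCESS_POLICY_COMPLETED {
    # Temporary debug logging: show the policy verdict and the logon name APM captured
    log local0. "APM result: [ACCESS::policy result] user: [ACCESS::session data get session.logon.last.username] client: [IP::client_addr]"
}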
Secure a web service using APM

Hi, I'm looking to use the F5 to secure (basic auth) a web service that needs to be called from a .NET application. What is the best way to configure something like this, where the "client" isn't a browser? The application doesn't appear to support the 302 redirects that a browser would, so do I need to create a fairly vanilla access profile (logon page - AD auth - allow) and then write an iRule to send the initial 401 response to the initial request?

Cheers, Simon
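One pattern worth sketching (untested, and the realm string is just a placeholder) combines the clientless-mode header from the previous post with an explicit 401 challenge, so a non-browser client that arrives without credentials is prompted for Basic auth instead of being redirected:

when HTTP_REQUEST {
    # No credentials yet: challenge the client for Basic auth instead of redirecting
    if { ![HTTP::header exists "Authorization"] } {
        HTTP::respond 401 noserver WWW-Authenticate "Basic realm=\"Web Services\"" Connection close
        return
    }
    # Credentials present: ask APM to evaluate the access policy without a browser logon page
    if { ![HTTP::cookie exists "MRHSession"] } {
        HTTP::header insert "clientless-mode" 1
    }
}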
xml signatures

I am trying to create a simple policy for web services/XML which would scan the incoming traffic against generic and XML-related violations (without format or schema checks). After creating and assigning a Rapid Deployment Policy and assigning Generic + XML signature sets to it, will ASM check for the XML-related violations (when the XML-policy/web-services part is not configured)?

Do I understand correctly that everything going to the application will be wildcard-matched against the selected signature sets (including the XML ones, which would seem to require manual policy configuration), or do the XML signatures only apply after the manual XML policy configuration?
Cloud Computing: Will data integration be its Achilles Heel?

Wesley: Now, there may be problems once our app is in the cloud.
Inigo: I'll say. How do I find the data? Once I do, how do I integrate it with the other apps? Once I integrate it, how do I replicate it?

If you remember this somewhat altered scene from The Princess Bride, you also remember that no one had any answers for Inigo. That's apropos of this discussion, because no one has any good answers for this version of Inigo either. And no, a holocaust cloak is not going to save the day this time.

If you've been considering deploying applications in a public cloud, you've certainly considered what must be the Big Hairy Question regarding cloud computing: how do I get at my data? There's very little discussion about this topic, primarily because at this point there's no easy answer. Data stored in the cloud is not easily accessible for integration with applications not residing in the cloud, which can definitely be a roadblock to adopting public cloud computing. Stacey Higginbotham at GigaOM had a great post on the topic of getting data into the cloud, and while the conclusion that bandwidth is necessary is also applicable to getting your data out of the cloud, the details are left in your capable hands.

We had this discussion when SaaS (Software as a Service) first started to pick up steam. If you're using a service like salesforce.com to store business-critical data, how do you integrate that back into other applications that may need it? Web services were the first answer, followed by integration appliances and solutions that included custom-built adapters for salesforce.com to more easily enable access and integration to data stored "out there", in the cloud.

Amazon offers URL-based and web services access to data stored in its SimpleDB offering, but that doesn't help folks who are using Oracle, SQL Server, or MySQL offerings in the cloud. And SimpleDB is appropriately named; it isn't designed to be an enterprise-class service - caveat emptor is in full force if you rely upon it for critical business data. RDBMS' have their own methods of replication and synchronization, but mirroring and real-time replication methods require a lot of bandwidth and very low latency connections - something not every organization can count on having. Of course you can always deploy custom triggers and services that automatically replicate back into the local data center, but that, too, is problematic depending on bandwidth availability and accessibility of applications and databases inside the data center. The reverse scenario is much more likely, with a daemon constantly polling the cloud computing data and pulling updates back into the data center.

You can also just leave that data out there in the cloud, implement (or take advantage of, if they exist) service-based access to the data, and integrate it with business processes and applications inside the data center. You're relying on the availability of the cloud, the Internet, and all the infrastructure in between, but like the solution for integrating with salesforce.com and other SaaS offerings, this is likely the best of a set of "will have to do" options.

The issue of data and its integration has not yet raised its ugly head, mostly because very few folks are moving critical business applications into the cloud and, admittedly, cloud computing is still in its infancy.
But even non-critical applications are going to use or create data, and that data will, invariably, become important or need to be accessed by folks in the organization, which means access to that data will - probably sooner rather than later - become a monkey on the backs of IT. The availability of and ease of access to data stored in the public cloud for integration, data mining, business intelligence, and reporting - all common enterprise uses of data - will certainly affect adoption of cloud computing in general. The benefits of saving dollars on infrastructure (management, acquisition, maintenance) aren't nearly as compelling a reason to use the cloud when those savings would quickly be eaten up by the extra effort necessary to access and integrate data stored in the cloud.

Related articles by Zemanta
- SQL-as-a-Service with CloudSQL bridges cloud and premises
- Amazon SimpleDB ready for public use
- Blurring the functional line - Zoho CloudSQL merges on-site and on-cloud
- As a Service: The many faces of the cloud
- A comparison of major cloud-computing providers (Amazon, Mosso, GoGrid)
- Public Data Goes on Amazon's Cloud
The death of SOA has been greatly exaggerated

Amidst the hype of cloud computing and virtualization has been the publication of several research notes regarding SOA. Adoption, they say, is slowing. Oh noes! Break out the generators, stock up on water and canned food! An article from JavaWorld quotes research firm Gartner as saying:

The number of organizations planning to adopt SOA for the first time decreased to 25 percent; it had been 53 percent in last year's survey. Also, the number of organizations with no plans to adopt SOA doubled from 7 percent in 2007 to 16 percent in 2008. This dramatic falloff has been happening since the beginning of 2008, Gartner said.

Some have reacted with much drama to the news, as if the reports indicate that SOA has lost its shine and is disappearing into the realm of legacy technology along with COBOL and fat clients and CORBA. Not true at all. The reports indicate a drop in adoption of SOA, not the use of SOA. That should be unsurprising. At some point the number of organizations who have implemented SOA should reach critical mass, and the number of new organizations adopting the technology will slow down simply because there are fewer of them than there are folks who have already adopted SOA.

As Don pointed out when this discussion came up, the economy is factoring in heavily for IT and technology, and the percentages cited by Gartner are not nearly as bad as they look when applied to real numbers. For example, if you ask 100 organizations about their plans for SOA and 16 say "we're not doing anything with it next year", that doesn't sound nearly as impressive as 16%, especially considering that it means 84% are going to be doing something with SOA next year. As with most surveys and polls, it's all about how the numbers are presented. Statistics are the devil's playground.

It is also true that most organizations don't consider that by adopting or piloting cloud computing in the next year they will likely be taking advantage of SOA. Whether it's because their public cloud computing provider requires the use of Web Services (SOA) to deploy and manage applications in the cloud, or because they are building a private cloud environment and will utilize service-enabled APIs and SOA to integrate virtualization technology with application delivery solutions, SOA remains an integral part of the IT equation.

SOA simply isn't the paradigm shift it was five years ago. Organizations who've implemented SOA are still using it, and it's still growing in their organizations as they continue to build new functionality and features for their applications, and as they integrate new partners and distributors and applications from inside and outside the data center. As organizations continue to get comfortable with SOA and their implementations, they will inevitably look to governance and management and delivery solutions with which to better manage the architecture.

SOA is not dead yet; it's merely reached the beginning of its productive life, and if the benefits of SOA are real (and they are) then organizations are likely to start truly realizing the return on their investments.

Related articles by Zemanta
- HP puts more automation into SOA governance
- Gartner reports slowdown in SOA adoption
- Gartner picks tech top 10 for 2009
- SOA growth projections shrinking
Automating scalability and high availability services

There are a lot of SOA governance solutions out there that fall into two distinct categories of purpose: one is to catalog services and associated security policies, and the other is to provide run-time management for services, including enforcement of security and performance-focused policies. Vendors providing a full "SOA Stack" of functionality across the service lifecycle (design, development, testing, production) often integrate their disparate product sets for a more automated (and thus manageable) SOA infrastructure. But very few integrate those same products and functionality with the underlying network and application delivery infrastructure required to provide high availability and scalability for those services. The question should (and must) be asked: why is that?

Today's application delivery infrastructure, a.k.a. application delivery controllers and load balancers, is generally capable of integration via standards-based APIs. These APIs provide complete control over the configuration and management of these platforms, making the integration of application delivery platforms with the rest of the SOA ecosystem a definite reality. Most registry/repository solutions today offer the ability for external applications to subscribe to events. The events vary from platform to platform, but generally include some commonalities such as "artifact published" or "item changed". This means a listening application can subscribe to these events and take appropriate action when an event occurs:

1. A new WSDL describing a service interface (hosted in the service application infrastructure) is published.
2. The listening application is notified of the event and retrieves the new or modified WSDL.
3. The application parses the WSDL and determines the appropriate endpoint information, then automatically configures the application delivery controller to (a) virtualize the service and (b) load balance requests across applicable servers.
4. The application delivery controller begins automatically load balancing service requests and providing high-availability and scalability services.

There's some information missing that has to be supplied either via discovery, policy, or manual configuration. That's beyond the scope of this post, but it would certainly be part of the controlling application. Conceptually, as long as you have (a) a service-enabled application delivery controller and (b) an application capable of listening for events in the SOA registry/repository, you can automate the process of provisioning high-availability and scalability services for those SOA services. If you combine this with the ability to integrate application delivery control into the application itself, you can provide an even more agile, dynamic application delivery infrastructure than if you just used one concept or the other. And when you get right down to it, this doesn't just work for SOA; it could easily work just as well for any application framework, given the right integration.

There already exist some integrations of application delivery infrastructure with SOA governance solutions, like AmberPoint, but there could be more. There could be custom solutions for your unique architecture as well, given that the technology exists to build it.
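To make steps 2 and 3 above a bit more concrete, here is a minimal, illustrative Tcl sketch of what the listening application's event handler might do. The registry notification plumbing, the WSDL URL, and the pool name are all placeholders, and rather than calling any particular management API the sketch only prints the equivalent tmsh command; a real implementation would drive the ADC's own API (iControl, in F5's case) instead.

package require http
package require tdom

# Hypothetical handler invoked when the registry fires an "artifact published" event
# and hands us the URL of the new WSDL (step 2 in the list above).
proc on_wsdl_published {wsdl_url} {
    set tok  [http::geturl $wsdl_url]
    set wsdl [http::data $tok]
    http::cleanup $tok

    # Step 3: pull the concrete service endpoint out of the WSDL's SOAP binding
    set doc  [dom parse $wsdl]
    set addr [[$doc documentElement] selectNodes \
        -namespaces {soap http://schemas.xmlsoap.org/wsdl/soap/} \
        {string(//soap:address/@location)}]
    if { ![regexp {^https?://([^/:]+)(?::(\d+))?} $addr -> host port] } {
        $doc delete
        return
    }
    if { $port eq "" } { set port 80 }

    # Step 3 (continued): hand the endpoint to the application delivery controller.
    # Shown here only as the equivalent tmsh command; "ws_pool" is an illustrative pool name.
    puts "tmsh modify ltm pool ws_pool members add \{ $host:$port \}"
    $doc delete
}

# Example invocation with a placeholder WSDL location:
# on_wsdl_published "http://registry.example.com/artifacts/OrderService.wsdl"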
The question is, why aren't folks leveraging this integration capability to support initiatives like SOA and cloud computing that require a high level of agility and extensibility, and whose ROI depends at least partially on the ability to reduce management costs and the length of deployment cycles through automation?

It's true that there seems to be an increasing awareness of the importance of application delivery infrastructure to architecting a scalable, highly available cloud computing environment. But we never really managed to focus on the importance of an agile, reusable, intelligent application delivery infrastructure to the success of SOA. Maybe it's time we backtrack a bit and do so, because the architectural and performance issues that will arise in the cloud due to poor choices in application delivery infrastructure are the same ones that adversely impact SOA implementations.

Related Links
- Why can't clouds be inside (the data center)?
- Governance in the Cloud
- Reliability does not come from SOA Governance
- Building a Cloudbursting Capable Infrastructure
- The Next Tech Boom: Infrastructure 2.0
SOA What?

David Bressler of Progress Software, which acquired SOA vendor Actional in January 2006, wrote a very thought-provoking post on marketing that really ended up being a post about SOA and where Progress fits into the "SOA continuum". He raises some questions, and problems, with SOA and product categories that tie in nicely with an excellent blog post on the subject that Todd Biske wrote a while back, containing some concepts he presented at Burton's Catalyst 2006.

One of the confusing things about any market is the wide variety of names used to describe the products and solutions that fit into the wider technology landscape. There is a distinct set of SOA product categories, or "SOA What" as David calls them. The problem is that there is a lot of overlap in responsibilities between those categories. For example, SOA gateways and SOA management products both provide a similar set of proxy-focused capabilities: content-based routing, transformation, even service enablement. But SOA gateways rarely include the robust monitoring and alerting side of management, choosing instead to integrate with existing network management systems (NMS) like HP OpenView or IBM's Tivoli. That makes it difficult to know what you're getting out of an SOA management product, because it could mean a completely different set of responsibilities coming from vendor X than it does from vendor Y.

So after thinking about it for a while, I think that by combining David's "SOA What" categories with Todd's list of intermediary responsibilities, we can come up with two distinct categories that make the picture of the market a bit clearer and simpler. It comes down to products falling into two primary focus areas with regard to SOA services: managing them and delivering them. Delivery requires a different focus than management, and vice versa. Delivery is concerned with protocols, security, and functionality traditionally associated with the network, while management is primarily service-focused, concerned with access, integration, and monitoring and, in the case of design-time governance, managing the service lifecycle.

So maybe one of the ways of clearing up the muddy landscape in SOA-land is for vendors to give a clearer picture of their SOA "focus". For example, F5 is, in this model, clearly focused on SOA delivery and not management, and I'd argue that Progress' portfolio is primarily focused on management, not delivery, with some design-time governance and testing thrown in for good measure (now where does that fit??).

I'm not sure if there's really a good solution to the issues raised by David, but I think one way to start would be to delineate the responsibilities of intermediaries across the infrastructure. It certainly makes the picture a lot clearer, and by associating responsibilities (and not features) with a particular category it's easier for someone to understand what a particular vendor's solution offers. Part of the problem is certainly getting on the same page and using the same language. What's funny about that is that one of the premises of SOA was to get business and IT folks using the same language to better align IT with the business. We need to apply that to the vendor and product landscape as well, so that as vendors we can better align our products with customer needs.
If you want high availability and load balancing of services, you should be able to easily find a vendor focusing on SOA delivery, and not have to wonder whether the delivery features available in a product that focuses on management or governance are going to be "good enough" or not. And vice versa.