The BIG-IP Application Security Manager Part 5: XML Security
This is the fifth article in a 10-part series on the BIG-IP Application Security Manager (ASM). The first four articles in this series are:

• What is the BIG-IP ASM?
• Policy Building
• The Importance of File Types, Parameters, and URLs
• Attack Signatures

This fifth article in the series will discuss the basic concepts of XML and how the BIG-IP ASM provides security for XML.

XML Concepts

The Extensible Markup Language (XML) provides a common syntax for data transfer between systems. XML doesn't specify how to display data (HTML is used for that); rather, it is concerned with describing data that can be manipulated and presented using other languages. XML documents are built on a core set of basic nested structures, and developers can decide how tags are named and organized. XML is used extensively in web applications today, so it's important to have a basic understanding of it, as well as a strong defense for this critical technology.

The XML specification (described in this W3C publication) defines an XML document to be well-formed when it satisfies a list of syntax rules provided in the specification. If an XML processor encounters a violation of these rules, it is required to stop processing the file and report the error. A valid XML document is defined as a well-formed document that also conforms to the rules of a schema like the Document Type Definition (DTD) or the newer and more powerful XML Schema Definition (XSD). It's important to have valid XML documents when implementing and using web services.

Web Service

A web service is any service that is available over a network and that uses standardized XML syntaxes. You've heard of the "...as a Service" offerings, right? Well, this is the stuff we're talking about, and XML plays a big role. On a somewhat tangential note, it seems like there are too many "as a Service" acronyms flying around right now...I really need to make up a hilarious one just for the heck of it. I'll let you know how that goes...

Anyway, back to reality...a web service architecture consists of a service provider, a service requestor, and a service registry. The service provider implements the service and publishes it to the service registry using Universal Description, Discovery, and Integration (UDDI), an XML-based registry that allows users to register and locate web service applications. The service registry centralizes the services published by the service provider. The service requestor finds the service using UDDI and retrieves the Web Services Definition Language (WSDL) file, an XML-based interface used for describing the functionality offered by the web service. The service requestor is able to consume the service based on all the goodness found in the WSDL. Then, the service requestor can send messages to the service provider using a service transport like the Simple Object Access Protocol (SOAP). SOAP is a protocol specification for exchanging structured information when implementing web services...it relies on XML for its message format. Now you can see why XML is so closely tied to web services. All this craziness is shown in the diagram below. I know what you're thinking...it's difficult to find anything more exciting than this topic! (Picture copied from Wikipedia)

Because XML is used for data transfer in the web services architecture, it's important to inspect, validate, and protect XML transactions.
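Before moving on to the ASM specifics, it's worth making the well-formed versus valid distinction concrete. The document below is well-formed (one root element, properly nested, every tag closed), but it's only valid if it also conforms to a schema. The schema file and element names here are hypothetical, purely for illustration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Well-formed: single root element, proper nesting, all tags closed -->
<order xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:noNamespaceSchemaLocation="order.xsd">  <!-- hypothetical XSD -->
  <id>1001</id>
  <item quantity="2">widget</item>
</order>
```

If order.xsd declares that <id> must be an integer and a client sends <id>abc</id> instead, the document is still well-formed but no longer valid - exactly the kind of violation a schema-aware device can catch before it ever reaches your application.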
Fortunately, the BIG-IP ASM can protect several kinds of applications, including:

• Web services that use HTTP as a transport layer for XML data
• Web services that use encryption and decryption in HTTP requests
• Web services that require verification and signing using digital signatures
• Web applications that use XML for client-server data communications (e.g. Microsoft Outlook Web Access)

ASM Configuration

Before you can begin protecting your XML content, you have to create a security policy using the "XML and Web Services" option. After you create the security policy, you create an XML profile and associate it with the XML security policy. You can read more about creating policies in the Policy Building article in this series. To create an XML profile, you navigate to Application Security >> Content Profiles >> XML Profiles. When all this is done, the XML profile will protect XML applications in the following ways:

• Validate XML formatting
• Mask sensitive data
• Enforce compliance with XML schema files or WSDL documents
• Provide information leakage protection
• Offer XML encryption and XML signatures
• Offer XML content-based routing and XML switching
• Offer XML parser protection against DoS attacks
• Encrypt and decrypt parts of SOAP web services

Validation resources provide the ASM with critical information about the XML data or web services application that the XML profile is protecting. As discussed earlier, many XML applications have a schema file for validation (i.e. DTD or XSD) or a WSDL file that describes the language used to communicate with remote users. The XML profile is used to validate whether the incoming traffic complies with the predefined schemas or WSDL files.

The following screenshot shows the configuration of the XML profile in the ASM. Notice all the different features it provides. You can download the all-important configuration files (WSDL), you can associate attack signatures with the profile (protecting against things like XML parser attacks -- XML Bombs or External Entity Attacks), you can allow/disallow meta characters, and you can configure sensitive data protection for a specific namespace and a specific element or attribute. Another really cool thing is that most of these features are turned on/off using simple checkboxes. This is really cool and powerful stuff! I won't bore you with all the details of each setting, but suffice it to say, this thing lets you do tons of great things to protect your XML data.

Well, that does it for this ASM article. I hope this sheds some light on how to protect your XML data. And, if you're one of the users who implements anything "as a Service," make sure you protect all that data by turning on the BIG-IP ASM. The next time someone throws an XML bomb your way, you'll be glad you did!

Update: Now that the article series is complete, I wanted to share the links to each article. If I add any more in the future, I'll update this list.

• What is the BIG-IP ASM?
• Policy Building
• The Importance of File Types, Parameters, and URLs
• Attack Signatures
• XML Security
• IP Address Intelligence and Whitelisting
• Geolocation
• Data Guard
• Username and Session Awareness Tracking
• Event Logging


Layer 7 Switching + Load Balancing = Layer 7 Load Balancing
Modern load balancers (application delivery controllers) blend traditional load-balancing capabilities with advanced, application-aware layer 7 switching to support the design of a highly scalable, optimized application delivery network. Here's the difference between the two technologies, and the benefits of combining them into a single application delivery controller.

LOAD BALANCING

Load balancing is the process of balancing load (application requests) across a number of servers. The load balancer presents to the outside world a "virtual server" that accepts requests on behalf of a pool (also called a cluster or farm) of servers and distributes those requests across all servers based on a load-balancing algorithm. All servers in the pool must contain the same content. Load balancers generally use one of several industry-standard algorithms to distribute requests. Some of the most common standard load-balancing algorithms are:

• round-robin
• weighted round-robin
• least connections
• weighted least connections

Load balancers are used to increase the capacity of a web site or application, ensure availability through failover capabilities, and improve application performance.

LAYER 7 SWITCHING

Layer 7 switching takes its name from the OSI model, indicating that the device switches requests based on layer 7 (application) data. Layer 7 switching is also known as "request switching", "application switching", and "content-based routing". A layer 7 switch presents to the outside world a "virtual server" that accepts requests on behalf of a number of servers and distributes those requests based on policies that use application data to determine which server should service which request. This allows the application infrastructure to be specifically tuned/optimized to serve specific types of content. For example, one server can be tuned to serve only images, another for execution of server-side scripting languages like PHP and ASP, and another for static content such as HTML, CSS, and JavaScript.

Unlike load balancing, layer 7 switching does not require that all servers in the pool (farm/cluster) have the same content. In fact, layer 7 switching expects that servers will have different content, thus the need to more deeply inspect requests before determining where they should be directed. Layer 7 switches are capable of directing requests based on URI, host, HTTP headers, and anything in the application message. The latter capability is what gives layer 7 switches the ability to perform content-based routing for ESBs and XML/SOAP services.

LAYER 7 LOAD BALANCING

By combining load balancing with layer 7 switching, we arrive at layer 7 load balancing, a core capability of all modern load balancers (a.k.a. application delivery controllers). Layer 7 load balancing combines the standard features of a load balancer with layer 7 switching to provide failover and improved capacity for specific types of content. This allows the architect to design an application delivery network that is highly optimized to serve specific types of content but is also highly available. Layer 7 load balancing allows additional features offered by application delivery controllers to be applied based on content type, which further improves performance by executing only those policies that are applicable to the content. For example, data security in the form of data scrubbing is likely not necessary on JPG or GIF images, so it need only be applied to HTML and PHP.
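As a sketch of what that per-content-type direction can look like in practice, here's a minimal iRule that sends requests to type-specific pools based on the requested path. The pool names are assumptions for illustration, not anything a BIG-IP ships with:

```tcl
# Sketch: layer 7 switching by content type.
# Assumes pools named image_pool, script_pool, and static_pool exist.
when HTTP_REQUEST {
    switch -glob [string tolower [HTTP::path]] {
        "*.jpg" -
        "*.gif" -
        "*.png" { pool image_pool }
        "*.php" -
        "*.asp" { pool script_pool }
        default { pool static_pool }
    }
}
```

Because the decision is made per request at layer 7, each pool's servers only ever see the content type they were tuned to serve.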
Layer 7 load balancing also allows for increased efficiency of the application infrastructure. For example, only two highly tuned image servers may be required to meet application performance and user concurrency needs, while three or four optimized servers may be necessary to meet the same requirements for PHP or ASP scripting services. Being able to separate out content based on type, URI, or data allows for better allocation of physical resources in the application infrastructure.


Load Balancing as an ESB Service
Most people, upon hearing the term "load balancing," immediately think of web and application servers deployed at the edge of the network. After all, that's where load balancing is most often used - to ensure that a public-facing web site is always as available and fast as possible. What many architects don't consider, however, is that in the process of deploying a SOA (Service Oriented Architecture), those same web and application servers end up residing deeper in the data center, away from the edge of the network. These web and application servers are hosting the services that make up a public-facing application or site, but aren't necessarily afforded the same consideration in terms of availability as the initial entry point into that application. This situation is compounded by the fact that there may be an ESB (Enterprise Service Bus) orchestrating that application, and that services critical to the application are not afforded the same measure of reliability as those closer to the edge of the network. While most ESBs are capable of load balancing these critical services, this capability is often limited and lacking the more robust and dynamic features offered by application delivery controllers (ADC).

Consider a public-facing service that takes advantage of an ESB: the Compliance and Shipping Services are critical to the public-facing service, as are the other services provided by the ESB. And yet in a typical implementation, only the public-facing service (the Order Management Service) will be afforded the reliability and optimization provided by an application delivery controller. It may be the case that the ESB is load balancing the Compliance and Shipping services, as simple load balancing capabilities are provided by most ESBs. While you might find the load balancing capabilities adequate, consider that most ESBs lack the health monitoring capabilities of application delivery controllers, in addition to being unable to balance the load based on real-time factors such as number of connections and response time, making it difficult to optimize your SOA.

The lessons learned by application delivery vendors in the past have not yet been incorporated into ESBs. Just because a server responds to an ICMP ping or can successfully open a TCP socket does not mean the application - or service - is actually running and returning valid responses. An ADC is capable of monitoring services at the application level, ensuring that not only is the service running but that it's also returning valid responses. This is something most ESBs are not capable of providing, which reduces the effectiveness of such health checks and can result in a service being treated as available even if it's malfunctioning.

When it's really important, i.e. when services are critical to a critical application - such as a customer/public facing application - then it's likely you should ensure that those back-end services are always available and optimized. An application delivery controller isn't limited to hanging out in the DMZ or on the edge of the network. In fact, an ADC can just as easily slide into your SOA and provide the same high availability, optimization, and failover services as it does for public facing applications.
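To make the health-check difference concrete, here's a hedged sketch of an application-layer monitor in (modern) tmsh syntax. The URI and the response marker are assumptions for illustration; the point is simply that "up" gets defined by a valid application response, not by an open socket:

```
# Sketch: mark the service up only if it returns a real, valid response.
# The /service/health URI and "OK" marker are hypothetical.
create ltm monitor http compliance_health {
    send "GET /service/health HTTP/1.1\r\nHost: compliance\r\nConnection: close\r\n\r\n"
    recv "OK"
    interval 5
    timeout 16
}
```

An ICMP ping would happily keep reporting that service as available even while it returned garbage; the recv string is what ties "available" to "actually working."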
By integrating an application delivery controller into the depths of the data center you can let the ESB do what it's best at doing - transformation, message enrichment, and reliable messaging - and remove the burden of performing tasks that are better suited to an ADC than an ESB, such as load balancing, protocol optimization, and health monitoring. The addition of an ADC to assist the ESB and ensure reliability also provides a layer of abstraction that fits nicely into your SOA and aligns with one of SOA's primary goals: agility.

SOA implementations are by their nature distributed. Composing applications from distributed services achieves agility and reuse, but introduces potential show-stopping problems, usually experienced at the edge of the network, into the heart of the data center. It is important to consider the impact of failure of a service on the application(s) that may be using that service, and if that application is critical, then its dependent services should also be treated as critical and worthy of the protection of an application delivery controller.


The Stealthy Ascendancy of JSON
While everyone was focused on cloud, JSON has slowly but surely been taking over the application development world.

It looks like the debate between XML and JSON may be coming to a close, with JSON poised to take the title of preferred format for web applications. If you don't consider these statistics to be impressive, consider that ProgrammableWeb indicated that its "own statistics on ProgrammableWeb show a significant increase in the number of JSON APIs over 2009/2010. During 2009 there were only 191 JSON APIs registered. So far in 2010 [August] there are already 223!" Today there are 1262 JSON APIs registered, which means a growth rate of 565% in the past eight months, nearly catching up to XML, which currently lists 2162 APIs. At this rate, JSON will likely overtake XML as the preferred format by the end of 2011.

This is significant to both infrastructure vendors and cloud computing providers alike, because it indicates a preference for a programmatic model that must be accounted for when developing services, particularly those in the PaaS (Platform as a Service) domain. PaaS has yet to grab developers' mindshare, and it may be that support for JSON will be one of the ways in which that mindshare is attracted. Consider the results of the "State of Web Development 2010" survey from Web Directions, in which developers were asked about their cloud computing usage; only 22% responded in the affirmative to utilizing cloud computing. But of those 22% that do leverage cloud computing, the providers they use are telling: PaaS represents a mere 7.35% of developers' use of cloud computing, with storage (Amazon S3) and IaaS (Infrastructure as a Service) garnering 26.89% of responses. Google App Engine is the dominant PaaS platform at the moment, most likely owing to the fact that it is primarily focused on JavaScript, UI, and other utility-style services, as opposed to Azure's middleware and definitely more enterprise-class focused services.

SaaS, too, is failing to recognize the demand from developers and the growing ascendancy of JSON. Consider this exchange on the Salesforce.com forums regarding JSON: "Come on salesforce lets get this done. We need to integrate, we need this [JSON]." If JSON continues its steady rise into ascendancy, PaaS and SaaS providers alike should be ready to support JSON-style integration, as its growth pattern indicates it is not going away but is instead picking up steam. Providers able to support JSON for PaaS and SaaS will have a competitive advantage over those that do not, especially as they vie for the hearts and minds of developers, who are, after all, their core constituency.

THE IMPACT

What the steady rise of JSON should trigger for providers and vendors alike is a need to support JSON as the means by which services are integrated, invoked, and data exchanged. Application delivery, service-provider and Infrastructure 2.0 focused solutions need to provide APIs that are JSON compatible and which are capable of handling the format to provide core infrastructure services such as firewalling and data scrubbing duties. The increasing use of JSON-based APIs to integrate with external, third-party services continues to grow, and the demand for enterprise-class services to support JSON will continue to rise.
There are drawbacks, and this steady movement toward JSON has in some cases a profound impact on the infrastructure and architectural choices made by IT organizations, especially in terms of providing for consistency of services across what is likely a very mixed-format environment. Identity and access management and security services may not be prepared to handle JSON APIs nor provide the same services as they have for XML, which through long-established usage and efforts comes with its own set of standards. Including social networking "streams" in applications and web sites is now as common as including images, but changes to APIs may make basic security chores difficult. Consider that Twitter - very quietly - has moved to supporting JSON only for its Streaming API. Organizations that were, as well they should, scrubbing such streams to prevent both embarrassing and malicious code from being integrated unknowingly into their sites may have suddenly found that the infrastructure providing such services no longer worked:

"API providers and developers are making their choice quite clear when it comes to choosing between XML and JSON. A nearly unanimous choice seems to be JSON. Several API providers, including Twitter, have either stopped supporting the XML format or are even introducing newer versions of their API with only JSON support. In our ProgrammableWeb API directory, JSON seems to be the winner. A couple of items are of interest this week in the XML versus JSON debate. We had earlier reported that come early December, Twitter plans to stop support for XML in its Streaming API."
--JSON Continues its Winning Streak Over XML, ProgrammableWeb (Dec 2010)

Similarly, caching and acceleration services may be confused by a change from XML to JSON; from a format that was well understood, and for which solutions were enabled with parsing capabilities, to one that is not.

IT'S THE DATA, NOT the API

The fight between JSON and XML is one we continue to see in a general sense. See, it isn't necessarily the API that matters, in the end, but the data format (the semantics) used to exchange that data. XML is considered unstructured, though in practice it's far more structured than JSON in the sense that there are meta-data standards for XML that constrain security, identity, and even application formats. JSON, however, although having been included natively in ECMA v5 (JSON data interchange format gets ECMA standards blessing), has very few standards aside from those imposed by frameworks and toolkits such as jQuery. This will make it challenging for infrastructure vendors to support services targeting application data - data scrubbing, web application firewall, IDS, IPS, caching, advanced routing - and continue to effectively deliver such applications without recognizing JSON as an option. The API has become little more than a set of URIs, and nearly all infrastructure directly related to application delivery is more than capable of handling them. It is the data, however, that presents a challenge, and which makes the developers' choice of formats so important in the big picture. It isn't just the application and integration that is impacted; it's the entire infrastructure and architecture that must adapt to support the data format. The World Doesn't Care About APIs - but it does care about the data, about the model. Right now, it appears that model is more than likely going to be presented in a JSON-encoded format.
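Those scrubbing chores don't have to stay broken while the tooling catches up, though. Here's a minimal, format-agnostic sketch of the idea as an iRule: inspect the proxied stream's payload for embedded script, whether it arrives as XML or JSON. The one-megabyte collection cap and the blunt "empty the whole payload" response are assumptions for illustration, not a production policy:

```tcl
# Sketch: scrub proxied stream responses for embedded script,
# regardless of whether the payload is XML or JSON.
when HTTP_RESPONSE {
    set ctype [string tolower [HTTP::header Content-Type]]
    if { $ctype contains "json" || $ctype contains "xml" } {
        HTTP::collect 1048576
    }
}
when HTTP_RESPONSE_DATA {
    if { [string tolower [HTTP::payload]] contains "<script" } {
        # Crude but safe: replace the suspect payload entirely.
        HTTP::payload replace 0 [HTTP::payload length] "{}"
    }
    HTTP::release
}
```

Real JSON-aware scrubbing would parse the structure rather than substring-match, which is exactly the capability vendors still need to build.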
• JSON data interchange format gets ECMA standards blessing
• JSON Continues its Winning Streak Over XML
• JSON versus XML: Your Choice Matters More Than You Think
• I am in your HTTP headers, attacking your application
• The Web 2.0 API: From collaborating to compromised
• Would you risk $31,000 for milliseconds of application response time?
• Stop brute force listing of HTTP OPTIONS with network-side scripting
• The New Distribution of The 3-Tiered Architecture Changes Everything
• Are You Scrubbing the Twitter Stream on Your Web Site?


Lightboard Lessons: OWASP Top 10 - XML External Entities
The OWASP Top 10 is a list of the most common security risks on the Internet today. XML External Entities comes in at the #4 spot in the latest edition of the OWASP Top 10. In this video, John discusses this security risk and outlines some mitigation steps to make sure your web application doesn't process malicious XML data and expose sensitive information.

Related Resources: Securing against the OWASP Top 10: XML External Entity attacks
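For the BIG-IP-minded, here's one hedged sketch of the mitigation idea: reject XML request bodies that declare a DOCTYPE or define entities before they ever reach a back-end parser. The 64 KB collection cap is an assumption for illustration, and an ASM policy (as covered in the video) is the production-grade way to do this rather than a hand-rolled iRule:

```tcl
# Sketch: block inline DTDs and entity definitions in XML request bodies.
# The 65536-byte cap is an assumption; ASM is the real answer.
when HTTP_REQUEST {
    if { [HTTP::method] eq "POST" &&
         [HTTP::header Content-Type] contains "xml" } {
        HTTP::collect 65536
    }
}
when HTTP_REQUEST_DATA {
    set body [string toupper [HTTP::payload]]
    if { $body contains "<!DOCTYPE" || $body contains "<!ENTITY" } {
        HTTP::respond 403 content "Forbidden"
    } else {
        HTTP::release
    }
}
```

Since legitimate payloads rarely need an inline DTD, refusing them outright closes off both external entity resolution and entity-expansion tricks in one move.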
SOAP vs REST: The war between simplicity and standards

SOA is, at its core, a design and development methodology. It embraces reuse through decomposition of business processes and functions into core services. It enables agility by wrapping services in an accessible interface that is decoupled from its implementation. It provides a standard mechanism for application integration that can be used internally or externally. It is, as they say, what it is.

SOA is not necessarily SOAP, though until the recent rise of social networking and Web 2.0 there was little real competition against the rising standard. But of late the adoption of REST and its use on the web-facing side of applications has begun to push around the incumbent. We still aren't sure who swung first. We may never know, and at this point it's irrelevant: there's a war out there, as SOAP and REST duke it out for dominance of SOA.

At the core of the argument is this: SOAP is weighted down by the very standards designed to promote interoperability (WS-I), security (WS-Security), and reliability (WS-Reliability). REST is a lightweight compared to its competitor, with no standards at all. Simplicity is its siren call, and it's being heard even in the far corners of corporate data centers. A February 2007 Evans Data survey found a 37% increase in those implementing or considering REST, with 25% considering REST-based Web Services as a simpler alternative to SOAP-based services. And that was last year, before social networking really exploded and the integration of Web 2.0 sites via REST-based services took over the face of the Internet. It was postulated then that WOA (Web Oriented Architecture) was the face of SOA (Service Oriented Architecture): REST on the outside was the way to go, but SOAP on the inside was nearly sacrosanct. Apparently that thought, while not wrong in theory, didn't take into account the fervor with which developers hold dear their beliefs regarding everything from language to operating system to architecture. The downturn in the economy hasn't helped, either, as REST certainly is easier and faster to implement, even with the plethora of development tools and environments available to carry all the complex WS-* standards that go along with SOAP like some sort of technology bellhop. Developers have turned to the standard-less option because it seems faster, cheaper, and easier. And honestly, we really don't like being told how to do things. I don't, and didn't, back in the day when the holy war was between structured and object-oriented programming.

While REST has its advantages, certainly, standard-less development can, in the long run, be much more expensive to maintain and manage than standards-focused competing architectures. The argument that standards-based protocols and architectures are difficult because there's more investment required to learn the basics as well as the associated standards is essentially a red herring. Without standards there is often just as much investment in learning data formats (are you using XML? JSON? CSV? Proprietary formats? WWW-URL encoded?) as there is in learning standards. Without standards there is necessarily more documentation required, which cuts into development time. Then there's testing: functional and vulnerability testing that necessarily has to be customized, because testing tools can't predict what format or protocol you might be using.
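To see what the fight actually looks like on the wire, compare the same hypothetical "get order status" call both ways; the service name and fields are made up purely for illustration:

```
# SOAP: one POST endpoint; the operation, and all the WS-* machinery,
# live inside a standardized envelope.
POST /orderService HTTP/1.1
Content-Type: text/xml

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <getOrderStatus><orderId>1001</orderId></getOrderStatus>
  </soap:Body>
</soap:Envelope>

# REST: the operation is just the verb and the URI. Simple -- but nothing
# here tells you (or your testing tools) what the response will look like.
GET /orders/1001/status HTTP/1.1
```

The REST version is undeniably easier to write and read; the SOAP version is undeniably easier to validate, secure, and test with off-the-shelf tooling. That trade is the whole war in miniature.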
And let's not forget the horror that is integration, and how proprietary application protocols made it a booming software industry replete with toolkits and libraries and third-party packages just to get two applications to play nice together. Conversely, standards that are confusing and complex lengthen the implementation cycle, but make integration and testing, as well as long-term maintenance, much less painful and less costly.

Arguing simplicity versus standards is ridiculous in the war between REST and SOAP, because simplicity without standards is just as detrimental to the costs and manageability of an application as standards without simplicity.


XML Scripts to deploy 3-Tier Application with Cisco APIC and F5 BIG-IP LTM [End of Life]
The F5 and Cisco APIC integration based on the device package and iWorkflow is End Of Life. The latest integration is based on the Cisco AppCenter named 'F5 ACI ServiceCenter'. Visit https://f5.com/cisco for updated information on the integration.

As described in a previous article, Under the hood of F5 BIG-IP LTM and Cisco ACI integration - Role of the device package, Cisco APIC provides the user with the ability to define a service graph to automate L4-L7 service insertion using the F5 BIG-IP device package. In this article, learn how to deploy an application with the Cisco APIC policy model and F5 BIG-IP LTM device package using Northbound API (XML) scripts. Let's look at the different APIC logical constructs before diving into the cookbooks of scripting.

Application Policy Infrastructure Controller (APIC) Policy Model

The Application Centric Infrastructure policy model provides a convenient way to specify application requirements, which the APIC then renders in the network infrastructure. The policy model consists of a number of constructs such as tenants, contexts, bridge domains, end point groups and service graphs. When a user or process initiates an administrative change to an object within the fabric, that change is first applied to the ACI policy model and then applied to the actual managed end point. All physical and logical components of the ACI fabric are represented as a hierarchical Management Information Tree (MIT). Some of the key components contained within the MIT are shown in the flow diagram.

Tenant

A tenant is essentially a 'container', used to house other constructs and objects in the policy model (such as contexts, bridge domains, contracts, filters and application profiles). Tenants can be completely isolated from each other, or can share resources. A tenant can be used to define administrative boundaries - administrators can be given access to specific tenants only, resulting in other tenants being completely inaccessible to them.

• Learn how to Create Tenant SJC
• Learn how to Create Tenant LAX

Contexts

A context is used to define a unique layer 3 forwarding domain within the fabric. One or more contexts can be created inside a tenant. A context is also known as a 'private network' and can be viewed as the equivalent of a VRF in the traditional networking world. As each context defines a separate layer 3 domain, IP addresses residing within a context can overlap with addresses in other contexts.

Bridge Domains and Subnets

A bridge domain is a construct used to define a layer 2 boundary within the fabric. A BD can be viewed as somewhat similar to regular VLANs in a traditional switching environment. BDs, however, are not subject to the same scale limitations as VLANs, and have a number of enhancements such as improved handling of ARP requests and no flooding behavior by default. A subnet defines the gateway(s) that will be used within a given bridge domain. This gateway will typically be used by hosts associated with a bridge domain as their first-hop gateway. Gateways defined within a bridge domain are pervasive across all leaf switches where that bridge domain is active.

End Point Groups (EPG)

The End Point Group (EPG) is one of the most important objects in the policy model and is used to define a collection of end points. An end point is a device connected to the fabric (either directly or indirectly) and has an address, a location and other attributes. End points are grouped together into an EPG, where policy can be more easily applied consistently across the ACI fabric.
An end point may be classified into an EPG based on a number of criteria, including:

• Virtual NIC
• Physical leaf port
• VLAN

Contracts

A contract is a policy construct used to define the communication between End Point Groups (EPGs). Without a contract between EPGs, no communication is possible between those EPGs. Within an EPG, a contract is not required to allow communication, as this is always allowed. An EPG will provide or consume a contract (or provide and consume different contracts). For example, EPG "Web" in the XML scripts will provide a contract which EPG "App" will consume. Similarly, EPG "App" provides separate contracts which are consumable by the "Web" and "DB" EPGs.

• Learn how to create contracts for Tenant SJC
• Learn how to create contracts for Tenant LAX

Filters

A filter is a rule specifying fields such as TCP port, protocol type, etc., and is referenced within a contract to define the communication allowed between EPGs in the fabric. A filter contains one or more "filter entries" that actually specify the rule.

Subjects

A subject is a construct contained within a contract which typically references a filter. For example, contract "Web" contains a subject named "Web-Subj", which references a filter named "Web-filter".

Application Profile

The Application Profile is the policy construct that ties multiple EPGs together with the contracts each EPG provides or consumes. An application profile contains as many EPGs as necessary that logically relate to the capabilities provided by an application.

• Learn how to create Application Profile for Tenant SJC
• Learn how to create Application Profile for Tenant LAX

Service Graph

A service graph is a chain of service functions such as a Web Application Firewall (WAF), load balancer or network firewall, including the sequence with which the service functions need to be applied. The graph defines these functions based on a user-defined policy for a particular application. One or more service appliances might be needed to render the services required by the service graph.

• Learn how to create Service Graph "WebGraph" and how to attach the graph to a contract in Tenant SJC
• Learn how to create Service Graph "WebGraph" and how to attach the graph to a contract in Tenant LAX

Creating a Device Cluster

• Learn how to create a Logical Device with device type Physical under Tenant mgmt
• Learn how to create F5 BIG-IP LTM concrete devices under the device cluster and configure high availability
• Learn how to bind the logical interfaces with the physical interfaces of BIG-IP LTM

Exporting a Device Cluster to Tenant SJC and LAX from Tenant mgmt

• Learn how to export the device cluster created in Tenant mgmt to Tenant SJC
• Learn how to export the device cluster created in Tenant mgmt to Tenant LAX

Setting up the Fabric for Service Insertion

• Learn how to set up the VMM domain to integrate APIC with the VMware vCenter environment to run BIG-IP LTM VE or server VMs
• Learn how to set up the physical domain and assign the VLAN namespace to enable datapath forwarding on leaf switches
• Learn how to set up the VLAN namespace to dynamically assign VLANs to end points

Wondering how to run these scripts? Here is the recipe: run the two scripts below within a Python environment and verify the configuration on the Cisco APIC and F5 BIG-IP LTM. Make sure you have a device package downloaded from download.f5.com and saved in the same directory as the scripts.

1. python request.py infra.cfg
2. python request.py tenant.cfg

The complete XML scripts directory can be downloaded from here.
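To give a feel for what those cfg payloads contain, here's a hedged sketch of the kind of XML the APIC accepts to create a tenant with a private network and bridge domain. The object classes (fvTenant, fvCtx, fvBD, fvSubnet) are standard APIC managed-object names, but the specific names and addressing are illustrative; the downloadable scripts are the authoritative version:

```xml
<!-- Sketch: a tenant payload, POSTed to https://<apic>/api/mo/uni.xml -->
<!-- Names and subnet are illustrative; see the downloadable scripts -->
<fvTenant name="SJC">
  <fvCtx name="SJC-ctx"/>
  <fvBD name="SJC-BD">
    <fvRsCtx tnFvCtxName="SJC-ctx"/>
    <fvSubnet ip="10.10.10.1/24"/>
  </fvBD>
</fvTenant>
```

Everything in the policy model - contracts, EPGs, service graphs - is expressed the same way: a hierarchy of managed objects posted as XML to the APIC's northbound API.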
Video

The recorded video here shows how to configure the ACI policy model to deploy an application on Cisco APIC and BIG-IP LTM through the APIC graphical user interface.


Heatmaps, iRules Style: Part2
Last week I talked about generating a heat map 100% via iRules, thanks to the geolocation magic in LTM systems, and the good people over at Google letting us use their charting API. This was an outstanding way to visualize the traffic coming to your application. For those interested in metrics it provides a great way to see this data in a visually pleasing manner. That said, it was pretty basic. All it showed was the United States which, for anyone that has used the internet much, is obviously not representative of the entire web. To be truly useful we'll need to show the entire world. That's simple enough. We'll update the region the map we're drawing zooms to, so it will look more like this:

Let's take a look at how this is going to work. Since we were collecting data based on state abbreviations before, we'll need to first switch that up to use country codes instead. We'll then change up our Google call so that we're setting the range covered by the map to the entire world, rather than just the US. While we're at it, let's change the name of the subtable we're using from states to countries, just to keep things more clear. What we end up with is some code that looks very familiar, if you've already seen last week's solution, with a few minor changes:

```
set chld ""
set chd ""
foreach country [table keys -subtable countries] {
    append chld $country
    append chd "[table lookup -subtable countries $country],"
}
set chd [string trimright $chd ","]
HTTP::respond 200 content "<HTML><center><font size=5>Here is your site's usage by Country:</font><br><br><br><img src='http://chart.apis.google.com/chart?cht=t&chd=&chs=440x220&chtm=world&chd=t:$chd&chld=$chld&chco=f5f5f5,edf0d4,6c9642,365e24,13390a' border='0'><br><br><br><br><a href='/resetmap'>Reset All Counters</a></center></HTML>"
```

So using that in place of the similar logic in last week's solution you can get a simple world view of the traffic passing through your site. That's great and all, but what if you can't see the detail you're looking for? What if you want to see the details of Asia's traffic and be able to decipher the patterns in Japan and the Middle East? What we really need is to build a simple interface to make this more of an application, and less of a single image displayed on a web page.

Well, first of all, we already have all the data collected that we'll need, if you think about it. We're already tracking the requests per country, so all we need to do is build out options to allow users to click a link and zoom to a different region of the map. To do this we'll set up some simple HTML navigation links at the bottom of the page being generated via the iRule, set up a switch structure to handle each URI the links pass back into the iRule, and use those to format the HTML appropriately so that we get the right Google charts call. That sounds more complicated than it is.
Here's what it looks like:

```
"/heatmap" {
    set chld ""
    set chd ""
    foreach country [table keys -subtable countries] {
        append chld $country
        append chd "[table lookup -subtable countries $country],"
    }
    set chd [string trimright $chd ","]
    HTTP::respond 200 content "<HTML><center><font size=5>Here is your site's usage by Country:</font><br><br><br><img src='http://chart.apis.google.com/chart?cht=t&chd=&chs=440x220&chtm=world&chd=t:$chd&chld=$chld&chco=f5f5f5,edf0d4,6c9642,365e24,13390a' border='0'><br><br>Zoom to region: <a href='/asia'>Asia</a> | <a href='/africa'>Africa</a> | <a href='/europe'>Europe</a> | <a href='/middle_east'>Middle East</a> | <a href='/south_america'>South America</a> | <a href='/usa'>United States</a> | <a href='/heatmap'>World</a><br><br><br><a href='/resetmap'>Reset All Counters</a></center></HTML>"
}
"/asia" {
    set chld ""
    set chd ""
    foreach country [table keys -subtable countries] {
        append chld $country
        append chd "[table lookup -subtable countries $country],"
    }
    set chd [string trimright $chd ","]
    HTTP::respond 200 content "<HTML><center><font size=5>Here is your site's usage by Country:</font><br><br><br><img src='http://chart.apis.google.com/chart?cht=t&chd=&chs=440x220&chtm=asia&chd=t:$chd&chld=$chld&chco=f5f5f5,edf0d4,6c9642,365e24,13390a' border='0'><br><br>Zoom to region: <a href='/asia'>Asia</a> | <a href='/africa'>Africa</a> | <a href='/europe'>Europe</a> | <a href='/middle_east'>Middle East</a> | <a href='/south_america'>South America</a> | <a href='/usa'>United States</a> | <a href='/heatmap'>World</a><br><br><br><a href='/resetmap'>Reset All Counters</a></center></HTML>"
}
…
```

That section can be repeated once for each available region that Google will let us view (Asia | Africa | Europe | Middle East | South America | United States | World). That then gives us something that looks like this:

As you can see, we now have a world view map that shows the heat of each country, and we have individual links that we can click on along the bottom to take us to a zoom of each country/region to get a more specific look at the info there. As an example, let's take a look at the data from Asia:

So we now have a nice little heatmapping application. It pulls up a world view of app traffic going to your site or app, it allows you to click around to the different regions of the world to get a more detailed view, and it even lets you reset the data at will. I can hear some among you asking "What about the states, though?". If I take away the state view of the US and give a world view, then I'm really trading one limitation for another. Ideally we'd be able to see both, right? If I want to be able to give a detailed view of both the countries around the world and the states within the US, then I need to expand my data collection a bit. I need to collect both country codes for incoming requests and state abbreviations, where applicable. This means creating a second sub-table within the iRule, and issuing a second whereis per request coming in.
Something like this should do:

```
set cloc [whereis [IP::client_addr] country]
set sloc [whereis [IP::client_addr] abbrev]
if {[table incr -subtable countries -mustexist $cloc] eq ""} {
    table set -subtable countries $cloc 1 indefinite indefinite
}
if {[table incr -subtable states -mustexist $sloc] eq ""} {
    table set -subtable states $sloc 1 indefinite indefinite
}
```

Above we're using the cloc (country location) and sloc (state location) variables to simultaneously track both country codes and state abbreviations in separate sub-tables within the iRule. This way we don't mix up CA (Canada) and CA (California) or similar crossovers and throw our counts off. When doing this, don't forget to update the resetmap case as well to empty both sub-tables, not just one. This also means that we'll need to slightly change the logic in the "usa" case as opposed to all of the other cases when doing a lookup. If the user wants to view the USA details, we need to do a subtable lookup on the states sub-table; everything else uses the countries sub-table. Not too horrible.

Okay, we now have heatmaps for all countries, all available zoom regions and a zoom to state level in the US, complete with some rudimentary HTML to make this feel like an application, not just a static image on a web page. Unfortunately, we also have around 140 lines of code, much of which is being repeated. There's no sense in repeating that HTML over and over, or those logic statements doing the lookups and whatnot. So it's time to take out the scalpel and start slicing and dicing, looking for unnecessary code.

I started with the HTML. There's just no reason to repeat that HTML in every single switch case. So I set that in some static variables in the RULE_INIT section and did away with it altogether in each switch case. Next, the actual iRules logic is identical whether I want to view asia or africa or europe or anything other than the US. The only difference is the HTML changing one word to tell the API where to zoom in. Using a little extra "zoom" logic, I was able to cut down most of that repetitive code as well, by having all of the switch cases other than the USA fall through to the world view case, giving us just two chunks of iRules logic to deal with. Not including the extra variables and tidbits, the core of those two chunks of logic is:

```
foreach country [table keys -subtable countries] {
    append chld $country
    append chd "[table lookup -subtable countries $country],"
}

foreach state [table keys -subtable states] {
    append chld $state
    append chd "[table lookup -subtable states $state],"
}
```

Don't stop there, though, there's more to trim! With some more advanced trickery we can combine these two table lookups into a single piece of logic.
When all is said and done, here is the final iRule trimmed down to fighting form, with a single switch case handling the presentation of all the possible heatmaps generated by Google...pretty cool stuff:

```
when RULE_INIT {
    set static::resp1 "<HTML><center><font size=5>Here is your site's usage by Country:</font><br><br><br><img src='http://chart.apis.google.com/chart?cht=t&chd=&chs=440x220&chtm="
    set static::resp2 "&chco=f5f5f5,edf0d4,6c9642,365e24,13390a' border='0'><br><br>Zoom to region: <a href='/asia'>Asia</a> | <a href='/africa'>Africa</a> | <a href='/europe'>Europe</a> | <a href='/middle_east'>Middle East</a> | <a href='/south_america'>South America</a> | <a href='/usa'>United States</a> | <a href='/heatmap'>World</a><br><br><br><a href='/resetmap'>Reset All Counters</a></center></HTML>"
}

when HTTP_REQUEST {
    switch -glob [string tolower [HTTP::uri]] {
        "/asia" -
        "/africa" -
        "/europe" -
        "/middle_east" -
        "/south_america" -
        "/usa" -
        "/world" -
        "/heatmap*" {
            set chld ""
            set chd ""
            set zoom [string map {"/" "" "heatmap" "world"} [HTTP::uri]]

            ## Configure the table query to be based on the countries subtable or the states subtable ##
            if {$zoom eq "usa"} {
                set region "states"
            } else {
                set region "countries"
            }

            ## Get a list of all states or countries and the associated count of requests from that area ##
            foreach rg [table keys -subtable $region] {
                append chld $rg
                append chd "[table lookup -subtable $region $rg],"
            }
            set chd [string trimright $chd ","]

            ## Send back the pre-formatted response, set in RULE_INIT, combined with the map zoom, list of areas, and request count ##
            HTTP::respond 200 content "${static::resp1}${zoom}&chd=t:${chd}&chld=${chld}${static::resp2}"
        }
        "/resetmap" {
            foreach country [table keys -subtable countries] {
                table delete -subtable countries $country
            }
            foreach state [table keys -subtable states] {
                table delete -subtable states $state
            }
            HTTP::respond 200 content "<HTML><center><br><br><br><br><br><br>Table Cleared.<br><br><br> <a href='/heatmap'>Return to Map</a></HTML>"
        }
        default {
            ## Look up country & state locations ##
            set cloc [whereis [IP::client_addr] country]
            set sloc [whereis [IP::client_addr] abbrev]

            ## If the IP doesn't resolve to anything, pick a random IP (useful for testing on private networks) ##
            if {($cloc eq "") and ($sloc eq "")} {
                set ip [expr { int(rand()*255) }].[expr { int(rand()*255) }].[expr { int(rand()*255) }].[expr { int(rand()*255) }]
                set cloc [whereis $ip country]
                set sloc [whereis $ip abbrev]
            }

            ## Set Country ##
            if {[table incr -subtable countries -mustexist $cloc] eq ""} {
                table set -subtable countries $cloc 1 indefinite indefinite
            }

            ## Set State ##
            if {[table incr -subtable states -mustexist $sloc] eq ""} {
                table set -subtable states $sloc 1 indefinite indefinite
            }
            HTTP::respond 200 content "Added"
        }
    }
}
```

There we have it: an appropriately trimmed down and sleek application to provide worldwide or regional views of heatmaps showing traffic to your application, all generated 100% via iRules. Again, this couldn't be done without the awesome geolocation abilities of LTM or the Google charting API or, of course, iRules. In the next installment we'll dig even deeper to see how to turn this application into something even more valuable to those interested in what the users of your site or app are up to.


A Billion More Laughs: The JavaScript hack that acts like an XML attack
Don is off in Lowell working on a project with our ARX folks, so I was working late last night (finishing my daily read of the Internet) and ended up reading Scott Hanselman's discussion of threads versus processes in Chrome and IE8. It was a great read, if you like that kind of thing (I do), and it does a great job of digging into some of the RAMifications (pun intended) of the new programmatic models for both browsers. But this isn't about processes or threads; it's about an interesting comment that caught my eye:

This will make IE8 Beta 2 unresponsive...

```
t = document.getElementById("test");
while(true) { t.innerHTML += "a"; }
```

What really grabbed my attention is that this little snippet of code is so eerily similar to the XML "Billion Laughs" exploit, in which an entity is expanded recursively for, well, forever, and essentially causes a DoS attack on whatever system (browser, server) was attempting to parse the document.

What makes scripts like this scary is that many forums and blogs that are less vehement about disallowing HTML and script can be easily exploited by a code snippet like this, which could cause the browser of all users viewing the infected post to essentially "lock up". This is one of the reasons why IE8 and Chrome moved to a more segregated tabbed model, with each tab basically its own process rather than a thread - to prevent corruption in one from affecting others. But given the comment, this doesn't seem to be the case with IE8 (there's no indication Chrome was tested with this code, so whether it handles the situation or not is still to be discovered). This is likely because it's not a corruption, it's valid JavaScript. It just happens to be consuming large quantities of memory very quickly and not giving the other processes in other tabs in IE8 a chance to execute.

The reason the JavaScript version was so intriguing is that it's nearly impossible to stop. The XML version can be easily detected and prevented by an XML firewall, and most modern XML parsers can be configured to stop parsing and thus prevent the document from wreaking havoc on a system. But this JavaScript version is much more difficult to detect, and thus prevent, because it's code and therefore not confined to a specific format with specific syntactical attributes. I can think of about 20 different versions of this script - all valid, and all of them different enough to make pattern matching or regular expressions useless for detection. And I'm no evil genius, so you can bet there are many more.

The best option for addressing this problem? Disable scripts. The conundrum is that disabling scripts can cause many, many sites to become unusable because they are taking advantage of AJAX functionality, which requires...yup, scripts. You can certainly enable scripts only on specific sites you trust (which is likely what most security folks would suggest should be default behavior anyway), but that's a PITA, and the very users we're trying to protect aren't likely to take the time to do this - or even understand why it's necessary. With the increasing dependence upon scripting to provide functionality for RIAs (Rich Interactive Applications), we're going to have to figure out how to address this problem, and address it soon. Eliminating scripting is not an option, and a default deny policy (essentially whitelisting) is unrealistic. Perhaps it's time for signed scripts to make a comeback.
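For reference, the XML original that the script so closely resembles looks like the snippet below - truncated to three entity levels here, where the widely published form nests ten (which is what earns it the "billion"). Consider this a from-memory sketch of the well-known payload rather than a copy of any specific advisory:

```xml
<?xml version="1.0"?>
<!DOCTYPE lolz [
  <!ENTITY lol "lol">
  <!ENTITY lol2 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
  <!ENTITY lol3 "&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;">
  <!-- ...continues through &lol9;, each entity expanding the previous one tenfold -->
]>
<lolz>&lol9;</lolz>
```

Ten levels of tenfold expansion yields roughly a billion "lol"s from a few hundred bytes of input - but because the attack lives in a fixed, declarative format, a parser or XML firewall can spot the entity declarations and refuse to expand them. The JavaScript variant offers no such fixed structure to match on, which is precisely the problem.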
Restful Access to BIG-IP Subtables

Surely by now you have tasted the goodness that is tables. No? Stop now and go read up on the table command. No, really, right now! I'll wait…

OK, now that you are a tables expert, it's clear to you that this is very powerful, but as I'm sure you read in the fine print, there's no way to get to the table data outside of tmm. That means no shell access, no iControl access, no nada. But, as is often the case, iRules to the rescue!

The REST interface

REST came about as a lightweight response to the heavyweight champ SOAP standard for web services. With REST, the HTTP methods are used as an API. I'll use two approaches here. The first is a cheat of sorts: it's pretty common for developers to put the parameters in the URI instead of using PUT and DELETE, so I'll show that first. With the second approach, I'll use the normal REST methods (GET/POST/PUT/DELETE) to determine the operation. The table below (no pun intended) shows the actions the iRule will take given the request.

Operation            Approach 1 (action in URI)       Approach 2 (HTTP method)
list all keys        GET /table/lookup/               GET /table/
lookup one key       GET /table/lookup/key/           GET /table/key/
add a key/value      GET /table/add/key/value/        POST /table/key/value/
replace a value      GET /table/replace/key/value/    PUT /table/key/value/
delete a key         GET /table/delete/key/           DELETE /table/key/

The response to either approach should be formatted in XML, but I'll leave that exercise to you.

Common Logic

As shown in the table above, for approach one every request will be a GET, so I need an action keyword to tell the iRule which table operation to use. The actions are lookup, add, replace, & delete. For the second approach, the HTTP method is used as the operator, so I only need to supply the table information. I built a class to control which tables could be accessed and manipulated, but one should probably put more controls (source IPs allowed, maybe some authentication) around it as well. Finally, I'm using indefinite on the timeout and lifetime just so I don't have to worry about my k/v pairs disappearing during testing. This might not be desirable in a production environment, as indefinite times X connections/second could lead to a very large and perhaps catastrophic memory footprint.

Approach 1

```
# class p_subtables {
#   "foo"
#   "bar"
# }

when HTTP_REQUEST {

    set full_uri [split [URI::path [HTTP::uri]] "/"]

    if { [class match [lindex $full_uri 1] equals p_subtables] } {

        switch [llength $full_uri] {
            4 { scan [lrange $full_uri 1 end-1] %s%s tname action }
            5 { scan [lrange $full_uri 1 end-1] %s%s%s tname action key }
            6 { scan [lrange $full_uri 1 end-1] %s%s%s%s tname action key val }
            default { HTTP::respond 200 content "<HTML><BODY>ERROR</BODY></HTML>" }
        }
        switch $action {
            "lookup" {
                if { [info exists tname] && [info exists key] } {
                    set kvpair [table lookup -notouch -subtable $tname $key]
                } elseif { [info exists tname] } {
                    foreach tkey [table keys -subtable $tname] {
                        lappend kvpair "$tkey:[table lookup -notouch -subtable $tname $tkey]"
                    }
                } else { HTTP::respond 200 content "<HTML><BODY>Table and/or Key information invalid</BODY></HTML>" }
                HTTP::respond 200 content "<HTML><BODY>$kvpair</BODY></HTML>"
            }
            "add" {
                if { [info exists tname] && [info exists key] && [info exists val] } {
                    table add -subtable $tname $key $val indefinite indefinite
                    HTTP::respond 200 content "<HTML><BODY>SUCCESS</BODY></HTML>"
                } else { HTTP::respond 200 content "<HTML><BODY>Error! Must supply /table/key/value/</BODY></HTML>" }
            }
            "replace" {
                if { [info exists tname] && [info exists key] && [info exists val] } {
                    table replace -subtable $tname $key $val indefinite indefinite
                    HTTP::respond 200 content "<HTML><BODY>SUCCESS</BODY></HTML>"
                } else { HTTP::respond 200 content "<HTML><BODY>Error! Must supply /table/key/value/</BODY></HTML>" }
            }
            "delete" {
                if { [info exists tname] && [info exists key] } {
                    table delete -subtable $tname $key
                    HTTP::respond 200 content "<HTML><BODY>SUCCESS</BODY></HTML>"
                } else { HTTP::respond 200 content "<HTML><BODY>Error! Must supply /table/key/</BODY></HTML>" }
            }
            default { HTTP::respond 200 content "<HTML><BODY>Not a valid method for this interface</BODY></HTML>" }
        }
    } else { HTTP::respond 200 content "<HTML><BODY>Not Permitted</BODY></HTML>" }
}
```

Approach 2

```
# class p_subtables {
#   "foo"
#   "bar"
# }

when HTTP_REQUEST {

    set full_uri [split [URI::path [HTTP::uri]] "/"]

    if { [class match [lindex $full_uri 1] equals p_subtables] } {

        switch [llength $full_uri] {
            3 { set tname [lindex $full_uri 1] }
            4 { scan [lrange $full_uri 1 end-1] %s%s tname key }
            5 { scan [lrange $full_uri 1 end-1] %s%s%s tname key val }
            default { HTTP::respond 200 content "<HTML><BODY>ERROR</BODY></HTML>" }
        }
        switch [HTTP::method] {
            "GET" {
                if { [info exists tname] && [info exists key] } {
                    set kvpair [table lookup -notouch -subtable $tname $key]
                } elseif { [info exists tname] } {
                    foreach tkey [table keys -subtable $tname] {
                        lappend kvpair "$tkey:[table lookup -notouch -subtable $tname $tkey]"
                    }
                } else { HTTP::respond 200 content "<HTML><BODY>Table and/or Key information invalid</BODY></HTML>" }
                HTTP::respond 200 content "<HTML><BODY>$kvpair</BODY></HTML>"
            }
            "POST" {
                if { [info exists tname] && [info exists key] && [info exists val] } {
                    table add -subtable $tname $key $val indefinite indefinite
                    HTTP::respond 200 content "<HTML><BODY>SUCCESS</BODY></HTML>"
                } else { HTTP::respond 200 content "<HTML><BODY>Error! Must supply /table/key/value/</BODY></HTML>" }
            }
            "PUT" {
                if { [info exists tname] && [info exists key] && [info exists val] } {
                    table replace -subtable $tname $key $val indefinite indefinite
                    HTTP::respond 200 content "<HTML><BODY>SUCCESS</BODY></HTML>"
                } else { HTTP::respond 200 content "<HTML><BODY>Error! Must supply /table/key/value/</BODY></HTML>" }
            }
            "DELETE" {
                if { [info exists tname] && [info exists key] } {
                    table delete -subtable $tname $key
                    HTTP::respond 200 content "<HTML><BODY>SUCCESS</BODY></HTML>"
                } else { HTTP::respond 200 content "<HTML><BODY>Error! Must supply /table/key/</BODY></HTML>" }
            }
            default { HTTP::respond 200 content "<HTML><BODY>Not a valid method for this interface</BODY></HTML>" }
        }
    } else { HTTP::respond 200 content "<HTML><BODY>Not Permitted</BODY></HTML>" }
}
```

The Test

You can use a browser for the first approach, as everything is a GET. For the second approach, cURL is great for the ease of method allocation. I tested both approaches with cURL; results are below.
Approach 1

```
jrahm@jrahm-dev:~$ curl http://10.10.20.50/foo/add/client1/jasonrahm/
<HTML><BODY>SUCCESS</BODY></HTML>
jrahm@jrahm-dev:~$ curl http://10.10.20.50/foo/add/client2/colinwalker/
<HTML><BODY>SUCCESS</BODY></HTML>
jrahm@jrahm-dev:~$ curl http://10.10.20.50/foo/lookup/
<HTML><BODY>client1:jasonrahm client2:colinwalker</BODY></HTML>
jrahm@jrahm-dev:~$ curl http://10.10.20.50/foo/lookup/client1/
<HTML><BODY>jasonrahm</BODY></HTML>
jrahm@jrahm-dev:~$ curl http://10.10.20.50/foo/replace/client1/jeffbrowning/
<HTML><BODY>SUCCESS</BODY></HTML>
jrahm@jrahm-dev:~$ curl http://10.10.20.50/foo/lookup/client1/
<HTML><BODY>jeffbrowning</BODY></HTML>
jrahm@jrahm-dev:~$ curl http://10.10.20.50/foo/delete/client2/
<HTML><BODY>SUCCESS</BODY></HTML>
jrahm@jrahm-dev:~$ curl http://10.10.20.50/foo/lookup/
<HTML><BODY>client1:jeffbrowning</BODY></HTML>
```

Approach 2

```
jrahm@jrahm-dev:~$ curl -X POST http://10.10.20.50/foo/client1/jasonrahm/
<HTML><BODY>SUCCESS</BODY></HTML>
jrahm@jrahm-dev:~$ curl -X POST http://10.10.20.50/foo/client2/colinwalker/
<HTML><BODY>SUCCESS</BODY></HTML>
jrahm@jrahm-dev:~$ curl -X GET http://10.10.20.50/foo/
<HTML><BODY>client1:jasonrahm client2:colinwalker</BODY></HTML>
jrahm@jrahm-dev:~$ curl -X PUT http://10.10.20.50/foo/client1/joepruitt/
<HTML><BODY>SUCCESS</BODY></HTML>
jrahm@jrahm-dev:~$ curl -X GET http://10.10.20.50/foo/
<HTML><BODY>client1:joepruitt client2:colinwalker</BODY></HTML>
jrahm@jrahm-dev:~$ curl -X DELETE http://10.10.20.50/foo/client2/
<HTML><BODY>SUCCESS</BODY></HTML>
jrahm@jrahm-dev:~$ curl -X GET http://10.10.20.50/foo/
<HTML><BODY>client1:joepruitt</BODY></HTML>
```

Impressive, no? Much thanks to the genius that is Matt Cauthorn, who seeded the idea internally. So, now that you have access, what will you do with such power? I have several use cases I could share, but I'll wait to update until the mindshare flows in. Comment away!