protocols
Load balance messaging protocols like AMQP, MQTT, STOMP over TLS for ActiveMQ Artemis
Messaging protocols like AMQP, MQTT, STOMP, OpenWire and HornetQ can all be load balanced through the F5. While building the setup I encountered some issues; there is some specific configuration that needs to be applied, which I will try to clarify.

The setup: devices sending AMQP messages through the F5 to a backend server over TLS. The F5 is configured as a full proxy. The virtual server was a pretty basic config: a standard virtual server with TCP, HTTP and SSL client & server profiles, plus a websocket profile and SNAT automap enabled. I selected the virtual server which had the correct certificate in the client SSL profile. The client initiating the request has our root and issuing CA trusted in its truststore.

Using tcpdump and Wireshark, I saw the correct TLS handshakes. Application data over TLS is being sent, but not much data, and due to an error *** the client sends a TCP reset. The F5 will only apply a profile when that specific kind of data is detected (the trigger). So yes, there are TCP profiles to handle TCP and I have SSL client & server profiles to handle TLS. Strangely, it doesn't work. In the capture I took on the F5, Wireshark sees the TLS application data as http-over-tls. Hmm... what if the F5 did the same? Then it would apply the HTTP profile to the AMQP data, and that might screw things up. Disable HTTP processing for that hostname: bingo.

The next problem: the hostname, normally available via [HTTP::host], is not in scope in our events; it is only populated when the HTTP profile is triggered. So the solution is to get the hostname, in this case the SNI (Server Name Indication), from the TLS Client Hello in the CLIENTSSL_HANDSHAKE event, and check the SNI value to decide whether to disable HTTP processing. Caveat: this only works up to TLS 1.2. When TLS 1.3 is used with encrypted SNI, another solution is needed.

Solution in short: it should work using a separate virtual server with only TCP and SSL client & server profiles and a default load balancing pool. If your setup is more complex and you are reusing an existing virtual server, do the following. I use a combination of an iRule and data group lists to extract the SNI, disable HTTP processing and send the traffic to the right pool. In our current setup the SNI is also inserted server-side; I think this step is optional, but I'll paste that code too.

Data group list dgl_vs01_sni_targetpool: key messaging.company.local, with the target pool as its value.
Data group list dgl_vs01_disable_http_processing: key messaging.company.local, without a value.
(A tmsh sketch for creating these data groups follows at the end of this post.)

    when CLIENTSSL_HANDSHAKE {
        if { [SSL::extensions exists -type 0] } {
            set dgl "dgl_vs01_sni_targetpool"
            set dgl_nohttp "dgl_vs01_disable_http_processing"
            # read SNI value (TLS extension type 0) and place it into the variable sni_value
            binary scan [SSL::extensions -type 0] {@9A*} sni_value
            log local0. "sni value: [expr {[info exists sni_value] ? ${sni_value} : {not found} }]"
            # disable HTTP processing for AMQP, MQTT, STOMP, etc.
            if { [class match $sni_value equals $dgl_nohttp] } {
                HTTP::disable
                log local0. "HTTP Disabled for $sni_value"
            }
            if { [class match $sni_value equals $dgl] } {
                set pool_target [class match -value [string tolower $sni_value] equals $dgl]
                pool $pool_target
                log local0. "pool chosen for $sni_value, pool $pool_target"
            }
        }
    }

    when SERVERSSL_CLIENTHELLO_SEND {
        # inject SNI server-side
        if { [class match $sni_value equals $dgl] } {
            SSL::extensions insert [binary format SSScSa* 0 [expr { [set sni_length [string length $sni_value]] + 5 }] [expr { $sni_length + 3 }] 0 $sni_length $sni_value]
            log local0. "SNI inserted $sni_value"
        }
    }

For those less comfortable with iRules, you can do SNI-based load balancing like this: https://community.f5.com/t5/technical-articles/sni-routing-with-big-ip/ta-p/282018 but I'm not sure how to disable HTTP processing with a policy in the SSL client hello. If someone knows, please post below, thanks. Now you are all set! Good luck with your F5 adventures.

Documentation used:
https://clouddocs.f5.com/training/community/irules/html/class1/module1/iRuleEventsFlowHTTPS.html
https://community.f5.com/t5/crowdsrc/serverside-sni-injection-irule/ta-p/286745
https://community.f5.com/t5/crowdsrc/extracting-the-sni-server-name/ta-p/288029
https://activemq.apache.org/components/artemis/documentation/1.5.1/protocols-interoperability.html
Cipher string mismatch ??

Hi, I configured an SSL cipher string to support 6 different ciphers:

    config tmm --clientciphers TLSv1_2+ECDH-RSA-AES256-GCM-SHA384:TLSv1_2+ECDH-RSA-AES256-SHA384:TLSv1_2+ECDH-RSA-AES256-SHA:TLSv1_2+DHE-RSA-AES256-SHA256:TLSv1_2+DHE-RSA-AES256-SHA:TLSv1_2+DHE-RSA-AES256-GCM-SHA384

    ID  SUITE                             BITS  PROT    METHOD  CIPHER   MAC     KEYX
    0:  49202 ECDH-RSA-AES256-GCM-SHA384  256   TLS1.2  Native  AES-GCM  SHA384  ECDH_RSA
    1:  49194 ECDH-RSA-AES256-SHA384      256   TLS1.2  Native  AES      SHA384  ECDH_RSA
    2:  49167 ECDH-RSA-AES256-SHA         256   TLS1.2  Native  AES      SHA     ECDH_RSA
    3:  107   DHE-RSA-AES256-SHA256       256   TLS1.2  Native  AES      SHA256  EDH/RSA
    4:  57    DHE-RSA-AES256-SHA          256   TLS1.2  Native  AES      SHA     EDH/RSA
    5:  159   DHE-RSA-AES256-GCM-SHA384   256   TLS1.2  Native  AES-GCM  SHA384  EDH/RSA

I used that string in a ClientSSL profile, but when I tested my page with SSL Labs, only ciphers 3, 4 and 5 are supported. Any idea where the problem is?

Best regards, Spella
Learning EMC server stats for statistic collection

Once upon a time, not too long ago, the EMC's command line interface utilized a few known commands for gathering statistics. The commands I grew used to were called server_cifsstat and server_nfsstat, but then one day when I called upon their mighty powers they responded to me with an unexpected echo:

    Info 26306752352: server_2 : This command has been deprecated and replaced with server_stats command.

"Deprecated!?", I yelled into the air, fists raised high as if I was clutching doom! I slowly put my hands down, took a deep breath and started to focus my inner child towards more positive thoughts. Years of software development and design decisions played through my head as I realized, "there must be a new way of doing the old thing." I just had to search for an answer; much as my old friend Indiana Jones would do, I went into the deep dark caverns of documentation to solve the riddle. Okay, I'm really not that great at reading documentation, but once I found a command called server_stats (keyword "stats" gave it away to me) I realized I was heading down the right path… but I was still lost in the darkness. When I executed the command I received a cryptic message:

    [nasadmin@EMC-VNX-SIM ~]$ server_stats
    USAGE: server_stats
      -list |
      -info [-all|[,...]] |
      -service { -start [-port ] | -stop | -delete | -status } |
      -monitor -action {status|enable|disable} |
      [ [{ -monitor {statpath_name|statgroup_name}[,...] |
           -monitor {statpath_name|statgroup_name} [-sort ] [-order {asc|desc}] [-lines ] }...]
        [-count ] [-interval ] [-terminationsummary {no|yes|only}]
        [-format {text [-titles {never|once|}]|csv}] [-type {rate|diff|accu}]
        [-file [-overwrite]]

The command, which was pretty easy before, has turned into something a bit harder for me to wield but, thanks to some research, experimentation and friends here I can count on, we formulated a single-line statistics gathering command that would do what we needed. I emphasize friends that I can count on, because we have some pretty brilliant folks here. This is the magic sauce (all on one line):

    server_stats server_2 -monitor cifs.smb1,cifs.smb2,nfs.v2,nfs.v3,nfs.v4,cifs.global,nfs.basic -format csv -terminationsummary no -count 144 -interval 300 -type accu -file name-server_2.csv

This command will capture statistics for CIFS/SMB version 1 and 2 as well as NFS version 2, 3 and 4, along with a few more statistics that we may want in the future (bandwidth, other stats and goodies, etc.). It will capture 144 statistical snapshots in time, every 300 seconds, and save them into name-server_2.csv (a comma-separated file with a nice header).

One piece of the puzzle that took me by surprise was the -type accu option, which accumulates statistics upon each capture rather than starting back at a baseline of zero. You can also do 'diff' to capture the difference from interval to interval, which is nice… but unfortunately I am not able to utilize that feature. We have written tools to scan statistics on some storage devices like EMC and Network Appliance and, while this new command is super awesome, it's not consistent with anything else out there (even older releases prior to deprecation), so our in-house tools which calculate differences do the work for us (a small sketch of that difference calculation follows at the end of this post).

If you're looking to start working with the newer server_stats feature, I suggest using the online manual pages (man server_stats) to get a slightly more clear understanding of all the features and what they can do for you.

I believe the command is a bit large for what it needs to do for us, considering it deprecated a much simpler series of commands. However, we work with what we have and hopefully our example command line implementation will give you an understanding of how you can unlock the potential of server_stats for your own needs.
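Since the command above uses -type accu, each row of the CSV holds running totals rather than per-interval counts. Below is a toy Tcl sketch of the kind of difference calculation our tools perform; it assumes the first column is a timestamp, the remaining columns are accumulated numeric counters, and non-numeric rows are headers — check your own CSV layout before relying on it:

    # read the server_stats CSV and print per-interval deltas for each counter column
    set fh [open "name-server_2.csv" r]
    set prev {}
    while { [gets $fh line] >= 0 } {
        set cols [split $line ","]
        # skip header or malformed rows (second field must be numeric)
        if { ![string is double -strict [lindex $cols 1]] } { continue }
        if { [llength $prev] > 0 } {
            set deltas {}
            foreach cur [lrange $cols 1 end] old [lrange $prev 1 end] {
                lappend deltas [expr { $cur - $old }]
            }
            puts "[lindex $cols 0]: $deltas"
        }
        set prev $cols
    }
    close $fh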
The Many Faces of DDoS: Variations on a Theme or Two

Many denial of service attacks boil down to the exploitation of how protocols work and are, in fact, very similar under the hood. Recognizing these themes is paramount to choosing the right solution to mitigate the attack.

When you look across the "class" of attacks used to perpetrate a denial of service attack you start seeing patterns. These patterns are important in determining what resources are being targeted because they provide the means to implement solutions that mitigate the consumption of those resources while under an attack. Once you recognize the underlying cause of a service outage due to an attack you can enact policies and solutions that mitigate that root cause, which better serves to protect against the entire class of attacks rather than employing individual solutions that focus on specific attack types. This is because attacks are constantly evolving, and the attacks solutions protect against today will certainly morph into a variation on that theme, and solutions that protect against specific attacks rather than addressing the root cause will not necessarily be capable of defending against those evolutions.

In general, there are two types of denial of service attacks: those that target the network layers and those that target the application layer. And of course as we've seen this past week or so, attackers are leveraging both types simultaneously to exhaust resources and affect outages across the globe.

NETWORK DoS ATTACKS

Network-focused DoS attacks often take advantage of the way network protocols work innately. There's nothing wrong with the protocols, no security vulnerabilities, nada. It's just the way they behave and the inherent trust placed in the communication that takes place using these protocols. Still others simply attempt to overwhelm a single host with so much traffic that it falls over. Sometimes successful, other times it turns out the infrastructure falls over before the individual host and results in more a disruption of service than a complete denial, but with similar impact to the organization and customers.

SYN FLOOD

A SYN flood is an attack against a system for the purpose of exhausting that system's resources. An attacker launching a SYN flood against a target system attempts to occupy all available resources used to establish TCP connections by sending multiple SYN segments containing incorrect IP addresses. Note that the term SYN refers to a type of connection state that occurs during establishment of a TCP/IP connection.

More specifically, a SYN flood is designed to fill up a SYN queue. A SYN queue is a set of connections stored in the connection table in the SYN-RECEIVED state, as part of the standard three-way TCP handshake. A SYN queue can hold a specified maximum number of connections in the SYN-RECEIVED state. Connections in the SYN-RECEIVED state are considered to be half-open and waiting for an acknowledgement from the client. When a SYN flood causes the maximum number of allowed connections in the SYN-RECEIVED state to be reached, the SYN queue is said to be full, thus preventing the target system from establishing other legitimate connections. A full SYN queue therefore results in partially-open TCP connections to IP addresses that either do not exist or are unreachable. In these cases, the connections must reach their timeout before the server can continue fulfilling other requests.

ICMP FLOOD (Smurf)

The ICMP flood, sometimes referred to as a Smurf attack, is an attack based on a method of making a remote network send ICMP Echo replies to a single host. In this attack, a single packet from the attacker goes to an unprotected network's broadcast address. Typically, this causes every machine on that network to answer with a packet sent to the target.

UDP FLOOD

The UDP flood attack is most commonly a distributed denial-of-service attack (DDoS), where multiple remote systems are sending a large flood of UDP packets to the target.

UDP FRAGMENT

The UDP fragment attack is based on forcing the system to reassemble huge amounts of UDP data sent as fragmented packets. The goal of this attack is to consume system resources to the point where the system fails.

PING of DEATH

The Ping of Death attack is an attack with ICMP echo packets that are larger than 65535 bytes. Since this is the maximum allowed ICMP packet size, this can crash systems that attempt to reassemble the packet.

NETWORK ATTACK THEME: FLOOD

The theme with network-based attacks is "flooding". A target is flooded with some kind of traffic, forcing the victim to expend all its resources on processing that traffic and, ultimately, becoming completely unresponsive. This is the traditional denial of service attack that has grown into distributed denial of service attacks primarily because of the steady evolution of web sites and applications to handle higher and higher volumes of traffic. These are also the types of attacks with which most network and application components have had long years of experience and are thus well-versed in mitigating.

APPLICATION DoS ATTACKS

Application DoS attacks are becoming the norm primarily because we've had years of experience with network-based DoS attacks and infrastructure has come a long way in being able to repel such attacks. That and Moore's Law, anyway. Application DoS attacks are likely more insidious simply because like their network-based counterparts they take advantage of application protocol behaviors, but unlike their network-based counterparts they require far fewer clients to overwhelm a host. This is part of the reason application-based DoS attacks are so hard to detect – because there are fewer clients necessary (owing to the large chunks of resources consumed by a single client) they don't fit the "blast" pattern that is so typical of a network-based DoS. It can take literally millions of ICMP requests to saturate a host and its network, but it requires only tens of thousands of requests to consume the resources of an application host such that it becomes unreliable and unavailable. And given the ubiquitous nature of HTTP – over which most of these attacks are perpetrated – and the relative ease with which it is possible to hijack unsuspecting browsers and force their participation in such an attack – an attack can be in progress and look like nothing more than a "flash crowd" – a perfectly acceptable and in many industries desirable event.

A common method of attack involves saturating the target (victim) machine with external communications requests, so that the target system cannot respond to legitimate traffic, or responds so slowly as to be rendered effectively unavailable. In general terms, DoS attacks are implemented by forcing the targeted computer to reset, or by consuming its resources so that it can no longer provide its intended service, or by obstructing the communication media between the intended users and the victim so that they can no longer communicate adequately.

HTTP GET FLOOD

An HTTP GET flood is exactly as it sounds: it's a massive influx of legitimate HTTP GET requests that come from large numbers of users, usually connection-oriented bots. These requests mimic legitimate users and are nearly impossible for applications and even harder for traditional security components to detect. The result of this attack is similar to the effect: server errors, increasingly degraded performance, and resource exhaustion. This attack is particularly dangerous to applications deployed in cloud-based environments (public or private) that are enabled with auto-scaling policies, as the system will respond to the attack by launching more and more instances of the application. Limits must be imposed on auto-scaling policies to ensure the financial impact of an HTTP GET flood does not become overwhelming.

SLOW LORIS

Slowloris consumes resources by "holding" connections open by sending partial HTTP requests. It subsequently sends headers at regular intervals to keep the connections from timing out or being closed due to lack of activity. This causes resources on the web/application servers to remain dedicated to the clients attacking and keeps them unavailable for fulfilling legitimate requests.

SLOW HTTP POST

A slow HTTP POST is a twist on Slowloris in which the client sends POST headers with a legitimate content-length. After the headers are sent the message body is transmitted at slow speed, thus tying up the connection (server resources) for long periods of time. A relatively small number of clients performing this attack can effectively consume all resources on the web/application server and render it useless to legitimate users.

APPLICATION ATTACK THEME: SLOW

Notice a theme, here? That's because clients can purposefully (and sometimes inadvertently) affect a DoS on a service simply by filling its send/receive queues slowly. The reason this works is similar to the theory behind SYN flood attacks, where all available queues are filled and thus render the server incapable of accepting/responding until the queues have been emptied. Slow pulls or pushes of content keep data in the web/application server queue and thus "tie up" the resources (RAM) associated with that queue. A web/application server has only so much RAM available to commit to queues, and thus a DoS can be affected simply by using a small number of v e r y slow clients that do little other than tie up resources with what are otherwise legitimate interactions.

While the HTTP GET flood (page flood) is still common (and works well), the "slow" variations are becoming more popular because they require fewer clients to be successful. Fewer clients makes it harder for infrastructure to determine an attack is in progress, because historically flooding using high volumes of traffic is more typical of an attack and solutions are designed to recognize such events. They are not, however, generally designed to recognize what appears to be a somewhat higher volume of very slow clients as an attack.
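To make the "slow" theme a bit more concrete, below is a rough iRule sketch of one way to bound how long a client may take to present a complete request. It is an illustration of the idea only – the 5 second window and the reject action are arbitrary assumptions, and in practice this is more commonly handled with profile timeouts and purpose-built DoS protections:

    when CLIENT_ACCEPTED {
        # give the client a bounded window (5 seconds here) to send a complete request
        set slow_timer [after 5000 { reject }]
    }
    when HTTP_REQUEST {
        # a full set of request headers arrived in time; cancel the pending reject
        if { [info exists slow_timer] } { after cancel $slow_timer }
    }
    when CLIENT_CLOSED {
        # clean up the timer if the connection closes first
        if { [info exists slow_timer] } { after cancel $slow_timer }
    }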
THEMES HELP POINT to a SOLUTION

Recognizing the common themes underlying modern attacks is helpful in detecting the attack and subsequently determining what type of solution is necessary to mitigate such an attack. In the case of flooding, high-performance security infrastructure and policies regarding transaction rates coupled with rate shaping based on protocols can mitigate attacks (a simple rate-limiting sketch follows at the end of this post). In the case of slow consumption of resources, it is generally necessary to leverage a high-capacity intermediary that essentially shields the web/application servers from the impact of such requests, coupled with emerging technology that enables a context-aware solution to better detect such attacks and then act upon that knowledge to reject them.

When faced with a new attack type, it is useful to try to determine the technique behind the attack – regardless of implementation – as it can provide the clues necessary to implement a solution and address the attack before it can impact the availability and performance of web applications. It is important to recognize that solutions only mitigate denial of service attacks. They cannot prevent them from occurring.
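As one concrete example of the "transaction rate" policies mentioned above, here is a minimal iRule sketch that counts requests per client address in the session table and drops clients that exceed a threshold. The 200-request threshold, the 10 second window and the blunt drop action are assumptions for illustration only; real deployments would tune these values and more likely rely on dedicated DoS protection features:

    when HTTP_REQUEST {
        # count requests per client IP over a rolling window using the session table
        set rate_key "req_rate:[IP::client_addr]"
        set req_count [table incr $rate_key]
        # expire the counter if the client goes quiet for 10 seconds
        table timeout $rate_key 10
        # arbitrary example threshold: more than 200 requests in the window looks abusive
        if { $req_count > 200 } {
            drop
        }
    }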
The IPv6 Application Integration Factor

#IPv6 Integration with partners, suppliers and cloud providers will make migration to IPv6 even more challenging than we might think…

My father was in the construction business most of the time I was growing up. He used to joke with us when we were small that there was a single nail in every house that – if removed – would bring down the entire building. Now that's not true in construction, of course, but when the analogy is applied to IPv6 it may be more true than we'd like to think, especially when that nail is named "integration".

Most of the buzz around IPv6 thus far has been about the network; it's been focused on getting routers, switches and application delivery network components supporting the standard in ways that make it possible to migrate to IPv6 while maintaining support for IPv4 because, well, we aren't going to turn the Internet off for a day in order to flip from IPv4 to IPv6. Not many discussions have asked the very important question: "Are your applications ready for IPv6?" It's been ignored so long that many, likely, are not even sure about what that might mean let alone what they need to do to ready their applications for IPv6.

IT'S the INTEGRATION

The bulk of issues that will need to be addressed in the realm of applications when the inevitable migration takes off is in integration. This will be particularly true for applications integrating with cloud computing services. Whether the integration is at the network level – i.e. cloud bursting – or at the application layer – i.e. integration with SaaS such as Salesforce.com or through PaaS services – once a major point of integration migrates it will likely cause a chain reaction, forcing enterprises to migrate whether they're ready or not.

Consider, for example, that cloud bursting assumes a single, shared application "package" that can be pushed into a cloud computing environment as a means to increase capacity. If – when – a cloud computing provider decides to migrate to IPv6 this process could become a lot more complicated than it is today. Suddenly the "package" that assumed IPv4 internal to the corporate data center must assume IPv6 internal to the cloud computing provider. Reconfiguration of the OS, platform and even application layer becomes necessary for a successful migration. Enterprises reliant on SaaS for productivity and business applications will likely be first to experience the teetering of the house of (integration) cards.

"Enterprises are moving to the cloud, according to Yankee Group's 2011 US FastView: Cloud Computing Survey. Approximately 48 percent of the respondents said remote/mobile user connectivity is driving the enterprises to deploy software as a service. This is significant as there is a 92 percent increase over 2010. Around 38 percent of enterprises project the deployment of over half of their software applications on a cloud platform within three years compared to 11 percent today, Yankee Group said in its '2011 Fast View Survey: Cloud Computing Motivations Evolve to Mobility and Productivity.'" -- Enterprise SaaS Adoption Almost Doubles in 2011: Yankee Group Survey

Enterprises don't just adopt SaaS and cloud services, they integrate them. Data stored in cloud-hosted software is invaluable to business decision makers but first must be loaded – integrated – into the enterprise-deployed systems responsible for assisting in analysis of that data. Secondary integration is also often required to enable business processes to flow naturally between on- and off-premise deployed systems.

It is that integration that will likely first be hit by a migration on either side of the equation. If the enterprise moves first, they must address the challenge of integrating two systems that speak incompatible network protocol versions. Gateways and dual-stack strategies – even potentially translators – will be necessary to enable a smooth transition regardless of who blinks first in the migratory journey toward IPv6 deployment. Even that may not be enough. Peruse RFC 4038, "Application Aspects of IPv6 Transition", and you'll find a good number of issues that are going to be as knots in wood to a nail, including DNS, conversion functions between hostnames and IP addresses (implying underlying changes to development frameworks that would certainly need to be replicated in PaaS environments – which, according to a recent report from Gartner, saw a 267% increase in inquiries regarding PaaS this year alone), and storage of IP addresses – whether for user identification, access policies or integration purposes.

Integration is the magic nail; the one item on the migratory checklist that is likely to make or break the success of IPv6 migration. It's also likely to be the "thing" that forces organizations to move faster. As partners, sources and other integrated systems make the move it may cause applications to become incompatible. If one environment chooses an all or nothing strategy to migration, its integrated partners may be left with no option but to migrate and support IPv6 on a timeline not their own.

TOO TIGHTLY COUPLED

While the answer for IPv6 migration is generally accepted to be found in a dual-stack approach, the same cannot be said for Intercloud application mobility. There's no "dual stack" in which services aren't tightly coupled to IP address, regardless of version, and no way currently to depict an architecture without relying heavily on topological concepts such as IP. Cloud computing – whether IaaS or PaaS or SaaS – is currently entrenched in a management and deployment system that tightly couples IP addresses to services. Integration relying upon those services, then, becomes heavily reliant on IP addresses and by extension IP, making migration a serious challenge for providers if they intend to manage both IPv4 and IPv6 customers at the same time. But eventually, they'll have to do it.

Some have likened the IPv4 –> IPv6 transition to the network's "Y2K". That's probably apposite but incomplete. The transition will also be as challenging for the application layers as it will for the network, and even more so for the providers caught between two versions of a protocol upon which so many integrations and services rely. Unlike Y2K we have no deadline pushing us to transition, which means someone is going to have to be the one to pull the magic nail out of the IPv4 house and force a rebuilding using IPv6. That someone may end up being a cloud computing provider as they are likely to have not only the impetus to do so to support their growing base of customers, but the reach and influence to make the transition an imperative for everyone else.

IPv6 has been treated as primarily a network concern, but because applications rely on the network and communication between IPv4 and IPv6 without the proper support is impossible, application owners will need to pay more attention to the network as the necessary migration begins – or potentially suffer undesirable interruption to services.
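On the gateway/dual-stack point above: because a full proxy terminates the client-side connection and opens a separate server-side connection, a dual-stack or IPv6 virtual server can front IPv4-only application servers while the migration is under way. A minimal iRule sketch of steering by client IP version is below; the pool names are hypothetical, and in the simplest case no iRule is needed at all because attaching an IPv4 pool to an IPv6 virtual server already bridges the two:

    when CLIENT_ACCEPTED {
        # steer clients by the IP version they used to reach the virtual server
        if { [IP::version] == 6 } {
            # IPv6 clients land on the farm that has already been migrated
            pool pool_app_v6_ready
        } else {
            pool pool_app_ipv4_legacy
        }
    }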
Do you control your application network stack? You should.

Owning the stack is important to security, but it's also integral to a lot of other application delivery functions. And in some cases, it's downright necessary.

Hoff rants with his usual finesse in a recent posting with which I could not agree more. Not only does he point out the wrongness of equating SaaS with "The Cloud", but points out the importance of "owning the stack" to security:

"Those that have control/ownership over the entire stack naturally have the opportunity for much tighter control over the 'security' of their offerings. Why? Because they run their business and the datacenters and applications housed in them with the same level of diligence that an enterprise would. They have context. They have visibility. They have control. They have ownership of the entire stack."

Owning the stack has broader implications than just security. The control, visibility, and context-awareness implicit in owning the stack provides much more flexibility in all aspects covering the delivery of applications. Whether we're talking about emerging or traditional data center architectures the importance of owning the application networking stack should not be underestimated.

The arguments over whether virtualized application delivery makes more sense in a cloud computing-based architecture fail to recognize that a virtualized application delivery network forfeits that control over the stack. While it certainly maintains some control at higher levels, it relies upon other software – the virtual machine, hypervisor, and operating system – which shares control of that stack and, in fact, processes all requests before they reach the virtual application delivery controller. This is quite different from a hardened application delivery controller that maintains control over the stack and provides the means by which security, network, and application experts can tweak, tune, and exert that control in myriad ways to better protect their unique environment.

If you don't completely control layer 4, for example, how can you accurately detect and thus prevent layer 4 focused attacks, such as denial of service and manipulation of the TCP stack? You can't. If you don't have control over the stack at the point of entry into the application environment, you are risking a successful attack. As the entry point into the application, whether it's in "the" cloud, "a" cloud, or a traditional data center architecture, a properly implemented application delivery network can offer the control necessary to detect and prevent myriad attacks at every layer of the stack, without concern that an OS or hypervisor-targeted attack will manage to penetrate before the application delivery network can stop it.

The visibility, control, and contextual awareness afforded by application delivery solutions also allow the means by which finer-grained control over protocols, users, and applications may be exercised in order to improve performance at the network and application layers. As a full proxy implementation these solutions are capable of enforcing compliance with RFCs for protocols up and down the stack, implementing additional technological solutions that improve the efficiency of TCP-based applications, and offering customized solutions through network-side scripting that can be used to immediately address security risks and architectural design decisions.

The importance of owning the stack, particularly at the perimeter of the data center, cannot and should not be underestimated. The loss of control, the addition of processing points at which the stack may be exploited, and the inability to change the very behavior of the stack at the point of entry comes from putting into place solutions incapable of controlling the stack. If you don't own the stack you don't have control. And if you don't have control, who does?
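As a small illustration of the "network-side scripting" point above, here is a hedged iRule sketch that enforces a couple of basic sanity checks at the point of entry before a request ever reaches an application server. The specific checks, the allowed method list and the 2048-byte URI cap are arbitrary examples, not a recommendation:

    when HTTP_REQUEST {
        # reject anything that is not a plain GET or POST, or that arrives without a Host header
        if { !([HTTP::method] equals "GET" || [HTTP::method] equals "POST") || [HTTP::host] equals "" } {
            reject
            return
        }
        # cap absurdly long URIs, a common fingerprint of overflow probing
        if { [string length [HTTP::uri]] > 2048 } {
            HTTP::respond 414 content "URI too long"
        }
    }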
File Virtualization Performance: Understanding CIFS Create ANDX

Once upon a time, files resided on a local disk and file access performance was measured by how fast the disk head accessed the platters, but today those platters may be miles away; creating, accessing and deleting files takes on a new challenge and products like the F5 ARX become the Frodo Baggins of the modern age. File Virtualization devices are burdened with a hefty task (this is where my Lord of the Rings analogy really begins to play out) of becoming largely responsible for how close your important files are to your fingertips. How fast do you want it to perform? Quite expectedly, "as fast as possible" will be the typical response. File Virtualization requires you to meet the expectations of an entire office – many of whom are working miles away from the data they access, and every single user hates to see a waiting cursor.

To judge the performance of a storage environment we often ask the question, "How many files do you create and how many files do you open?" Fortunately, Microsoft CIFS allows humans to interact with their files over a network and it does so with many unique Remote Procedure Calls (RPCs). One such procedure call, Create ANDX, was initially intended to create new files on a file system but became the de facto standard for opening files as well. While you and I can clearly see an obvious distinction between opening a file and creating a file, CIFS' liberal use of Create ANDX gives us pause, as this one tiny procedure has been overloaded to perform both tasks.

Why is this a problem? Creating a file and opening a file require completely different amounts of work with entirely different results, one of the great challenges of File Virtualization. Imagine if you were given the option between writing a book like The Fellowship of The Ring or simply opening the one already created. Which is easier? Creating a file may require metadata about the file (security information, other identifiers, etc.) and allocating sufficient space on disk takes a little time. Opening a file is a much faster operation compared to "create" and, often, will be followed by one or more read operations.

Many storage solutions, EMC and Network Appliance come to mind, have statistics to track just how many CIFS RPCs have been requested by clients in the office. These statistics are highly valuable when analyzing the performance of a storage environment for File Virtualization with the F5 ARX. Gathering the RPC statistics over a fixed interval of time allows easier understanding of the environment, but one key statistic, Create ANDX, leaves room for improvement… this is the "all seeing eye" of RPCs because of its evil intentions. Are we creating 300 files per second or simply opening them? Perhaps it's a mix of both and we've got to better understand what's going on in the storage network.

When we analyze a storage environment we put additional focus on the Create ANDX RPC and utilize a few other RPCs to try to guess what the client's intentions are so we can size the environment for the correct hardware. In a network with 300 Create ANDX procedures a second, we would then look into how many read RPCs we can find compared to the write RPCs and attempt to judge what the client is intending to perform as an action. For example, a storage system with 300 "creates" that then performs 1200 reads and five writes is probably spending much of its time opening files, not creating them. Logic dictates that a client would open a file to read from it and not create a 0-byte file and read emptiness, which just doesn't make much sense.

Tracking fifteen minute intervals of statistics on your storage device, over a 24-hour period, will give you a bit of understanding as to which RPCs are heavily used in the environment (a 48-hour sample will yield even more detailed results). Take a bit of time to read into the intentions of Create ANDX and try to understand how your clients are using the storage environment: are they opening files or are they creating files?

Just as creating files on storage systems is a more intensive process compared to the simple open action, the same can be said for the F5 ARX. The ARX will also track metadata for newly created files for its virtualization layer, and the beefier the ARX hardware, the more file creations can be done in a short interval of time. Remember, while it's interesting and oftentimes impressive to know just how many files are virtualized behind an F5 ARX or sitting on your storage environment, it's much more interesting when you know how many are actually actively accessed. With a handful of applications, multiple protocols, dozens of RPCs, hundreds of clients and several petabytes of information, do you know how your files are accessed?
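The open-versus-create judgment described above is qualitative, but the idea can be sketched in code. The little Tcl proc below is a toy illustration only – the 10:1 read-to-write ratio is an arbitrary threshold chosen for the example, not a rule from this post or from any vendor documentation:

    # rough classification of an interval's Create ANDX calls based on the read/write mix
    proc classify_create_andx {creates reads writes} {
        if { $creates == 0 } {
            return "no Create ANDX activity"
        }
        # reads vastly outnumbering writes suggests the "creates" were really opens for read
        if { $reads > ($writes + 1) * 10 } {
            return "mostly opens ($creates Create ANDX, $reads reads, $writes writes)"
        }
        return "likely a mix of true creates and opens"
    }

    # example from the numbers above: 300 creates, 1200 reads, 5 writes
    puts [classify_create_andx 300 1200 5]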
HTTP: The de facto application transport protocol of the Web

When the OSI defined its model it included a transport layer which was supposed to handle end-to-end connections and address communication reliability. In the early days of the web HTTP sat at the application layer (layer 7) and rode atop TCP, its transport layer. An interesting thing happened on the way to the 21st century; HTTP became an application transport layer. Many web applications today use HTTP to transport other application protocols such as JSON and SOAP and RSS. Applications now "speak" using a variety of languages to communicate, but underlying them all is HTTP.

This is not the same as tunneling a different application through port 80 simply because almost all HTTP traffic flows through that port and it is therefore likely to be open on the corporate firewall. Those applications that simply tunnel through port 80 use TCP and their own application layer protocols, they're essentially just pretending to be HTTP by using the same port to fool firewalls into allowing their traffic to pass unhindered. No, this is different. This is the use of HTTP to wrap other application protocols and transport them. The web server interprets the HTTP and handles sessions and cookies and parameters, but another application is required to interpret the messages contained within because they represent the protocol of yet another application.

In today's world the availability of exponentially expanding collaboration and syndication applications, all requiring different applications, is driving the need for smarter application delivery solutions to ensure availability, reliability, and scalability. Simple layer 4 (TCP) load balancing is not enough, neither is load balancing based on layer 7 (HTTP). Load balancing requests based on TCP or HTTP doesn't address the need to distribute application requests because the app is no longer HTTP, it's something else entirely. HTTP has been relegated to the status of application transport protocol, and that means in order to intelligently deliver an application we have to dig even deeper than layer 7. We've got to get inside.

The problem is, of course, that there are no standards beyond HTTP. My JSON-based Web 2.0 application looks nothing like your SOAP-based Web 2.0 application. And yet a single solution must be able to adapt to those differences and provide the same level of scalability and reliability for me as it does you. It has to be extensible. It has to provide some mechanism for adding custom behavior and addressing the specific needs of application protocols that are unknown at the time the solution is created. This is an important facet of application delivery that is often overlooked. Applications aren't about HTTP anymore, they're about undefined and unknowable protocols. An application delivery solution can't distribute application load across servers unless it can understand which application it's supposed to be managing. And because HTTP connections are artificially limited by browsers, multiple application protocols are using the same HTTP connections over which to exchange data. That means an application delivery solution has to be able to dig into the application protocol and figure out where that request should be directed, and how to treat it, and what policies to apply.

Application delivery today is about the message, not the protocol, and the message is undefined until it's created by a developer. There's a lot of traffic out there that's just HTTP, as it was conceived of and implemented years ago. But there's a growing amount of traffic out there that's more than HTTP, that's relegated this ubiquitous protocol to an application transport layer protocol and uses it as such to deliver custom applications that use protocols without RFCs, without standards bodies, without the W3C. If your application delivery solution doesn't offer a way that easily allows you to dig into the real application protocols, but instead relegates you to making load balancing and routing decisions based solely on HTTP, you need to reconsider your solution. HTTP is the de facto application transport protocol today, but because it's so often used this way we have to get smarter about how we load balance and distribute those messages riding on HTTP if we want to architect smarter, greener, more efficient architectures.

Imbibing: Coffee
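As a follow-on illustration of the "dig into the message" idea above, here is a minimal iRule sketch that routes requests by the application protocol being carried over HTTP rather than by HTTP itself. The Content-Type checks and pool names are illustrative assumptions, not a prescription:

    when HTTP_REQUEST {
        set ctype [string tolower [HTTP::header "Content-Type"]]
        if { $ctype contains "json" } {
            # JSON-based API traffic goes to its own farm
            pool pool_json_services
        } elseif { $ctype contains "soap" || $ctype contains "xml" } {
            # SOAP/XML messages get different treatment
            pool pool_soap_services
        } else {
            # everything else is treated as plain web traffic
            pool pool_web_default
        }
    }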
I read about a "new" TCP flaw that, according to C|Net News, Related Posts puts Web sites at risk. There is very little technical information available; the researchers who discovered this tasty TCP tidbit canceled a conference talk on the subject and have been sketchy about the details of the flaw when talking publicly. So I did some digging and ran into a wall of secrecy almost as high as the one Kaminsky placed around the DNS vulnerability. Layer 4 vs Layer 7 DoS Attack The Unpossible Task of Eliminating Risk Soylent Security So I hit Twitter and leveraged the simple but effective power of asking for help. Which resulted in several replies, leading me to Fyodor and an April 2000 Bugtraq entry. The consensus at this time seems to be that the wall Kaminsky built was for good reason, but this one? No one's even trying to ram it down because it doesn't appear to be anything new. Which makes the "oooh, scary!" coverage by mainstream and trade press almost amusing and definitely annoying. The latest 'exploit' appears to be, in a nutshell, a second (or more) discovery regarding the nature of TCP. It appears to exploit the way in which TCP legitimizes a client. In that sense the rediscovery (I really hesitate to call it that, by the way) is on par with Kaminsky's DNS vulnerability simply because the exploit appears to be about the way in the protocol works, and not any technical-based vulnerability like a buffer overflow. TCP and applications riding atop TCP inherently trust any client that knocks on the door (SYN) and responds correctly (ACK) when TCP answers the door (SYN ACK). It is simply the inherent trust of the TCP handshake as validation of the legitimacy of a client that makes these kinds of attacks possible. But that's what makes the web work, kids, and it's not something we should be getting all worked up about. Really, the headlines should read more like "Bad people could misuse the way the web works. Again." This likely isn't about technology, it's about trust, and the fact that the folks who wrote TCP never thought about how evil some people can be and that they'd take advantage of that trust and exploit it. Silly them, forgetting to take into account human nature when writing a technical standard. If they had, however, we wouldn't have the Internet we have today because the trust model on the Web would have to be "deny everything, trust no one" rather than "trust everyone unless they prove otherwise." So is the danger so great as is being portrayed around the web? I doubt it, unless the researchers have stumbled upon something really new. We've known about these kinds of attacks for quite some time now. Changing the inherent nature of TCP isn't something likely to happen anytime soon, but contrary to the statements made regarding there being no workarounds or solutions to these problem, there are plenty of solutions that address these kinds of attacks. I checked in with our engineers, just in case, and got the low-down on how BIG-IP handles this kind of a situation and, as expected, folks with web sites and applications being delivered via a BIG-IP really have no reason to be concerned about the style of attack described by Fyodor. If it turns out there's more to this vulnerability, then I'll check in again. But until then, I'm going to join the rest of the security world and not worry much about this "new" attack. 
In the end, it appears that the researchers are not only exploiting the trust model of TCP, they're exploiting the trust between people; the trust that the press has in "technology experts" to find real technical vulnerabilities and the trust that folks have in the trade press to tell them about it. That kind of exploitation is something that can't be addressed with technology. It can't be fixed by rewriting a TCP stack, and it certainly can't be patched by any vendor.200Views0likes2CommentsHow Coding Standards Can Impair Application Performance
One of the "real world" lessons rarely taught in the university setting is that in the "real world" you're going to have to follow coding standards. Back in the day, when I was allowed to code, I often railed against some of those coding standards on the basis that they impaired application performance. Anyone with a firm grounding in computer science knows that the introduction of a local scope necessarily means more work (and thus memory and cycles consumed) to set up the stack: copying variables, pushing parameters, etc... That means that a conditional statement with just one statement unnecessarily introducing a local scope degrades performance. But that price is often willingly (or unwittingly) paid in the enterprise because of coding standards which dictate format of code. Sure, modern compilers often optimize that code anyway but are you sure yours does? Have you checked? Does it require a flag and has that been set for all applications? The same problems exist with certain language constructs, such as if-then-else versus switch versus the "{condition} ? {true statement} : {false statement}" construct. Enterprise coding standards meant to ensure readability and thus maintainability of the code require one over the other, regardless of which construct may be the most efficient in terms of execution (hint: it's the "? :" form). And I won't even go into the inefficiencies introduced by layering abstraction upon abstraction over standard data structures. But it's likely that somewhere in the enterprise there is a document that says "you will use a Vector and not a primitive array" because it's easier, better documented, and less likely to introduce an array of potential vulnerabilities into the application. My apologies for the pun. Seriously, I feel bad for that one. Needless to say, I lost my arguments and it's likely that most folks today would as well. So what's an architect or developer to do when poor application performance has users and management screaming at them? There's not a lot you can do. At some point it becomes impossible to optimize your application further without destroying manageability and readability. The long term benefits of coding standards for web applications generally outweigh the gains in performance. So you have to start looking elsewhere for ways to make that application faster - like the underlying protocol stack and network connectivity. But developers don't generally have control over those aspects of an application, either. Unfortunately, neither do web administrators or network architects. That's when it becomes necessary to look at external solutions, and that's where application acceleration and optimization can help. Technologies like application acceleration and optimization can't address the inefficiencies in code introduced by data structure choices or coding standards-imposed use of specific constructs, but they can help offset the degradations introduced by those coding standards by improving the performance of network and application protocols and employing a variety of data reduction techniques. By taking advantage of application acceleration technologies, you can improve the performance of your application without having to sacrifice coding standards or, if you're willing to do so, rewrite the application. 
Application acceleration and optimization solutions can improve the performance of applications by modifying the behavior of underlying transport and network protocols, reducing the amount of data being exchanged, and optimizing network connections. These technologies improve application performance transparently, without the need to re-architect applications or its network infrastructure.191Views0likes3Comments
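As one small, hedged example of the "data reduction" techniques mentioned above, here is an iRule sketch that selectively enables compression for compressible text responses while leaving already-compressed content (images, archives) alone. It is an illustration of the idea only; in practice the same effect is normally achieved with an HTTP compression profile rather than an iRule, and the Content-Type checks are arbitrary examples:

    when HTTP_RESPONSE {
        set ctype [string tolower [HTTP::header "Content-Type"]]
        # compress text-ish payloads; skip content that is effectively already compressed
        if { $ctype starts_with "text/" || $ctype contains "json" || $ctype contains "xml" } {
            COMPRESS::enable
        } else {
            COMPRESS::disable
        }
    }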