Application Acceleration Manager
Hourly Licensing Model – F5 delivers in AWS Marketplace
#cloud #SDAS #AWS And you can try it out for free... June 30, 2014 (The Internet) Today F5 Networks, which delivers solutions for an application world, announced it had completed jumping through the hoops necessary to offer an hourly (utility) billing model for its BIG-IP VE (Virtual Edition) in the Amazon Web Services (AWS) cloud. Not only has F5 announced availability of the industry's leading application delivery services for deployment in AWS with a utility billing model, but the offering includes a variety of options organizations can take advantage of:
- Three sizes of BIG-IP VE: 25 Mbps, 200 Mbps and 1 Gbps
- Two BYOL (Bring Your Own License) options as well as a modular option
- A free 30-day trial offering with Best licensing and 200 Mbps of throughput
BIG-IP VE for AWS includes not only the industry's most trusted load balancing service but also the following capabilities and Software Defined Application Services (SDAS) designed to protect and enhance application security and performance:
- An integrated WAF (Web Application Firewall)
- DDoS protection
- Caching, compression and acceleration
- Advanced load balancing algorithms, including least connections and weighted round robin
Additionally, BIG-IP VE for AWS supports the use of iApps for rapid provisioning in the cloud or on-premises. iApps are application-driven service templates that encapsulate best-practice configurations as determined by lengthy partnerships with leading application providers as well as hundreds of thousands of real deployments across a broad set of verticals, including 94% of the Fortune 50. The availability of BIG-IP VE for AWS further supports F5 Synthesis' vision to leave no application behind, regardless of location or service requirements. Through Synthesis' Intelligent Service Orchestration, organizations can enjoy seamless licensing, deployment and management of F5 services across on-premises and cloud-based environments. The availability of BIG-IP VE for AWS extends the F5 service fabric into the most popular cloud environment today, and gives organizations the ability to migrate applications to the cloud without compromising on security or performance requirements. To celebrate this most momentous occasion you can try out BIG-IP in the AWS Marketplace for free (for 30 days) or receive a $100 credit from AWS by activating participating products between July 1 and July 31, 2014. These offers apply to the F5 BIG-IP Virtual Edition for AWS 200 Mbps Hourly (Best) through the AWS Marketplace:
30 Day Free Trial Available: The BIG-IP Virtual Edition is an application delivery services platform for the Amazon Web Services cloud. From traffic management and service offloading to acceleration and security, the BIG-IP Virtual Edition delivers agility - and ensures your applications are fast, secure, and available. Options include BIG-IP Local Traffic Manager, Global Traffic Manager, Application Acceleration Manager, Advanced Firewall Manager, Access Policy Manager, Application Security Manager, and SDN Services and Advanced Routing, including support for AWS CloudHSM for cryptographic operations and key storage.
AWS Offer (Credit): Customers who activate a free trial for any participating product (that includes F5 BIG-IP) between July 1 and July 31, 2014 and use the product for a minimum of 120 hours before August 31, 2014 will receive a $100 AWS Promotional Credit. Limit two $100 AWS Promotional Credits per customer; one per participating software seller.
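If you'd rather script the hourly deployment than click through the Marketplace console, here's a minimal sketch using boto3. Treat it as an illustration of the general flow rather than F5's documented procedure: the AMI ID, instance type, key pair and network IDs below are placeholders (hourly Marketplace AMIs differ by region and BIG-IP version), and your AWS account must have accepted the Marketplace subscription before the launch will succeed.

```python
# Minimal sketch: launching a BIG-IP VE hourly (utility-billed) instance from
# an AWS Marketplace AMI with boto3. All IDs below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumption: pick your region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",       # placeholder: BIG-IP VE hourly AMI for your region
    InstanceType="m3.xlarge",               # placeholder: size to the licensed throughput tier
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",                   # placeholder: existing EC2 key pair
    SubnetId="subnet-0123456789abcdef0",    # placeholder: management subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder: allow HTTPS/SSH to management
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "bigip-ve-hourly"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```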
For more information: F5 Synthesis | F5 Solutions Available in the AWS Marketplace | F5 BIG-IP Virtual Editions
Whiteboard Wednesday: Basic F5 BIG-IP Nomenclature
In this neck of the woods, we have historically focused on deep dives into the programmability features of the BIG-IP. We break from this mold occasionally, and we're constantly requesting feedback on how to better meet the needs of the community. In recent surveys, both online and in person at conferences, we are getting more requests for the low-hanging fruit: the basic nuts and bolts of how the product works. To an extent we have done this with some of our more popular series of articles, like the SSL and TCP profiles, the basics of F5 BIG-IP Application Security Manager, and our introduction to F5 BIG-IP Application Acceleration Manager (formerly WebAccelerator). I say all that to say... we hear you. And we want to start with a bare-bones understanding of the product nomenclature. In this video, Jason Rahm and John Wagnon discuss some of the foundational objects of the BIG-IP: interfaces, VLANs, self IPs, nodes, pools, and virtual servers. This video is an overview of these topics; for details, please see the resources below, and for a quick scripted illustration of how these objects fit together, see the sketch after the resource list. So... what other basic nomenclature would you like us to talk about? Route domains? Auto-lasthop? Host/TMM relationship? TMOS fundamentals? Drop a comment below and let us know!
Resources: Self IP Vlans & Vlan Groups Manual Sample Implementation with Link Aggregation Nodes Pools Virtual Servers
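To put the nomenclature into something you can poke at, here's a minimal sketch that creates the same chain of objects (a VLAN, a self IP, a pool with members, and a virtual server) through the iControl REST API from Python. The management address, credentials and object values are placeholders, and the payload attribute names are assumptions that can vary between TMOS versions, so read it as an illustration of how the objects relate rather than a copy-and-paste configuration.

```python
# Minimal sketch: creating the basic BIG-IP objects (VLAN, self IP, pool,
# virtual server) via iControl REST. Addresses and credentials are placeholders,
# and payload attribute names should be checked against your TMOS version.
import requests

BIGIP = "https://192.0.2.10"   # placeholder management address
session = requests.Session()
session.auth = ("admin", "admin")   # placeholder credentials
session.verify = False              # lab only; use proper certificates in production

def create(path, payload):
    r = session.post(f"{BIGIP}/mgmt/tm/{path}", json=payload)
    r.raise_for_status()
    return r.json()

# A VLAN groups one or more interfaces into a layer 2 segment.
create("net/vlan", {"name": "external", "interfaces": [{"name": "1.1", "untagged": True}]})

# A self IP gives the BIG-IP its own address on that VLAN.
create("net/self", {"name": "external_self", "address": "10.0.0.245/24", "vlan": "external"})

# A pool holds the back-end members (node address + service port) and a health monitor.
create("ltm/pool", {"name": "web_pool", "monitor": "http",
                    "members": [{"name": "10.0.10.10:80"}, {"name": "10.0.10.11:80"}]})

# A virtual server is the listener clients actually hit; it ties a destination to the pool.
create("ltm/virtual", {"name": "vs_web", "destination": "10.0.0.100:80",
                       "ipProtocol": "tcp", "pool": "web_pool",
                       "profiles": [{"name": "http"}],
                       "sourceAddressTranslation": {"type": "automap"}})
```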
F5 Synthesis: Your gateway to the future (of HTTP)
#SDAS #HTTP #webperf #SSL De facto standards can be as difficult to transition off of as official ones. If you haven't heard about HTTP 2.0, it's time to start paying attention. It is anticipated that in November the latest version of the specification will become "the standard" for applications. It includes enhancements designed to improve the security and performance of web applications, which have become critical strategic components to just about every organization on the planet. Go ahead, name an organization that doesn't rely on at least one web-based application to conduct business today. Exactly. Performance and security being imperatives, along with the presence of applications, means that HTTP 2.0 should be a welcome addition to the family of Internet protocols. But it will likely be met with some amount of trepidation by those tasked with supporting it on the data center side of applications, because one of the downsides of updating standard protocols after so many years (HTTP 1.1 was ratified in RFC 2616 in 1999) is that they're rarely compatible. That's because in technology years, those 15 years are more like 75 years. Consider for a moment IPv6, which was officially standardized way back in 1995 (RFC 1883). Yes, I said 1995. Before the great dot bomb. Before Web 2.0. Before mobile apps. And how's that been going for us? Well, as of May 2014 more than 96% of all Internet traffic was still carried via IPv4. Go ahead, read that again because you're right - a 4% adoption rate over nearly 20 years is somewhat hard to swallow, isn't it? But, you might think, IP affects everything. We're only talking about apps here. And web apps, at that. Well, let's consider that for a moment. According to our data, 65% of all apps are delivered via HTTP right now. In other words, HTTP is pretty darned important to app delivery, and it'd be pretty hard to convince someone to upgrade all the things that need upgrading in order to support HTTP 2.0 (particularly with its requirement for encryption via SSL or TLS). And yet major browsers (and consumer demand for speed, more speed and even MOAR SPEED) are already pushing adoption by broadly supporting SPDY (the protocol upon which HTTP 2.0 is based and which is the primary cause behind compatibility headaches). According to this site, which tracks SPDY adoption across browsers, all major browsers already have at least partial (if not full) support for SPDY. They're ready to go. The app side? Not so much. That's where an app gateway comes into play.
App Gateway: Bridging the Old and the New
Like IPv6, the answer to the conundrum of transitioning from one protocol to another is a gateway. In the case of HTTP, it's an app gateway, because HTTP is an app layer protocol. In the latest release of the ADC platform on which the F5 Synthesis High Performance Services Fabric is built, we've included both SPDY 3.1 and HTTP 2.0 support, enabling a gateway architectural approach to supporting the latest (soon-to-be) standard and the existing, more prominent one. This architectural feat is accomplished by way of BIG-IP's full proxy architecture, which lets our ADC speak one version of a protocol on the outside (to the client) and another on the inside (to the app). But what about all that security stuff, you might ask. The requirement for SSL and TLS is as disruptive as the changes to the core protocol, after all. You're right, it is, but again - the nature of being a full proxy means we can support SSL or TLS on the outside and plain old HTTP on the inside, sans encryption.
While some organizations require end-to-end encryption of all traffic, those that don't will benefit from the ability to leverage client-side (outside) encryption without doing so on the inside (server side), where lots of Layer 4-7 services may need visibility into traffic to do their respective jobs. Using a gateway approach also enables a mix of HTTP 2.0 and HTTP 1.x on the inside (server side). That means organizations can take a transitory approach to adoption of the latest app protocol, moving if and when it seems most prudent based on upgrade and refresh cycles, not standards body meeting schedules. The performance and security (and let's not forget business) benefits of moving to HTTP 2.0, with its SSL/TLS requirements and improvements in core transport of data between client and server, are worth exploring. But it's understandable that a protocol as entrenched as HTTP 1.x is not easily ripped out and replaced with something new. Taking a gateway approach to adoption enables organizations to support the old while exploring the new, and to make sure that consumers and employees using the latest and greatest browsers will be able to enjoy improved performance and productivity. A rough configuration sketch of this gateway pattern follows the resources below.
Additional Resources: F5 Synthesis | Demo of F5 HTTP 2.0 Gateway with Sr. Product Manager Dawn Parzych @ Velocity 2014
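As a rough illustration of the gateway pattern described above, the sketch below uses iControl REST from Python to build an HTTPS virtual server that terminates TLS and HTTP 2.0 on the client side while speaking plain HTTP 1.x to the pool on the server side. The object names, certificate and key names, and payload attributes are assumptions (and HTTP 2.0 profile availability depends on your BIG-IP version), so verify the details against your release's documentation before relying on it.

```python
# Minimal sketch: an HTTP/2 "gateway" virtual server -- TLS + HTTP/2 toward the
# client, plain HTTP 1.x toward the pool members. Names, cert/key and payload
# attributes are assumptions; check your TMOS version's documentation.
import requests

BIGIP = "https://192.0.2.10"   # placeholder management address
session = requests.Session()
session.auth = ("admin", "admin")   # placeholder credentials
session.verify = False              # lab only

def create(path, payload):
    r = session.post(f"{BIGIP}/mgmt/tm/{path}", json=payload)
    r.raise_for_status()

# Client-side SSL profile: HTTP/2 requires TLS toward the browser.
create("ltm/profile/client-ssl",
       {"name": "clientssl_app", "defaultsFrom": "clientssl",
        "cert": "app.example.com.crt", "key": "app.example.com.key"})  # placeholder cert/key

# HTTP/2 profile layered on top of the standard HTTP profile.
create("ltm/profile/http2", {"name": "http2_app"})

# HTTPS virtual server: HTTP/2 + TLS face the clients; the pool (assumed to
# already exist) stays on HTTP 1.x on the server side.
create("ltm/virtual",
       {"name": "vs_web_https", "destination": "10.0.0.100:443",
        "ipProtocol": "tcp", "pool": "web_pool",
        "profiles": [{"name": "http"},
                     {"name": "http2_app"},
                     {"name": "clientssl_app", "context": "clientside"}],
        "sourceAddressTranslation": {"type": "automap"}})
```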
The transition to HTTP/2: considering it, preparing for it, making it happen
HTTP/2 is now a standard, with support built into modern browsers. The latest versions of web servers also offer compatibility with this evolution. The key point to remember is that HTTP/2 accelerates the delivery of web content while maintaining confidentiality through SSL. One of the benefits for developers and content providers is the ability to see what this protocol brings without having to rework their entire infrastructure. Demonstrations clearly show the gains in a browser on a laptop, and the improvement is even more noticeable on mobile platforms. TMOS version 12.0 can behave as an HTTP/2 server toward clients while continuing to request content from the back-end servers over HTTP/1.0 and HTTP/1.1. For reasons to take an interest in this protocol, several sources of information can help: Making the journey to HTTP/2 | HTTP/2 home
Why think about HTTP 2.0?
#webperf #HTTP #mobile The problem with web application performance is directly related to the increasing page size and number of objects comprising pages today. Increasing corporate bandwidth (the pipe between the Internet and the organization) doesn't generally help. The law of diminishing returns is at work; at some point more bandwidth (like more hardware) just isn't enough, because the problem isn't how fast bits are traveling, but how many times bits are traversing the network. And for some clients - like mobile - it doesn't matter. They're getting 1-4 Mbps and there's nothing you can do to change that. The problem is that HTTP isn't utilizing TCP efficiently, and thus the round trip - the time it takes for clients to talk to the application - is almost always the real culprit when looking for the source of web application performance issues. Especially for mobile clients, where a round trip carries with it an average latency of 150-300 ms. More efficient use of TCP, better connection management, compression and other acceleration techniques are a must if we're going to really address web application performance. And that's what HTTP 2.0 is designed to do.
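To make the round-trip math concrete, here's a back-of-the-envelope sketch comparing the request waterfall of HTTP 1.1 (six parallel connections, one request in flight per connection) with a fully multiplexed connection. The object count and latency are illustrative assumptions, not measurements, and the model ignores bandwidth, server think time and TCP slow start; it only shows how the round trips stack up.

```python
# Back-of-the-envelope model: time spent on round trips fetching page objects.
# Assumptions (not measurements): 100 objects, 200 ms mobile RTT, 6 parallel
# HTTP 1.1 connections with one request in flight each, versus one multiplexed
# connection that can request everything concurrently.
import math

objects = 100            # objects on the page (assumption)
rtt = 0.200              # seconds per round trip on a mobile link (assumption)
parallel_connections = 6

# HTTP 1.1: each connection handles one request per round trip, so the waterfall
# needs roughly ceil(objects / connections) serialized round trips.
http11_rounds = math.ceil(objects / parallel_connections)
http11_time = http11_rounds * rtt

# Multiplexed (SPDY / HTTP 2.0): all requests share one connection concurrently,
# so the request phase costs roughly a single round trip.
multiplexed_time = 1 * rtt

print(f"HTTP 1.1 : ~{http11_rounds} round trips = {http11_time:.1f} s")
print(f"HTTP 2.0 : ~1 round trip  = {multiplexed_time:.1f} s")
```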
Web Accelerator / AAM – the IBR feature
Following up on the introductory video on the WA / AAM module (https://devcentral.f5.com/s/articles/dcouverte-de-web-accelerator-aam-7341), here is a video presenting one of the main features of the WA / AAM module. This feature, named IBR (Intelligent Browser Referencing), optimizes the client-side cache. The BIG-IP rewrites the cache-control information and modifies the web content so that browsers keep objects in their cache much longer than the web server originally intended. A rewriting mechanism makes it possible to avoid false positives in the cache (serving stale objects). Here are the explanation and the demonstration in pictures.
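To give a flavor of the idea behind IBR (without claiming this is how AAM implements it), here's a small sketch: references to static objects are rewritten to carry a fingerprint of the object's content, which lets the response be cached for a very long time; when the object changes, its fingerprint (and therefore its URL) changes, so the browser fetches the new version instead of serving a stale copy.

```python
# Conceptual sketch of "Intelligent Browser Referencing"-style cache extension:
# embed a content fingerprint in the object URL so the response can be cached
# for a very long time, and a changed object automatically gets a new URL.
# This illustrates the general technique, not AAM's actual implementation.
import hashlib
import re

def fingerprint(content):
    return hashlib.sha1(content).hexdigest()[:12]

def rewrite_references(html, objects):
    """Rewrite src/href attributes for known objects to fingerprinted URLs."""
    def repl(match):
        attr, path = match.group(1), match.group(2)
        if path in objects:
            return f'{attr}="/cache/{fingerprint(objects[path])}{path}"'
        return match.group(0)
    return re.sub(r'(src|href)="([^"]+)"', repl, html)

def cache_headers_for(path):
    # Fingerprinted URLs never change content, so they can be cached ~forever;
    # the HTML document itself keeps being revalidated.
    if path.startswith("/cache/"):
        return {"Cache-Control": "public, max-age=31536000"}
    return {"Cache-Control": "no-cache"}

if __name__ == "__main__":
    objects = {"/img/logo.png": b"...png bytes...", "/js/app.js": b"...js bytes..."}
    page = '<img src="/img/logo.png"><script src="/js/app.js"></script>'
    print(rewrite_references(page, objects))
    print(cache_headers_for("/cache/abc123def456/img/logo.png"))
```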
Discovering Web Accelerator / AAM
Web Accelerator, renamed AAM as of release 11.4, is a module that is unfortunately little known among our customers and partners. Yet it has plenty to offer. Its role is simple: it is a reverse proxy cache and optimization engine. Here is a non-exhaustive list of its capabilities and features:
- Caching of the servers' web content on the BIG-IP's disk and in its RAM
- Rewriting of Cache-Control headers and of the web content to optimize the cache on the client's browser
- Optimization of the HTML source code to remove any superfluous code (a small sketch of this idea follows below)
- Optimization of images through conversion or compression
- And much more...
This video will help you understand how it works and, above all, enable you to create your first WA / AAM optimization policy. You will then find the other videos about WA / AAM under the "france" tag.
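As a flavor of the HTML-trimming capability listed above, here's a tiny, deliberately naive sketch that strips comments and collapses whitespace. It illustrates the general idea only, not AAM's actual HTML optimization engine, and a regex approach like this would need care around things like pre blocks and inline scripts.

```python
# Conceptual sketch of removing "superfluous code" from HTML (comments, extra
# whitespace) of the kind an optimization proxy can perform. General idea only,
# not AAM's actual HTML optimization engine.
import re

def minify_html(html):
    html = re.sub(r"<!--.*?-->", "", html, flags=re.DOTALL)  # drop HTML comments
    html = re.sub(r">\s+<", "><", html)                      # collapse inter-tag whitespace
    return re.sub(r"[ \t]{2,}", " ", html).strip()           # squeeze runs of spaces/tabs

if __name__ == "__main__":
    page = """
    <html>  <!-- banner markup -->
      <body>
        <h1>   Hello   </h1>
      </body>
    </html>
    """
    print(minify_html(page))
```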
Caching, CDNs and Optimization: a bit like a trip to the store.
We know that people like fast websites. So how do you speed yours up? Recently I've been having conversations with customers and my colleagues in the field about caching appliances, content delivery networks (CDNs), and web application optimization. What's the best approach? Caching appliances place the most commonly requested objects in a cache upstream of the origin web servers. Objects (like images, JavaScript files, etc.) requested by browsers can be served straight from cache without ever hitting the backend servers. CDNs go a step further: they place commonly requested objects into multiple caches around the world with the aim of positioning the objects as near as possible to the browsers. Web application optimization solutions use a range of tools and techniques to deliver applications more effectively, by doing things like shrinking content and manipulating browser caches (and there I've just oversimplified the life's work of a number of F5'ers, but this blog is all about trying to make things simple, so bite me, Dawn Parzych). After a long and technical debate I've decided it all comes down to a trip to a grocery store. Hang with me, readers, I'm 70% sure this is all going to make sense. Caching appliances simply put the most common things you need at the edge of the parking lot. It's all the same size as it was in the store, but it's a little bit closer and you don't clog up the aisles. You still have to head back to the store after a few hours to check your food is still in date. Good for the store, not so useful for you. CDNs deliver your shopping most of the way home, but then charge the store every time you check your food is still fresh. Good for you, not so hot for the store, which gets faced with extra charges. Plus the shopping still takes up as much space as it did before. Web application optimization tools (like Application Acceleration Manager) shrink the size of your shopping, remove the excess packaging you don't need and then let you know when it's out of date. More of your shopping stays fresh for longer, and it can even make your drive home faster. Just as I've tried to simplify some of the choices for web application acceleration, we're shortly going to release a new reference architecture to make actually implementing web application optimization just as easy.
Top5 06/23/2014
Every day that I get to write a Top5 post feels like an auspicious day, as it is rather one of my favorite things to write, I must admit. This is partly because I get to dig into all the cool stuff that everyone else has been up to and posting with the ever lively DevCentral community. It's partly because I get to write about exactly what I want, not that, you know, I'm a control freak or anything. And it's partly because there is SO MUCH COOL STUFF happening to write about. Pardon the shouty caps and all, but man, technology is just freaking cool sometimes. So, clearly in fan boy mode (and not in the slightest ashamed, athankyouverymuch), I bring to you, with excitement, this week's Top5:
From the University of the Obvious: Faster Applications are Better
https://devcentral.f5.com/s/articles/from-the-university-of-the-obvious-faster-applications-are-better
With what is very possibly the best title in the history of the Top5, I simply could not resist this piece by Robert Haynes. I'm sure precisely zero of you are shocked by this groundbreaking revelation upon which he reflects: Faster Applications are Better. Take a moment to absorb that revolutionary data point. Ready? Okay, now in all seriousness, go check out this tongue-in-cheek, highly entertaining post. He really does hit a particularly annoying nail on the head. Why on earth are all of these reports out there showing nowhere near enough detail to matter, but rather loudly exclaiming "ZOMGZ! YOU GAIZ! FASTER IS BETTER!!" as if we hadn't figured that out. I prefer rapid downloads, rapid restaurant service, and the pedal on the right. None of this is shocking, I would think. What it really comes down to is the how. How do you make things faster? How do you get the improvements you seek? That's the meat you're looking to chew, and Robert and Dawn are looking to team up to provide just such sustenance. I'm excited to see what they come up with, as there's a host of accelerating to be done within BIG-IP. Stay tuned to see what they pump out, and rest assured it will be more than "Acceleration is good" advice. At least, it better be, or no more beer for Robert.
20 Lines or Less #75: URIs, URIs and More URIs
https://devcentral.f5.com/s/articles/20-lines-or-less-75-uris-uris-and-more-uris
URIs are a thing. They're a thing on "the web". They're a thing on "the web" that gets mucked with quite a lot, actually. Therefore being able to do said mucking to said thing on said "web" in a rather robust and rapid fashion could most likely be characterized as "a good thing". This edition of the 20LoL shows three handy ways to do just that kind of thing. Also, in unrelated news, I like "quotes". This edition of the 20LoL is the first in a targeted attack of hawesome (no, auto-correct, I did not mean 'awesome'). By focusing on a single type of operation I'm hoping to make these a little more targeted at particular groups of users / community members and perhaps even easier to historically search to find examples of what you'd like to do. Handy? Let me know whether you're team #singletopic20lol or #randomness20lol and guide my experiment. Otherwise expect to see future installments similarly guided towards a single topic until more data can be gathered. For science!
Devops: The Operational Amplifier
https://devcentral.f5.com/s/articles/devops-the-operational-amplifier
What's this? A post that is a confluence of electrical engineering concepts and Devops goodness? Surely this must be on the Top5!
Sprinkle in a little bit of MacVittie goodness and you've got a winner. Herein lies an excellent depiction of precisely why Devops is such a powerful and important movement in modern IT-driven businesses. I am immediately in love with the term "Operational Amplifier" as an attempt to describe the role Devops can play. Take the resources you have and turn up the gain to the point that your output far exceeds what seem to be the expected limitations. This is imperative in growing businesses rapidly, especially when attempting to support the plethora of applications that most IT departments are saddled with in today's app-centric world. Lori dives into this topic and has some excellent commentary that is absolutely worth a read. Go take a look for yourself, you'll get a real charge out of it, I'm sure.
Security Sidebar: I Can See Your Browsing History
https://devcentral.f5.com/s/articles/security-sidebar-i-can-see-your-browsing-history
You know those stories we all heard growing up? The ones that were part horror story and part parable? Something like "If you don't eat your peas the gremlins will get you!" No? Your parents didn't use abject terror as a motivational tool? Oh. Well. Uhh... moving on, then. Anyway, this is the kind of thing that just might help you scare your security-reluctant friends straight into a security seminar/book/something useful. It's easy to forget how much data we're offering up to the world when we do something as simple as browse the web. John takes a moment to remind (and terrify) us that we are giving nearly as much as we get. Sometimes despite our best efforts to the contrary. Your browsing history is a tasty morsel for many companies out there. If you have a history, meaning you don't delete it every time you close your browser, you may want to take a look at this post to see just what kind of risk you might be running.
What are you waiting for?
https://devcentral.f5.com/s/articles/what-are-you-waiting-for
In this post Dawn Parzych digs into the many benefits of SPDY and HTTP/2. An acceleration post titled "What are you waiting for?" was simply too good to pass up, and I'm glad I didn't miss this one. There is a huge list of benefits to properly making use of the features offered in more recent years by implementing technologies such as SPDY. Dawn happens to be an expert in such things and shares much of her knowledge with the community here on her blog. Get a taste here, where she's diving into the timeline of events since SPDY was introduced 5 years ago, some gains that can be expected by swapping over to the newer content delivery mechanisms, and a very handy graphic showing the logical flow differences between HTTP 1, 1.1, and 2. If you've struggled to understand why you should care about these things or what you can expect, this is an excellent place to start. Followed immediately by digging into the rest of Dawn's work.
A Tale of Testing, SPDY, browsers and the other F5.
You shouldn't be surprised to learn that when we create Reference Architectures we actually test them. The settings you find in the Configuration Best Practice Guides have been created, tested and documented pretty carefully to work well in most environments. Recently I've moved my testing environment to a new cloud provider. It's a great service, providing exactly what you need from a cloud: elasticity, speed, and ease of use. I don't need to ask anyone to create me a new network, or provision me a server, and I've got access to a cool catalog of application and infrastructure images. My first job was to recreate my application acceleration test rig. Not a problem, I've documented the setup and it's deliberately simple - my reference architecture is big on returns and small on investment (in both your time and money). In an hour or so I have my design up and running - this cloud stuff is a real boon for the lazy. Now to run some testing. Hmm. That's not right - my acceleration test page is being pretty inconsistent - it's sometimes only marginally faster than the baseline un-accelerated version. Now I know from my previous test results that this setup, despite being pretty simple (TCP optimization, SPDY gateway and a very basic layer 7 acceleration policy), can give good, measurable improvements. To be honest, at this point I'm starting to worry. I'm about to push out a reference architecture that suddenly isn't working right. So back to my help desk and sysadmin days (see, I used to have a proper job). First thing to check when something breaks: what have you changed? In this case the list is quite long - I've moved environments entirely, I'm using a different version of the web server O/S and a different WAN emulation component, plus I've altered my test page to have more JavaScript. The only thing that's 100% the same is my BIG-IP version and configuration. Looks like I'll have to actually work out what's going on. Now I'm of the opinion that any time you are breaking out analysis tools like Wireshark or HTTPWatch (both excellent tools, of course) you're having a bad day. But in this case there seemed no alternative. Running the tests on a new browser session, the SPDY advantage is clear - all the images load almost concurrently and the trace shows all the page objects being loaded together. However, when I used CTRL + F5 to reload the page without the browser cache, the behavior reverted to the classic HTTP 1.1 waterfall, with a few objects being requested in parallel, then another few. No wonder my test results were bad. Shutting down the browser and reloading the page restores the proper SPDY behavior, and my test page loads about twice as quickly through the accelerated config. I'm going to test this behavior out on some different versions and browsers, but for now I can sleep at night knowing that our forthcoming acceleration Reference Architecture actually works, and that I've learnt some valuable testing methodology lessons.