f5 apm
11 Topics

SAML IdP - Can you have one APM support multiple SAML IdPs?
We have set up one VIP and one APM access policy that we want to use for all SaaS logins. We are currently federating with about four SaaS cloud vendors (Salesforce, Box, and others). I don't want to create multiple virtual servers or APM policies, but in the APM you can only pick one SSO configuration, and each SAML IdP service shows up as its own SSO configuration. Will I need to do an iRule to switch between them? Also, the documentation says that you can have multiple IdPs for a virtual server.

Current Setup

SAML IdP Configuration
* IdP Services -> idp_salesforce (bound to SP connector sp_connector_salesforce)
* IdP Services -> idp_box (bound to SP connector sp_connector_box)
* VirtualServer_SSO_SAML -> APM_SSO_SAML -> SSO Configuration -> only allowed to pick one IdP service (this is the problem)

Solved
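For the iRule route, one pattern worth sketching (hedged: the object names below are hypothetical examples, and whether WEBSSO::select accepts SAML IdP service objects should be confirmed for your TMOS version) is to pick the SSO configuration per session based on which SaaS host the client requested, once the access policy has allowed the session:

```tcl
when ACCESS_ACL_ALLOWED {
    # Hypothetical object names - substitute your own SSO configurations.
    # Select the SSO configuration by the host the client asked for.
    switch -glob [string tolower [HTTP::host]] {
        "*salesforce*" { WEBSSO::select idp_salesforce }
        "*box*"        { WEBSSO::select idp_box }
        default        {
            # Fall through to whatever the access profile has configured.
        }
    }
}
```

WEBSSO::select is the documented iRule command for switching among SSO configurations attached to an access profile at runtime; for a purely SP-initiated flow, the incoming AuthnRequest may also let APM match the right SP connector without an iRule.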
Unsecure web browsing on my SSL VPN

Hi All/DC Experts,

I am having an issue right now in my web browser. I have an SSL client configured with a certificate signed by DigiCert. Currently I can't access my VPN in Google Chrome, but I can access it via Internet Explorer and Mozilla Firefox, although those browsers say the connection is unsecured. Would anyone like to help, suggest, or give any thoughts regarding this?

Thanks,
-Nat
STREAM::disable and APM

TMOS 11.3.0 HF6

Does this:

when HTTP_REQUEST {
    # Disable the stream filter for all requests
    STREAM::disable
}

break APM? If not, does anybody know why I get this line in /var/log/ltm:

local/tmm err tmm[5477]: 01220001:3: TCL error: /Common/stream_test - Operation not supported (line 1) invoked from within "STREAM::disable"

Thanks.
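A likely cause (hedged, inferred from the error text rather than this exact setup): STREAM:: commands are only valid on a virtual server that has a stream profile assigned, so calling STREAM::disable without one raises "Operation not supported" regardless of APM. A defensive sketch, assuming the iRule may land on virtuals with no stream profile:

```tcl
when HTTP_REQUEST {
    # STREAM::disable requires a stream profile on the virtual server.
    # Wrap it in catch so a missing profile logs instead of throwing a TCL error.
    if { [catch { STREAM::disable } err] } {
        log local0. "STREAM::disable failed (is a stream profile assigned?): $err"
    }
}
```

The cleaner fix is usually to assign a stream profile to the virtual server, or to remove the iRule from virtuals that don't use stream rewriting.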
Why Developers Should Demand Web App Firewalls.

The Web Application Firewall debate has been raging for a very long time, and we keep hearing the same comments going back and forth. Many organizations have implemented them as a fast track to compliance, primarily compliance with PCI-DSS, but the developer community is still hesitant to embrace them as a solution to their problems. And that’s because, like so many things out there, they are seen as an “either-or” proposition. Either they can relieve a developer of the need to write security code, or they can’t. If they can’t, then why have them? I’m a developer by trade, and I get the sentiment. It’s tough to be told “spend all this money, and I’ll keep spending my time” – it could almost seem to make no sense. But very little out there is really an either/or proposition. Many of the things that a Web Application Firewall can do for you – like DoS/DDoS attack resistance – are outside the realm of the developer. If you’ve taken the time to write DDoS protection code into your web applications, you might just be in the wrong job. Many other things that Web Application Firewalls (WAFs) can do are well within the bounds of the developer domain, but could save time if they were implemented in the centralized location of the WAF instead of over and over in each app. That doesn’t relieve you of the burden of writing secure code, but it does save you time by eliminating bits to focus on and reducing redundant development, freeing up CPU cycles in the process.

My Friendly Local Game Store (FLGS) – with doors.

And that’s the key. Web App Firewalls don’t have to do everything; they have to do something that makes them worth the time to install and utilize. While many orgs have installed them, I would argue they’re not utilizing them effectively because the installation was a check-box on a compliance report, not something the organization wanted to make adequate use of. Other organizations don’t have them. But there IS one driving reason to install a web app firewall.
A significant number of attacks are stopped before they get to your machine. Let me say that again… Risk of a ne’er-do-well getting root and messing up your application and your server by taking advantage of some flaw in the OS or a library is zero for every attack that is stopped before the packets reach your server. And that’s important. Because no matter how secure your code, or the code of your purchased package, or whatever, the overall system is only as strong as the weakest point the attackers can reach. Just like a soldier hiding behind a window is harder to shoot because most of his body is protected, an application resting behind a web application firewall is mostly protected, and thus, harder for attackers to take control of. Yes, services and apps have to be open to the world in order for them to present value to your customer, but the fact that a business needs a front door to let customers in does not mean that all businesses should dispense with the front wall of their shop. Another Branch of the FLGS – Without Doors! So get the protection a Web Application Firewall offers, put it to its maximum uses that you trust (did I mention I was a coder? They can do more than I’d hand off to them ;-)), and spend the time you save on security development doing coding/projects that add business value, because security is only important when it’s breached, the next rev of an important customer portal will be talked about all the time. Hopefully with praise, but talked about, either way. Or you could take the extra time and come to Green Bay to visit those FLGS’. They do rock, whether you’re into Monopoly, RPGs, CCGs, CMGs, Minis, or any of the other XXGs. And no, I don’t have any financial interest in them, and no, they don’t have a web presence, so you’ll have to visit Green Bay to see ‘em – but the top store is right by Packer Stadium, so you can get a double benefit for the trip. I will point out that the store with no front door? 
Yeah, it’s counting on the protection of the mall’s doors – you know, kind of like an application and a Web App Firewall. It still has a drop-down chain door, though – because layered security is good.

Force Multipliers and Strategic Points of Control Revisited
On occasion I have talked about military force multipliers. These are things like terrain and minefields that can make your force able to do their job much more effectively if utilized correctly. In fact, a study of military history is every bit as much a study of battlefields as it is a study of armies. He who chooses the best terrain generally wins, and he who utilizes tools like minefields effectively often does too. Rommel in the desert often used Wadis to hide his dreaded 88mm guns – that at the time could rip through any tank the British fielded. For the last couple of years, we’ve all been inundated with the story of The 300 Spartans that held off an entire army. Of course it was more than just the 300 Spartans in that pass, but they were still massively outnumbered. Over and over again throughout history, it is the terrain and the technology that give a force the edge. Perhaps the first person to notice this trend and certainly the first to write a detailed work on the topic was von Clausewitz. His writing is some of the oldest military theory, and much of it is still relevant today, if you are interested in that type of writing. For those of us in IT, it is much the same. He who chooses the best architecture and makes the most of available technology wins. In this case, as in a war, winning is temporary and must constantly be revisited, but that is indeed what our job is – keeping the systems at their tip-top shape with the resources available. Do you put in the tool that is the absolute best at what it does but requires a zillion man-hours to maintain, or do you put in the tool that covers everything you need and takes almost no time to maintain? The answer to that question is not always as simple as it sounds like it should be. By way of example, which solution would you like your bank to put between your account and hackers? Probably a different one than the one you would you like your bank to put in for employee timekeeping. 
An 88 in the desert, compliments of WW2inColor

Unlike warfare though, a lot of companies are in the business of making tools for our architecture needs, so we get plenty of options and most spaces have a happy medium. Instead of inserting all the bells and whistles, they inserted the bells and made them relatively easy to configure, or they merged products to make your life easier. When the terrain suits a commander’s needs in wartime, the need for force multipliers like barbed wire and minefields is eliminated, because an attacker can be channeled into the desired defenses by terrain features like cliffs and swamps. The same could be said of your network. There are a few places on the network that are Strategic Points of Control, where so much information (incidentally including attackers, though this is not, strictly speaking, a security blog) is funneled through that you can increase your visibility, level of control, and even implement new functionality. We here at F5 like to talk about three of them: between your users and the apps they access, between your systems and the WAN, and between consumers of file services and the providers of those services. These are places where you can gather an enormous amount of information and act upon that information without a lot of staff effort – force multipliers, so to speak. When a user connects to your systems, the strategic point of control at the edge of your network can perform pre-application-access security checks, route them to a VPN, determine the best of a pool of servers to service their requests, encrypt the stream (on front, back, or both sides), or redirect them to a completely different datacenter or an instance of the application they are requesting that actually resides in the cloud… The possibilities are endless.
When a user accesses a file, the strategic point of control between them and the physical storage allows you to direct them to the file no matter where it might be stored, allows you to optimize the file for the pattern of access that is normally present, allows you to apply security checks before the physical file system is ever touched, again, the list goes on and on. When an application like replication or remote email is accessed over the WAN, the strategic point of control between the app and the actual Internet allows you to encrypt, compress, dedupe, and otherwise optimize the data before putting it out of your bandwidth-limited, publicly exposed WAN connection. The first strategic point of control listed above gives you control over incoming traffic and early detection of attack attempts. It also gives you force multiplication with load balancing, so your systems are unlikely to get overloaded unless something else is going on. Finally, you get the security of SSL termination or full-stream encryption. The second point of control gives you the ability to balance your storage needs by scripting movement of files between NAS devices or tiers without the user having to see a single change. This means you can do more with less storage, and support for cloud storage providers and cloud storage gateways extends your storage to nearly unlimited space – depending upon your appetite for monthly payments to cloud storage vendors. The third force-multiplies the dollars you are spending on your WAN connection by reducing the traffic going over it, while offloading a ton of work from your servers because encryption happens on the way out the door, not on each VM. Taking advantage of these strategic points of control, architectural force multipliers offers you the opportunity to do more with less daily maintenance. 
For instance, the point between users and applications can be hooked up to your ADS or LDAP server and be used to authenticate that a user attempting to access internal resources from… say… an iPad… is indeed an employee before they ever get to the application in question. That limits the attack vectors on software that may be highly attractive to attackers. There are plenty more examples of multiplying your impact without increasing staff size or even growing your architectural footprint beyond the initial investment in tools at the strategic point of control. For F5, we have LTM at the Application Delivery Network Strategic Point of Control. Once that investment is made, a whole raft of options can be tacked on – APM, WOM, WAM, ASM, the list goes on again (tired of that phrase for this blog yet?). Since each resides on LTM, there is only one “bump in the wire”, but a ton of functionality that can be brought to bear, including integration with some of the biggest names in applications – Microsoft, Oracle, IBM, etc. Adding business value like remote access for devices, while multiplying your IT force. I recommend that you check it out if you haven’t; there is definitely a lot to be gained, and it costs you nothing but a little bit of your precious time to look into it. No matter what you do, looking closely at these strategic points of control and making certain you are using them effectively to meet the needs of your organization is easy and important. The network is not just a way to hook users to machines anymore, so make certain that’s not all you’re using it for. Make the most of the terrain. And yes, if you also read Lori’s blog, we were indeed watching the same shows, and talking about this concept, so no surprise our blogs are on similar wavelengths.

Related Blogs:
* What is a Strategic Point of Control Anyway?
* Is Your Application Infrastructure Architecture Based on the ...
* F5 Tech Field Day – Intro To F5 As A Strategic Point Of Control
* What CIOs Can Learn from the Spartans
* What We Learned from Anonymous: DDoS is now 3DoS
* What is Network-based Application Virtualization and Why Do You ...
* They're Called Black Boxes Not Invisible Boxes
* Service Virtualization Helps Localize Impact of Elastic Scalability
* F5 Friday: It is now safe to enable File Upload

It Is Not What The Market Is Doing, But What You Are.
We spend an obsessive amount of time looking at the market and trying to lean toward accepted technologies. Seriously, when I was in IT management, there were an inordinate number of discussions about the state of market X or Y. While these conversations almost always revolved around what we were doing, and thus were put into context, sometimes an enterprise sits around waiting for everyone else to jump on board before joining in the flood. While sometimes this is commendable behavior, it is just as often self-defeating. If you have a project that could use technology X, then find the best implementation of said technology for your needs, and implement it. Using an alternative or inferior technology just because market adoption hasn’t happened will bite you as often as it will save you. Take PDAs, back in the bad old days when cell phones were either non-existent or just plain phones. Those organizations that used them reaped benefits from them, those that did not… Did not. While you could talk forever about the herky-jerky relationship of IT with small personal devices like PDAs, the fact is that they helped management stay better organized and kept salespeople with a device they could manage while on the road going from appointment to appointment – at least for those who didn’t wait to see what happened or raise a bunch of barriers and arguments that, retrospectively, appear almost ridiculous. Yeah, data might leak out on them. Of course, that was before USB sticks – and in more than one case entire hard disks – full of information walked away, proving that a PDA wasn’t as unique in that respect as people wanted to claim. There is a whole selection of technologies that seem to have fallen into that same funky bubble – perhaps because, like PDAs, the value proposition was just not quite right. When cell phones became your PDA also, nearly all restrictions on them were lifted in every industry, simply because the cell phone + PDA was a more complete solution.
One tool to rule them all and all that. Palm Pilot, image courtesy of wikipedia Like PDAs, there is benefit to be had from going “no, we need this, let’s do it”. Storage tiering was stuck in the valley of wait-and-see for many years, and finally seems to be climbing out of that valley simply because of the ongoing cost of storage combined with the parallel growth of storage. Still, there are many looking to save money on storage that aren’t making the move – almost like there’s some kind of natural resistance. It is rare to hear of an organization that introduced storage tiering and then dumped it to go back to independent NAS boxes/racks/whatever, so the inhibition seems to be strictly one of inexperience. Likewise, cloud suffers from some reluctance that I personally attribute to not only valid security concerns, but to very poor handling of those concerns by industry. If you read me regularly, you know I was appalled when people started making wild claims like “the cloud is more secure than your DC”, because that lack of touch with reality made people more standoffish, not less. But some of it is people not seeing a need in their organization, which is certainly valid if they’ve checked it out and come to that conclusion. Quite a bit of it, I think, is the same resistance that was applied to SaaS early on – if it’s not in your physical network, is it in your control? And that’s a valid fear that often shuts down the discussion before it starts – particularly if you don’t have an application that requires the cloud – lots of spikes in traffic, for example. Application Firewalls are an easier one in my book – having been a developer, I know that they’re going to be suspicious of canned security protecting their custom app. 
While I would contend that it isn’t “canned security” in the case of an Application Firewall, I can certainly understand their concern, and it is a communications issue on the part of the Application Firewall vendor community that will have to be resolved if uptake is to spike. Regulatory issues are helping, but far better an organization purchase a product because they believe it helps than because someone forced them to purchase. With HP’s exit from the tablet market, this is another field that is in danger of falling into the valley of waiting. While it’s conjecture, I’ll contend that not every organization will be willing to go with iPads as a corporate roll-out for groups that can benefit from tablet PCs – like field sales staff – and RIM is in such a funk organizations are unlikely to rush their money to them. The only major contender that seems to remain is Samsung with the Galaxy Tab (Android-based), but I bought one for Lori for her last birthday, and as-delivered it is really a mini gaming platform, not a productivity tool. Since that is configurable within the bounds set in the Android environment, it might not be such a big deal, but someone will have to custom-install them for corporate use. But the point is this. If you’re spending too much on storage and don’t have tiering implemented, contact a vendor that suits your needs and look into it. I of course recommend F5 ARX, but since I’m an F5 employee, expecting anything else would be silly. Along the same lines, find a project and send it to the cloud. It doesn’t matter how big or small it is; the point is to build your expertise so you know when the cloud will be most useful to you. And cloud storage doesn’t count, for whatever reason it is seeing a just peachy uptake (see last Thursday’s blog), and uses a different skill set than cloud for application deployment.
Application Firewalls can protect your applications in a variety of ways, and those smart organizations that have deployed them have done so as an additive protection to application development security efforts. If for some odd reason you’re against layered protection, then think about this… They can stop most attacks before they ever get to your servers, meaning even fingerprinting becomes a more difficult chore. Of course I think F5 products rule the roost in this market – see my note above about ARX. As to tablet PCs, well, right now you have a few choices; if you can get a benefit from them now, determine what will work for you for the next couple of years and run with it. You can always do a total refresh after the market has matured those couple of years. Right now I see Apple, RIM, and Samsung as your best choices, with RIM being kind of shaky. Lori and I own PlayBooks and love them, but RIM has managed to make a debacle of itself right when they hit the market, and doesn’t seem to be climbing out with any speed. Or you could snatch up a whole bunch of those really inexpensive Web-OS pads and save a lot of money until your next refresh :-). But if you need it, don’t wait for the market. The market is like Facebook… Eventually consistent, but not guaranteed consistent at any given moment. Think more about what’s best for your organization and less about what everyone else is doing; you’ll be happier that way. And yes, there’s slightly more risk, but risk is not the only element in calculating what’s best for the organization; it is one small input that can be largely mitigated by dealing with companies likely to be around for the next few years.

IT is not Ala Carte’. Or is it?
There has been a lot written about “IT Democratization” and how it will change the world. To some extent that is true, and I’ve previously encouraged IT management to support the process. But listening to those who see a “Bright new future” makes me realize that while we agree in principle, as always, the devil is in the details. In high school, we could take the standard lunch for a set fee or eat ala-carte’, which was essentially a short-order grill. Others could bring their own lunch, whatever they (or their parents) could pack into a bag or box. In the case of ala carte’, the school had to plan ahead, make facilities ready, and be prepared to serve up quality food at affordable prices that would meet the whims of hundreds of high-school kids on any given day. A work of art that surely deserved more recognition than we gave it. In the case of bag lunches, well, the school provided nothing but tables. If the food was bad, ill-prepared, not suitable for human consumption, or otherwise not correct, this was not the school’s problem in any way. The thing is that it was far easier for the school to eliminate all responsibility for the food and let children bring their own: no need to maintain the kitchens, stock food, suffer safety inspections, etc. The flip side of that is of course that the school has no ability to ensure the quality of the food being consumed either. Ala-Carte’ was the best solution. Children got a choice, but the school got some say in what was prepared. It was not “that piece of salami that sat out all day Sunday for Uncle Herb’s party” slapped into a sandwich. And IT needs to come to the same realization… And guide business to that realization. Accepting connections from a variety of devices, even customizing content to meet the needs of some devices, is fine, but removing all constraints makes security and quality assurance nearly impossible.
There are some great tools out there – like our BIG-IP Access Policy Manager – that will help your systems support a growing array of products, but you will still have to do the testing. Or customers/employees will, if your organization is of that mindset. And even then, these tools do not support every possible combination or do anything to ensure the user experience is better than those bag lunches some people brought to school. The key here is that IT Democratization cannot become a call to a chaotic “bring whatever you have” bag-lunch style arrangement, simply because what is being consumed is company property on company servers, and what stands to be wasted is company resources. You need to approach the problem from “we need to expand support, what can we offer”, not either of the two extremes that seem prevalent at the moment. Of course users will push for more; that’s part of what they do. But IT is responsible for the security and usability of IT systems, so there has to be an acknowledgement of user desires meeting with the requirements of corporate data and systems needs. And you have to drive that conversation. Certainly IT management, but anyone in IT that deals regularly with the rest of the company needs to reiterate the same thing… That IT wants to meet the needs of the organization, and user desires are certainly part of that, but security and usability require that the roll-out be controlled, so users need to prioritize what devices are most important to them to guide IT in its implementations. And IT needs to do the research. There is a growing industry offering all sorts of solutions for right-sizing content, along with the industry to extend enterprise-grade security to portable devices, and even specialized acceleration tools for low-bandwidth devices. You just have to find the tools that best suit your needs and use them to enable users. Is it possible that all of this is a fad? Possibly, but not likely.
The first thing everyone does on new gadgets is games, so there are a lot of people out there saying they game on their iPad and work on their laptop, but not everyone is saying that. We have three tablet PCs (a Samsung and two RIMs), and mostly we game on them at the moment, but we also work from them when our situation makes that more convenient than one of the many laptops strategically placed about the house. No doubt the ratio will tip as time goes on, and some are already talking about ditching their laptops. So enable, but use the fact that you’re enabling to control the flood. Not every new gadget that comes out needs IT support. Some do, some don’t. Make certain your users know you are there to support them, but that you will do so in the manner that works best for the organization. And if you don’t have some form of tablet PC yet, play with one. Seriously. They’re a different experience, and you’ll understand why your users want support for them yesterday.

Technical Options. Opportunity and Confusion
One of the things that I love about technology is the fact that every time there is a problem, five solutions crop up to solve it. One of the things I hate about technology is the fact that every time there is a problem, five solutions crop up to solve it… And there are marketing geeks and pundits willing to tell you which one to choose before you even know that you have the problem. I was out in Anaheim last week with F5’s rockstar salesforce, telling them about the Future of IT. Or trying to, you’ll have to ask them if I imparted any worthwhile information, since I haven’t seen evaluations of my presentations yet. One thing that struck me from the ensuing discussions though is that there are people in IT who know their stuff, but are still confused about what solutions are best for long-distance problems. The sales team told me repeatedly that their customers sometimes are uncertain of their needs when talking about access control and acceleration. They of course got the F5-biased, product laden answers, I’ll skip that for you all here and just mention that “F5 has products in each of these spaces – talk to your sales folks”. Though I’ve included the F5 product list in this article’s tags if you want an idea what to talk with sales people about. Remote office communications are often slowed by the need for a WAN connection to the home datacenter. They also have more precise security requirements than your average Internet connection – you need to know that those accessing your applications from the remote office actually have the rights to do so, since most often remote office users have access to your core systems. So you need an SSL VPN and/or application level authentication, along with something to make those connections speedy. Normally this would be Application Acceleration, but you might possibly also require WAN optimization if there is a lot of repetitive data being thrown across the line. 
If you’re not using an SSL VPN, then you need some form of secure tunnel over the line between remote office and datacenter – after all, locking down both ends does you no good if you’re unencrypted in the middle. I didn’t get a picture of any of my sessions, so you’ll have to settle for this PowerPoint image Datacenter to datacenter communications are less user intensive, and thus less browser intensive, so the benefit of Application Acceleration is less, and the benefit of WAN Optimization is commensurately greater. You still need secure connections, but perhaps not an SSL VPN – you might, it all depends upon how the secondary data center systems are managed. If they’re managed from the primary datacenter, then you probably want to have an SSL VPN just to put something between the ne’er-do-wells and your systems. Otherwise, secure, encrypted tunnels to transfer data will do the trick. Of course there are a lot of considerations here, and you know your systems better than anyone else, so consider how many remote logins the remote datacenter has, and that will give you an idea if you need an SSL VPN. For users hitting your website, the requirements are closer to a remote office, but not quite so stringent. You’ll still want an application firewall, and you’ll want to speed things up in a manner that won’t impact browsers negatively – faster is only useful if the page remains unchanged from your implementation. So Application Acceleration and a web application firewall should do the trick. My experience with application acceleration is that you want a tool that has a lot of knobs and dials because no two websites are the same. You’ll want to exclude some content from acceleration, tweak the settings on other content, etc. 
And with all of these solutions you’ll want frequent updates (particularly to firewalls) and a world-class service organization, because the products sit right in your line of production and you don’t want to waste a ton of time figuring out what’s going wrong or waiting for replacement parts. We’re not the only vendor on the planet that offers you solutions in these spaces, so check out the market. Of course I think ours are the best – if I didn’t, I’d be off working where I DID think they were the best. But every organization is different; find a vendor (or some vendors) that suits your organization’s needs the best. And check to see how they support cloud, because it is coming to a datacenter near you.

Once Again, it Really IS About the Applications.
(Booming voiceover voice); Are you running the same tired old network tools? Does your network staff have to administer security and load balancing for each and every application? Do you find application analysts and owners show a growing frustration with the network team’s response times due to overloading? Well get in there and fix that network! Get the tools that you need to make your network more application friendly, reduce fatigue amongst your network staff, and give application owners more control of their applications! That was, of course, a joke poking fun at both the way we run our networks and the advertisement that tries to sell by listing common problems in a booming voice. But as is almost always the case, there’s a grain of serious in that joke. Many organizations have their infrastructure configured such that the networking staff must intercede with a lot of functionality that is in the application domain. Be it more capacity, granular security, or routing to a new instance of the application, the network staff carries these burdens, while the application staff waits for them to do so and in many cases, the application owner gets frustrated. But the days when ADC functionality – be it security, adding servers, or shipping connections to a remote instance of the application – had to rest completely in the realm of networking staff are behind us. If you still have those problems, you need to look into a state-of-the-art ADC (yes, like F5 sells, but we do have competition if you prefer). Assuming the application people can spin up new instances, they can also get them included in the ADC’s available servers. Since most application folks can spin up a new instance, this extra step means less waiting around for another team. When security issues crop up relative to a particular application, you’ll have the application owner, systems administrators, security… Do you really need to throw the network folks in there too? 
You used to have to, but technology has relieved that burden. When application owners (or sysadmins) can administer the security policy for a given application, they just need the advice of the security team (assuming you're a big enough org to have a security team). This not only makes the organization more nimble, it reduces errors by having those directly responsible for the application implement policy for it, without a middle-man.

Need to do cloud-bursting? The networking team has to set that all up, but once it is configured and the application can take advantage of it, the when/where/how is up to the application staff, not the networking team. Again, more agile.

Just in terms of reducing the burden on networking staff – and thus freeing them for the other important things they need to do – the move to a newer ADC is worth it. Throw in the fact that the application staff is empowered to act without waiting to consult with yet another busy team, and the improved IT response time makes the overall organization more adaptable. If you choose an ADC that also resolves other pressing issues your organization has, you can really drive home solutions while laying the groundwork for future architectural developments. Pick an ADC that enhances vMotion over long distances, for example, and moving apps from DC to DC becomes simple and reliable.

So if your load balancing solution is just that – load balancing – it is time to look at where the market has gone. If you use a command line for most of your ADC configuration and management, it is again time to check where the market has gone. Enable applications staff to free up time for networking staff, and take advantage of a whole new set of capabilities while you're at it. Explore what's out there that might just make your life easier and your company more productive. And if you have an older solution, check out scalability too.
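The self-service step described above – application staff adding a freshly spun-up instance to the ADC's pool of available servers – can be sketched against BIG-IP's iControl REST API. The host, partition, pool name, and member address below are purely hypothetical, and this helper only builds the request (URL plus JSON body) so the change can be reviewed or logged before any HTTP client actually sends it:

```python
import json

def pool_member_request(bigip_host, partition, pool, member_ip, member_port):
    """Build the iControl REST call that adds a server to an LTM pool.

    Returns (url, body) rather than sending anything, so application
    staff can inspect the change before it goes to the management plane.
    """
    # iControl REST uses ~ as the folder separator in object paths.
    pool_path = f"~{partition}~{pool}"
    url = f"https://{bigip_host}/mgmt/tm/ltm/pool/{pool_path}/members"
    body = json.dumps({"name": f"{member_ip}:{member_port}",
                       "address": member_ip})
    return url, body

# Hypothetical example: add 10.0.0.21:8080 to app_pool in /Common.
url, body = pool_member_request("bigip.example.com", "Common",
                                "app_pool", "10.0.0.21", 8080)
print(url)
print(body)
```

In practice the POST would be sent with credentials delegated to the application team, which is exactly the division of labor argued for above: networking owns the ADC, application staff own membership in their own pool.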
Things have come a long way in a few short years, that's for sure. That's not to say that you shouldn't have a command line – F5's tmsh is a complete command line version of the UI – but not everyone wants to type 50 lines of script when one webpage will do, and to push functionality out beyond the network team, web interfaces are definitely needed, both to increase accessibility and to reduce errors.

Applications. Islands No More.
There was a time when application developers worried only about the hardware they were developing the application for. Those days passed a good long while ago, and then AppDev's big concern was the OS the application was being developed for. But the burgeoning growth of the World Wide Web combined with the growth of Java and Linux to drive development out of the OS and into the JVM. Then the developer focused on the JVM in question, or in many cases on the interpreted language interfaces – but not the OS or hardware. For our purposes I have lumped .NET in with JVMs as a simple head-nod to where it came from. If you find that offensive, pretend I called them "AVMs" for "Application Virtual Machines". In the end, interpreted bytecodes, no matter how you slice it.

An interesting and growing trend in this process was the dependence of both application and developer on the network. When I was a university lecturer, I made no bones about the fact that application developers would be required to know about the network going forward, and if the students didn't like that news, they should find an alternative career. I stopped teaching eight years ago to dedicate more time to some of my other passions (like embedded dev, gaming, and my family), so that was a long time ago, and here we are. Developers know more about the network than they ever dreamed they would. Even DBAs have to know a decent amount about general networking to do their jobs.

But they don't know enough, because the network is one of the primary sources of application performance woes. And that makes the network a developers' problem. Not maintaining it or managing it, but knowing how your applications rely upon it, both directly and indirectly, to deliver content to customers – and, by knowing what your application needs, influencing changes in the architecture. In the end, your application has to have the appearance of performance to end users. In that sense it is much like the movies.
A movie can look totally ridiculous while it is being filmed – with people hanging in the air and no backdrop or props around them – but as long as the final product looks realistic, it impacts the viewers not one bit. If computer animation or green screening makes the work of actors look bad, though, people will be turned off. So it is with applications. If your network causes your application's performance to suffer, the vast majority of the population is going to blame the application. The network is a nebulous thing; the application is the thing they were accessing.

Technology is catching up to this concept – F5 has new application-based ADC functionality, from iApps to application-level failover – and that will be a huge boon to developers, but you are still going to have to care: about security, about network performance, about ADCs, about load balancing algorithms. The more you know about the environment your application runs in, the better a developer you will be. It is far better to be able to say "that change would increase traffic from the database to our application and from the application to the end user – meaning out our Internet connection – and unless I'm wrong, we don't have the bandwidth for that…" than it is to make the change and then discover this simple truth.

If you're lucky enough to have an F5 product in your network, ask about iApps; it's well worth looking into, as it brings an application-oriented view to network infrastructure management. And when the decision is made to move to the cloud, you can take your iApp templates with you. They aren't simply a single-architecture tool; running in BIG-IP LTM VE, they will apply in the cloud also. While you're learning, you'll discover that some network appliances can help you do your job.
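The back-of-envelope reasoning in that quote can be made concrete. A minimal sketch, using entirely hypothetical traffic numbers, of estimating how much extra outbound bandwidth a proposed change would add:

```python
def added_bandwidth_mbps(concurrent_users, requests_per_user_per_sec,
                         added_kb_per_response):
    """Rough estimate of extra outbound bandwidth a change adds, in Mbps."""
    kb_per_sec = (concurrent_users * requests_per_user_per_sec
                  * added_kb_per_response)
    # kilobytes/sec -> megabits/sec (x8 bits, /1000 kilo->mega)
    return kb_per_sec * 8 / 1000

# Hypothetical change: 2,000 concurrent users, each averaging 0.5
# requests/sec, with each response growing by 40 KB.
extra = added_bandwidth_mbps(2000, 0.5, 40)
print(f"~{extra:.0f} Mbps of additional outbound traffic")
```

If that number is a meaningful fraction of the Internet link, the developer can raise the flag before the change ships rather than after users notice.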
Our Application Protocol Manager and Web Application Manager spring to mind: one helps with common security tasks, the other with delivery performance through caching, compression, and TCP optimization of application-specific data. You'll also get ideas for optimizing the way your application communicates on the network, because you'll understand the underlying technology better. It's more to learn, but you have been freed from the nitty-gritty of the hardware and the OS, and in many cases from the difficult parts of algorithmic development – which means adding high-level networking knowledge is a wash at worst. And it is one more way to separate yourself from the pack of developers, a way that will make you better at what you do.
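To see why offloading compression to the ADC pays off, here is a minimal illustration using Python's standard gzip module on a repetitive, hypothetical JSON payload – the kind of highly compressible text (HTML, JSON, CSS) an ADC would shrink on the wire before it ever crosses the Internet link:

```python
import gzip

# A repetitive API response stands in for typical text content.
payload = (b'{"user": "example", "status": "active", "role": "viewer"}\n'
           * 200)
compressed = gzip.compress(payload)
ratio = len(compressed) / len(payload)
print(f"{len(payload)} bytes -> {len(compressed)} bytes ({ratio:.1%})")
```

The same bytes-saved arithmetic applies whether compression happens in the application server or the ADC; doing it in the ADC simply takes the CPU cost (and the configuration) off the application's plate.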