What Does Mobile Mean, Anyway?
We tend to assume characteristics upon hearing the term #mobile. We probably shouldn't. There are – according to about a bazillion studies – 4 billion mobile devices in use around the globe. It is interesting that nearly everyone who cites this statistic and then attempts to break it down into useful data (usually for marketing) does so based on OS or device type – but never, ever based on connectivity. Consider the breakdown offered by W3C for October 2011. Device type is the chosen taxonomy, with operating system as the alternative view. Unfortunately, aside from providing useful trending on device type for application developers and organizations, this data does not provide the full range of information necessary to actually make these devices, well, useful.

Consider that my Blackberry can connect to the Internet via either 3G or WiFi. When using WiFi my user experience is infinitely better than via 3G and, if one believes the hype, will be even better once 4G is fully deployed. Also not accounted for is the ability to pair my Blackberry Playbook to my Blackberry phone and connect to the Internet via that (admittedly convoluted) chain of connectivity: Bluetooth to 3G or WiFi (which in my house adds another hop across the LAN and then back out through a fairly unimpressive so-called broadband connection). But I could also be using the Playbook's built-in WiFi (after trying both, this is the preferred method, but in a pinch…).

You also have to wonder how long it will be before "mobile" is the GPS in your car, integrated with services via Google Maps or Bing to "find nearby" while you're driving. Or, for some of us an even better option, find the nearest restroom off this highway because the four-year-old has to use it – NOW. Trying to squash "mobile" into a little box is about as useful as trying to squash "cloud" into a bigger box. It doesn't work.
The variations in actual implementation of communication channels across everything that is "mobile" require different approaches to mitigating operational risk, just as you approach SaaS differently than IaaS and PaaS. Defining "mobile" by its device characteristics is only helpful when you're designing applications or access management policies. In order to address real user-experience issues you have to know more about the type of connection over which the user is connecting – and more.

CONTEXT is the NEW BLACK in MOBILE

This is not to say that device type is not important. It is, and luckily device type (as well as browser and often operating system) is an integral part of the formula we call "context." Context is the combined set of variables that make it possible to interpret any given connection with respect to its unique client, server, network, and application needs. It's what allows organizations to localize, to hyperlocalize, and to provide content based on location. It's what enables the ability to ensure performance whether over 3G, 4G, LAN, or congested WAN connections. It's the agility to route application requests to the best server-side location based on a combination of client location, connection type, and current capacity across multiple sites – whether cloud, managed hosting, or secondary data centers.

Context is the 'secret sauce' of successful application delivery. It's the ingredient that makes it possible to make the right decisions at the right time, based on current conditions, that address operational risk – performance, security, and availability. Context is what makes the application delivery tier of the modern data center able to adapt dynamically. It's the shared data that forms the foundation for the collaboration between application delivery network infrastructure and provisioning systems, both local and in the cloud, enabling on-demand scalability and, at some point, instant mobility in an inter-cloud architecture.
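As a toy illustration of a context-driven decision (not F5 code – the policy names, connection types, and device names here are invented), a function can map two context variables to a delivery policy:

```shell
# Hypothetical sketch: choose a delivery policy from two context variables.
# Policy names, connection types, and devices are invented for illustration.
select_policy() {
  local conn="$1" device="$2"
  case "$conn" in
    wifi|lan)  echo "standard_${device}" ;;    # fast links: no extra compression
    3g|4g|wan) echo "compressed_${device}" ;;  # slow links: optimize payloads
    *)         echo "default" ;;               # unknown context: safe fallback
  esac
}

select_policy 3g blackberry    # → compressed_blackberry
select_policy wifi playbook    # → standard_playbook
```

A real implementation would weigh many more variables – location, current capacity, the application itself – but the shape is the same: context in, delivery decision out.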
Context is a key component of an agile data center, because it is only by inspecting all the variables that you can interpret them in a way that leads to optimal decisions with respect to the delivery of an application – which includes choosing the right application instance, whether it's deployed remotely in a cloud computing environment or locally on an old-fashioned piece of hardware. Knowing what device a given request is coming from is not enough, especially when the connection type and conditions cannot be assumed. The same user on the same device may connect via two completely different networking methods within the same day – or even the same hour. It is the network connection which becomes a critical decision point around which to apply proper security and performance-related policies, as different networks vary in their conditions.

So while we all like to believe that our love of our chosen mobile platform is vindicated by statistics, we need to dig deeper when we talk about mobile strategies within the walls of IT. The device type is only one small piece of a much larger puzzle called context. "Mobile" is as much about the means of connectivity as it is the actual physical characteristics of a small untethered device. We need to recognize that, and incorporate it into our mobile delivery strategies sooner rather than later.

[Updated: This post was updated 2/17/2012 – the graphic was updated to reflect the proper source of the statistics, w3schools]

Related:
- Long-distance live migration moves within reach
- HTML5 Web Sockets Changes the Scalability Game
- At the Intersection of Cloud and Control…
- F5 Friday: The Mobile Road is Uphill. Both Ways
- More Users, More Access, More Clients, Less Control
- Cloud Needs Context-Aware Provisioning
- Call Me Crazy but Application-Awareness Should Be About the Application
- The IP Address – Identity Disconnect
- The Context-Aware Cloud

SharePoint SSL handshake good but TCP RST with no content
I'm using the SharePoint 2013 iApp set up for SSL offload. Using Wireshark on a tcpdump from browsing to a FQDN shows the SSL handshake finishing and a single application data packet followed by a TCP RST. The handshake is also validated using openssl s_client. After the handshake, I've tried a simple GET and a host-specific GET (per the iApp monitor string). Both return errno=104. Fiddler shows a raised exception 'Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host'. I believe both the openssl and Fiddler errors refer to the TCP RST. An alternate IIS iApp is set up in the same Partition/VLAN/Route Domain. It uses non-SharePoint web servers in the same subnet, is browsed by the same web browser, and succeeds. This VIP resolves to web server peers from the same client. Is this typical of a SharePoint server with incorrect bindings or AAMs? Or is there a network issue to debug? What is the best way to validate networking to SharePoint when bindings/AAMs are suspect?

F5 BIG-IP Platform Security
When creating any security-enabled network device, development teams must fully investigate the security of the device itself to ensure it cannot be compromised. A gate provides no security to a house if the gap between the bars is large enough to drive a truck through. Many highly effective exploits have breached the very software and hardware designed to protect against them. If an attacker can breach the guards, they don't need to worry about being stealthy: if one can compromise the box, then one can probably compromise the code.

F5 BIG-IP Application Delivery Controllers are positioned at strategic points of control to manage an organization's critical information flow. In the BIG-IP product family and the TMOS operating system, F5 has built and maintained a secure and robust application delivery platform, and has implemented many different checks and counter-checks to help ensure a secure network environment. Application delivery security includes providing protection to the customer's Application Delivery Network (ADN), and mandatory and routine checks against the stack source code to provide internal security – and it starts with a secure Application Delivery Controller.

The BIG-IP system and TMOS are designed so that the hardware and software work together to provide the highest level of security. While there are many factors in a truly secure system, two of the most important are design and coding. Sound security starts early in the product development process. Before writing a single line of code, F5 Product Development goes through a process called threat modeling. Engineers evaluate each new feature to determine what vulnerabilities it might create or introduce to the system. F5's rule of thumb is that a vulnerability that takes one hour to fix at the design phase will take ten hours to fix in the coding phase and one thousand hours to fix after the product has shipped – so it's critical to catch vulnerabilities during the design phase.
The sum of all these vulnerabilities is called the threat surface, which F5 strives to minimize. F5, like many companies that develop software, has invested heavily in training internal development staff on writing secure code. Security testing is time-consuming and a huge undertaking, but it's a critical part of meeting F5's stringent standards and its commitment to customers. The following is by no means an exhaustive list, but the BIG-IP system has a number of features that provide heightened and hardened security: Appliance mode, iApp Templates, FIPS, and Secure Vault.

Appliance Mode

Beginning with version 10.2.1-HF3, the BIG-IP system can run in Appliance mode. Appliance mode is designed to meet the needs of customers in industries with especially sensitive data, such as healthcare and financial services, by limiting BIG-IP system administrative access to match that of a typical network appliance rather than a multi-user UNIX device. The optional Appliance mode "hardens" BIG-IP devices by removing advanced shell (Bash) and root-level access. Administrative access is available through the TMSH (TMOS Shell) command-line interface and the GUI. When Appliance mode is licensed, any user that previously had access to the Bash shell will have access only to TMSH. File permissions on the root account's home directory (/root) have been tightened for numerous files and directories. By default, new files are now only user readable and writeable, and all directories are better secured.

iApp Templates

Introduced in BIG-IP v11, F5 iApps is a powerful new set of features in the BIG-IP system. It provides a new way to architect application delivery in the data center, and it includes a holistic, application-centric view of how applications are managed and delivered inside, outside, and beyond the data center.
iApps provide a framework that application, security, network, systems, and operations personnel can use to unify, simplify, and control the entire ADN with a contextual view and advanced statistics about the application services that support the business. iApps are designed to abstract the many individual components required to deliver an application by grouping these resources together in templates associated with applications; this alleviates the need for administrators to manage discrete components on the network. F5's new NIST 800-53 iApp Template helps organizations become NIST-compliant. F5 has distilled the 240-plus pages of guidance from NIST into a template with the relevant BIG-IP configuration settings – saving organizations hours of management time and resources.

Federal Information Processing Standards (FIPS)

Developed by the National Institute of Standards and Technology (NIST), Federal Information Processing Standards are used by United States government agencies and government contractors in non-military computer systems. The FIPS 140 series are U.S. government computer security standards that define requirements for cryptographic modules, including both hardware and software components, for use by departments and agencies of the United States federal government. The requirements cover not only the cryptographic modules themselves but also their documentation. As of December 2006, the current version of the standard is FIPS 140-2.

A hardware security module (HSM) is a secure physical device designed to generate, store, and protect high-value digital cryptographic keys. It is a secure crypto-processor that often comes in the form of a plug-in card (or other hardware) with tamper protection built in. HSMs also provide the infrastructure for finance, government, healthcare, and other industries to conform to industry-specific regulatory standards.
FIPS 140 enforces stronger cryptographic algorithms, provides good physical security, and requires power-on self-tests to ensure a device is still in compliance before operating. FIPS 140-2 evaluation is required to sell products implementing cryptography to the federal government, and the financial industry is increasingly specifying FIPS 140-2 as a procurement requirement. The BIG-IP system includes a FIPS cryptographic/SSL accelerator – an HSM option specifically designed for processing SSL traffic in environments that require FIPS 140-1 Level 2–compliant solutions. Many BIG-IP devices are FIPS 140-2 Level 2–compliant. This security rating indicates that once sensitive data is imported into the HSM, it incorporates cryptographic techniques to ensure the data is not extractable in plain-text format, and it provides tamper-evident coatings or seals to deter physical tampering.

The BIG-IP system includes the option to install a FIPS HSM (BIG-IP 6900, 8900, 11000, and 11050 devices). BIG-IP devices can be customized to include an integrated FIPS 140-2 Level 2–certified SSL accelerator. Other solutions require a separate system or a FIPS-certified card for each web server, but the BIG-IP system's unique key management framework enables a highly scalable secure infrastructure that can handle higher traffic levels and to which organizations can easily add new services. Additionally, the FIPS cryptographic/SSL accelerator uses smart cards to authenticate administrators, grant access rights, and share administrative responsibilities to provide a flexible and secure means for enforcing key management security.

Secure Vault

It is generally a good idea to protect SSL private keys with passphrases. With a passphrase, private key files are stored encrypted on non-volatile storage. If an attacker obtains an encrypted private key file, it will be useless without the passphrase.
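The passphrase protection described above can be seen with OpenSSL; a minimal sketch (the file name and passphrase are placeholders – in practice a passphrase would never be typed on a command line where it lands in shell history):

```shell
# Generate an RSA private key encrypted under an AES-256 passphrase.
openssl genrsa -aes256 -passout pass:example-passphrase -out server.key 2048

# The stored file is ciphertext; without the passphrase it is useless.
grep ENCRYPTED server.key
```

The PEM header of the resulting file carries an ENCRYPTED marker, and any attempt to load the key (for example with `openssl rsa -in server.key`) prompts for the passphrase.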
In PKI (public key infrastructure), the public key enables a client to validate the integrity of something signed with the private key, and hashing enables the client to validate that the content was not tampered with. Since the private key of the public/private key pair could be used to impersonate a valid signer, it is critical to keep those keys secure. Secure Vault, an SSL-encrypted storage system introduced in BIG-IP version 9.4.5, allows passphrases to be stored in encrypted form on the file system. In BIG-IP version 11, companies have the option of securing their cryptographic keys in hardware, such as a FIPS card, rather than encrypted on the BIG-IP hard drive. Secure Vault can also encrypt certificate passwords for enhanced certificate and key protection in environments where FIPS 140-2 hardware support is not required but additional physical and role-based protection is preferred. In the absence of hardware support like FIPS/SEEPROM (Serial (PC) Electrically Erasable Programmable Read-Only Memory), Secure Vault is implemented in software. Even if an attacker removed the hard disk from the system and painstakingly searched it, it would be nearly impossible to recover the contents due to Secure Vault's AES encryption.

Each BIG-IP device comes with a unit key and a master key. Upon first boot, the BIG-IP system automatically creates a master key for the purpose of encrypting, and therefore protecting, key passphrases. The master key encrypts SSL private keys, decrypts SSL key files, and synchronizes certificates between BIG-IP devices. Further increasing security, the master key is itself encrypted by the unit key, which is an AES-256 symmetric key. When stored on the system, the master key is always encrypted, never in the form of plain text. In an HA (high-availability) configuration the master key follows the configuration, so all units share the same master key but still have their own unit keys.
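The layered protection described above – a master key wrapped by a unit key so it is never stored in the clear – can be sketched generically with OpenSSL. This is a conceptual illustration only, not F5's actual implementation; the file names are invented:

```shell
# Generate a "unit key" (hex passphrase) and a 256-bit "master key".
openssl rand -hex 32 > unit.key
openssl rand -out master.key 32

# Wrap (encrypt) the master key under the unit key; in the real design only
# the wrapped copy would ever be written to disk.
openssl enc -aes-256-cbc -pbkdf2 -in master.key -out master.key.enc -pass file:unit.key
rm master.key

# Recovering the master key requires possession of the unit key.
openssl enc -d -aes-256-cbc -pbkdf2 -in master.key.enc -out master.recovered -pass file:unit.key
```

Without unit.key, master.key.enc is just AES ciphertext – which is the property the article attributes to Secure Vault's storage of passphrases and keys.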
The master key is synchronized using the secure channel established by the CMI infrastructure as of BIG-IP v11. Passphrases encrypted with the master key cannot be used on systems other than the units for which the master key was generated. Secure Vault support has also been extended to vCMP guests. vCMP (Virtual Clustered Multiprocessing) enables multiple instances of BIG-IP software to run on one device. Each guest gets its own unit key and master key. The guest unit key is generated and stored at the host, thus enforcing the hardware support, and it's protected by the host master key, which is in turn protected by the host unit key in hardware.

Finally, F5 provides Application Delivery Network security to protect the most valuable application assets. To provide organizations with reliable and secure access to corporate applications, F5 must carry the secure application paradigm all the way down to the core elements of the BIG-IP system. It's not enough to provide security for application transport; the transporting appliance must also provide a secure environment. F5 ensures BIG-IP device security through various features and a rigorous development process. It is a comprehensive process designed to keep customers' applications and data secure. The BIG-IP system can be run in Appliance mode to lock down configuration within the code itself, limiting access to certain shell functions; Secure Vault secures precious keys from tampering; and optional FIPS cards ensure organizations can meet or exceed particular security requirements. An ADN is only as secure as its weakest link. F5 ensures that BIG-IP Application Delivery Controllers are an extremely secure link in the ADN chain.
ps

Resources:
- F5 Security Solutions
- Security is our Job (Video)
- F5 BIG-IP Platform Security (Whitepaper)
- Security, not HSMs, in Droves
- Sometimes It Is About the Hardware
- Investing in security versus facing the consequences | Bloor Research White Paper
- Securing Your Enterprise Applications with the BIG-IP (Whitepaper)
- TMOS Secure Development and Implementation (Whitepaper)
- BIG-IP Hardware Updates – SlideShare Presentation
- Audio White Paper – Application Delivery Hardware: A Critical Component
- F5 Introduces High-Performance Platforms to Help Organizations Optimize Application Delivery and Reduce Costs

Technorati Tags: F5, PCI DSS, virtualization, cloud computing, Pete Silva, security, coding, iApp, compliance, FIPS, internet, TMOS, big-ip, vCMP

Having to upgrade from v10 to v11 manually?
Hello everybody, I need to replace an old BIG-IP 1500 platform with a stronger and newer one, and now I'm facing a weird situation: the old one cannot run v11, and the new one cannot run v10. Sticking to the AskF5 best-practice articles, I couldn't find anything for such a situation. On DevCentral I found the /usr/libexec/bigpipe daol command and gave it a try: well, I had to delete nearly all of the HA configuration and a lot of other stuff because of error messages. After that, it proceeded a few steps further but then ran into a segmentation fault. :-( So, is there any other way I could do a migration automatically? Converting it manually would be a life's work… Thank you very much for your contributions!

It is All About Repeatability and Consistency.
#f5 It is often more risky to skip upgrading than to upgrade. Know the risks/benefits of both.

Not that I need to tell you, but there are several things in your network that you could have better control of. Whether it is consistent application of security policy, consistent configuration of servers, or even the setup of network devices, they're in there, being non-standard. And they're costing you resources in the long run. Sure, the staff today knows exactly how to tweak settings on each box to make things perform better, and knows how to improve security on this given device for this given use, but eventually it won't be your current staff responsible for these things, and that new staff will have one heck of a learning curve unless you're far better at documenting exceptions than most organizations. Sometimes exceptions are inevitable: this device has a specific use that requires specific settings you would not want to apply across the data center. That's one of the reasons IT exists: to figure that stuff out so the business runs smoothly. But sometimes it is just technology holding you back from standardizing.

Since I'm not slapping around anyone else by doing so, I'll use my employer as an example of technology and how changes to it can help or hinder you. Version 9.X of TMOS – our base operating system – was hugely popular, and is still in use in a lot of environments, even though we're on version 11.X and have a lot of new and improved things in the system. The reason is change limitation (note: not change control, but limitation). Do you upgrade a network device that is doing what it is supposed to simply because there's a newer version of firmware? It is incumbent upon vendors to give you a solid reason why you should.
I've had reason to look into an array of cloud-based accounting services of late, and frankly, there is not a compelling reason offered by the major software vendors to switch to their cloud model and become even more dependent upon the vendor (who would now be not only providing software but storing your data too). I feel that F5 has offered plenty of solid reasons to upgrade, but if you're in a highly complex or highly regulated environment, solid reasons to upgrade do not always equate to upgrades being undertaken. Again, the risk/reward ratio has to be addressed at some point. And I think there is a reluctance in many enterprises to consider the benefits of upgrading.

I was at a large enterprise that was using Windows 95 as a desktop standard in 2002. Why? Because they believed the risks inherent in moving to a new version of Windows corporate-wide were greater than the risks of staying. Frankly, by 2002 there was PLENTY of evidence that Windows 98 was stable and a viable replacement for Windows 95. You see the same phenomenon today: lots of enterprises are still limping along with Windows XP, even though, by and large, Windows 7 is a solid OS.

In the case of F5, there is a feature in the 11.X series of updates to TMOS that should, by itself, offer a driving reason to upgrade. I think it has not been seriously considered by some of our customers for the same reason the Windows upgrades were slow – if you don't look at what benefits an upgrade can bring, its risks can scare you. BIG-IP running TMOS 11.X has an astounding set of functionality called iApps that allows you to standardize how network objects – for load balancing, security, DNS services, WAN optimization, web application firewalling, and a host of other network services – are deployed for a given type of application. Need to deploy, load balance, and protect Microsoft Exchange? Just run the iApp in the web UI.
It asks you a few questions and then creates everything needed, based upon your licensing options and your answers to the questions. Given that you can further implement your own iApps, you can guarantee that every instance of a given application has the exact same network objects deployed to make it secure, fast, and available. From an auditing perspective, it gives a single location (the iApp) for information about all applications of the same type. There are pre-generated iApps for a whole host of applications, a group here on DevCentral dedicated to user-developed iApps, and even a repository of iApps on DevCentral. And whatever risk is perceived in upgrading is more than mitigated by the risk reduction of standardizing the deployment and configuration of network objects to support applications. IIS has specific needs, but all IIS instances can be configured the same way using the IIS iApp, reducing the risk of operator error or auditing gotchas.

I believe that Microsoft did a good job of putting out information about Windows 7, and that organizations were working on risk avoidance and cost containment. The same is true of F5 and TMOS 11.X. I believe that kind of reluctance happens a lot in the enterprise, and it's not always the best choice in the long run. You cannot know which is more risky – upgrading or not – until you know what the options are. I don't think there are very many professional IT staff who would say staying with Windows 95 for years after Windows 98 was out was a good choice, hindsight being 20/20 and all.

Look around your data center. Consider the upgrade options. Do some research, and make sure you understand the risks of not upgrading a device, server, desktop, whatever, as well as the risks of performing the upgrade. And yeah, I know you're crazy busy. I also know that many upgrades offer overall time savings, with an upfront cost. If you don't invest in time-saving, you'll never reap time savings.
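Deploying an application service from an iApp template can also be driven from the tmsh command line; a sketch (the service name here is an example, and available template names vary by version and licensing):

```
# From tmsh: create an application service from the built-in HTTP iApp template
create sys application service my_exchange_app { template f5.http }

# List application services deployed from templates
list sys application service
```

Because every service created from the same template carries the same set of network objects, this is also a convenient audit point: one `list` command shows every templated deployment.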
Rocking it every day, like most of you do, is only enough as long as there are enough hours in the day. And there are never enough hours in the IT day. As I mentioned at #EnergySec2012 last week, there are certainly never enough hours in the InfoSec day.

F5 Friday: Devops for DNS
#devops #cloud Managing a global presence – especially in the cloud – can introduce additional complexity.

Back in the day when virtualization and cloud were just making waves, one of the first challenges made obvious was managing IP addresses. As VM density increased, there were more IP network management tasks that had to be handled – from distributing and assigning IP addresses to VLAN configuration to DNS entries. All this had to be done manually. It was recognized there was a growing gap between the ability of operations to handle the volatility in the IP network due to virtualization and cloud, but very little was done to address it.

One of the forerunners of automation in the IP management space was Infoblox. Only we didn't call it "automation" then, we called it "Infrastructure 2.0". After initially focusing on managing the internal volatility in the IP network, the increase in architectures adopting a hyper-hybrid cloud model are turning that focus outward, toward the need to more efficiently manage the global IP network space. The global IP network space, too, has volatility and may in fact require more flexibility as organizations seek to leverage cloud bursting and balancing architectures to assure availability and performance to its end-users.

One of the requisites of a highly available global-spanning architecture is the deployment of multiple global server load balancing (GSLB) solutions such as BIG-IP Global Traffic Manager (GTM). To assure availability a la disaster recovery/business continuity initiatives, it is imperative to deploy what are essentially redundant yet independently operating global load balancing devices. This distribution means multiple, remote devices that must be managed and, just as importantly, that must tie into global IP address management frameworks.
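The kind of decision a GSLB device makes – choosing among sites based on current conditions – can be caricatured in a few lines (the site names and capacity numbers are invented; a real GTM weighs far more inputs, including client location, connection type, and site health):

```shell
# Toy sketch of a GSLB decision: pick the site with the most spare capacity.
# Each argument is a "site:spare_capacity" pair.
pick_site() {
  printf '%s\n' "$@" | sort -t: -k2 -nr | head -n1 | cut -d: -f1
}

pick_site dc-east:20 dc-west:55 cloud-1:35   # → dc-west
```

The point of the integration discussed below is that these decisions, and the DNS records behind them, stop being hand-managed and become part of an automated, centrally managed grid.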
Most of this today is not automated; organizations advancing their devops initiatives may have already begun to embrace this demesne and automate using available tooling such as scripting and device APIs, but for the most part organizations have not yet focused on this problem (having quite a bit of internal work to do in the first place). This is integration work, it's management work, it's a job for devops – and it's an important one. The ability to integrate and seamlessly manage hyper-hybrid architectures is paramount to enabling federated cloud ecosystems in which organizations can move about as demand and costs require, without labor-intensive activity on the part of operations.

Automating and centralizing a federated ecosystem at the global IP network layer is a transformational shift on par with the impact of the steam train in the US's old west. The impact of faster and further was profound and enabled expansion of population and business alike. Federation enabled by the appropriate toolsets and processes will provide similar benefits, enabling business and IT to expand and improve its services to its end-users by leaps and bounds, without incurring the costs or risks of a disconnected set of remotely deployed resources.

F5 and Infoblox have enabled exactly this type of solution, comprising integration of F5 GTM via our iControl API with Infoblox Load Balancer Manager (LBM). The solution merges appliance-based DNS, DHCP, and IP address management with a network of standalone BIG-IP GTM devices to create a single management grid. With lots of devops goodness like changing and synchronizing configuration in a hyper-hybrid (or just highly distributed) environment, the integrated solution is an enabler of broader, more dynamic and distributed architectures.
It enables the automation of tasks without scripting, assures a consistent workflow with pre-configured "best practices" for DNS management, and automates daily operational tasks such as synchronizing updates and checking on status. You can read more in the solution profile Automate DNS Network and Global Traffic Management or in one of Don's excellent blogs on the topic: F5 Friday: Infoblox and F5 Do DNS and Global Load Balancing Right.

Related blogs & articles:
- DNS Architecture in the 21st Century
- Global Server Load Balancing Resources
- Creating a DNS Blackhole. On Purpose
- DNS Is Like Your Mom
- No DNS? No… Anything
- DNS Gets an Upgrade
- BIG-IP v11: Operational Efficiency for Federal Government Agencies
- DNSSEC: Is Your Infrastructure Ready?
- Global Server Load Balancing Overview
- High-Performance DNS Services in BIG-IP Version 11 [PDF]
- The DDoS Threat Spectrum [PDF]
- Availability and the Cloud [PDF]
- Technology Alliance Partnership Update Week of September 14th 2012

Lori MacVittie is a Senior Technical Marketing Manager, responsible for education and evangelism across F5's entire product suite. Prior to joining F5, MacVittie was an award-winning technology editor at Network Computing Magazine. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University. She is the author of XAML in a Nutshell and a co-author of The Cloud Security Rules.

F5 Friday: Workload Optimization with F5 and IBM PureSystems
#IBMPureSystems #devops #cloud Optimizing and assuring availability of applications is critical to the success of any application architecture.

This week IBM announced its next-generation platform, IBM PureSystems. IBM PureSystems comprise a fully integrated set of solutions managed through a single interface, providing a faster time to value by reducing system configuration time and simplifying system administration tasks. It's kind of like IBM's version of devops in a box. But like most integrated compute, network, and storage systems designed today for rapid provisioning and simplified management in the increasingly complex interconnected systems making up cloud computing, it needed a little something to make it even better: application delivery. That's where F5 comes in.

F5 brings its application delivery expertise to IBM PureSystems through a hybrid application delivery network (ADN) comprising both virtual and hardware-hosted application delivery services. Together, F5 and IBM provide an integrated computing solution for consolidating IT by increasing application availability and adding security and efficiency to virtualized servers, storage, and networks, with unified management of application delivery for the enterprise and the cloud.

F5 Solutions: F5 BIG-IP LTM VE running on VMware on the IBM PureFlex System, and the BIG-IP product line front-ending IBM PureFlex Systems to provide offload, optimization, disaster recovery, and security. In testing scenarios, both BIG-IP hardware and virtualized platforms enhance the service quality of IBM PureSystems and provide a high-availability environment for an enterprise-class application – in this case IBM WebSphere® Application Server.
Using both the hardware and virtual editions of BIG-IP products provides certain advantages in large environments:

· Hardware-based platforms can offload the high-processor-utilization tasks of the IBM PureFlex Systems server nodes, such as encryption and compression, increasing virtual machine (VM) density. They can also serve as a frontline defense against distributed denial-of-service and other attacks before they reach the application environment.

· Using the virtual edition for individual applications, customers, or lines of business can provide greater control, granularity, and elasticity. Additional modules can then augment standard high availability, optimization, and security scenarios with single sign-on capability, application firewalling, web acceleration, and wide area network (WAN) optimization.

· Hardware and software work in tandem to create a single, unified platform for application delivery, high availability, optimization, and control for IBM PureSystems environments. For example, the addition of F5 BIG-IP services to the IBM PureSystems architecture allowed the system to handle failures at all layers – whether it was the IBM PureFlex Systems node or the application server, whether it was a virtual instance of BIG-IP VE or the active physical BIG-IP LTM. Regardless of where the failure occurred, the systems dynamically compensated for it and assured the highest availability possible within and across the systems.

The advantages of deploying both hardware and virtual editions of BIG-IP products with IBM PureSystems include:
Offload encryption and compression onto hardware-based platforms to increase virtual machine (VM) density and improve performance.
Improve front-line security to defeat attacks before they reach the application environment.
Enhance control, granularity, and elasticity with a virtualized Application Delivery Controller (ADC) for individual applications, customers, or lines of business.
Further enhancements include software modules providing single sign-on capability, application firewalling, web acceleration, and WAN optimization.
Hardware, virtual editions, and software work in tandem to deliver a unified, high-availability platform for application delivery, optimization, and control in IBM PureSystems environments.

Additional Resources:
F5 and IBM PureSystems: A Foundation for the Next Generation Data Center
F5 Solutions for IBM applications
Bursting to the IBM SmartCloud
F5 and IBM's Dynamic Infrastructure Strategy
IBM and F5 — Extending data center networks to the cloud
F5 Friday: Addressing the Unintended Consequences of Cloud
F5 Friday: HP Cloud Maps Help Navigate Server Flexing with BIG-IP
F5 Friday: Enhancing FlexPod with F5
The Conspecific Hybrid Cloud
The Three Axioms of Application Delivery
At the Intersection of Cloud and Control…
The Pythagorean Theorem of Operational Risk

F5 Friday: Addressing the Unintended Consequences of Cloud
Operationally unified architectures able to leverage cloud computing without compromising control and security are key to mitigating the unintended consequences of cloud. The adoption of cloud computing introduces operational challenges that once would have required a single-vendor architecture to solve. Today, thanks to service-focused APIs and control planes, it is possible to overcome the operational challenges posed by the need for diverse infrastructure components and systems to collaborate. By equipping F5 solutions with a flexible, service-focused control plane architecture, F5 infrastructure can collaborate with components, systems, and APIs to enable the automation and flexibility required to realize a dynamic data center.

F5 solutions are deployed at strategic points in the network, forming the foundation for a diversified, dynamic data center powered by a flexible control plane and designed on the premise of integration. This position in the data center allows F5 solutions to enable the integration, replication, and automation of policies, processes, applications, and systems in an operationally consistent way. By maintaining the cost savings and operational efficiencies of cloud and virtualization, F5 mitigates operational risks to performance, availability, and security across every environment – on and off premises.

This approach is necessary because the unintended consequences (side effects) of cloud can directly increase operational complexity, costs, and risk, partially or completely negating its benefits. Avoiding this requires a strategy built on operational consistency and the ability to replicate operational processes, policies, and data across multiple deployments. That means any deployed organizational resource, wherever it resides, should be integrated into and subject to the same processes and policies that govern existing, data-center-deployed resources.
Strategic points of control form the framework for ensuring the benefits of virtualization and cloud computing are not lost to unintended consequences. Technologies from F5 that specifically address these unintended consequences by supporting a dynamic and operationally consistent delivery platform capable of spanning environments and data center models include:

iApp
iApp is the feature name for what are fundamentally programmable application templates. These templates present simple user interfaces for complex system configurations, enabling consistent, repeatable deployment processes that replicate the infrastructure policies responsible for mitigating operational risk. iApp makes repeatable, automated deployments of specific applications a reality, freeing operations and reducing the possibility of mismatched policies or misconfiguration leading to downtime, poor performance, or a security breach.
ADDRESSES: Fast deployment of applications without compromising security, and increased operational efficiency through a more service-oriented, black-box style of deployment.

TMOS
TMOS is the underlying, shared product platform for F5 BIG-IP products, unlike anything else in the industry. The TMOS application control plane architecture creates a unified pool of highly scalable, resilient, and reusable services that can adapt to dynamically changing data center and virtual conditions. It enables mitigation of operational risk – security, availability, performance – consistently across dispersed and heterogeneous environments. As a unified platform, TMOS is the foundation for an operationally unified architecture that enables control over policies, processes, and resources regardless of deployment location. This reduces administrative overhead, mitigates operational risk, and provides the freedom to integrate the right resources at the right time without regard to location.
ADDRESSES: A shared platform allows consolidation of application delivery services onto a single platform, boosting efficiency without requiring multiple management systems or frameworks.

Scale N
The Scale N architecture provides the ability to scale up or scale out on demand, creating an elastic application delivery controller (ADC) processing platform that can grow as business needs change. The Scale N approach delivers a superior way to scale application delivery services, creating true deployment flexibility and simplifying system- and application-level maintenance while departing from the traditional N+1 model. It brings flexibility and scalability to infrastructure, ensuring the "network" is never a performance impediment, and replicates policies while scaling out, even across environments. Device Service Clustering (DSC) – a core component of Scale N – provides the ability to group devices and services across an array of systems (appliances, VIPRION chassis, or virtual editions) to create a horizontal cluster across which policies can be synchronized.
ADDRESSES: Auto-scalability of both infrastructure and applications without sacrificing the elasticity associated with cloud computing models.

APIs
F5's open-standards-based API, iControl, provides granular control over BIG-IP solutions throughout the application delivery lifecycle. It enables integration of F5 BIG-IP with management systems, automation frameworks, and orchestration solutions to ensure operational collaboration across the data center and into cloud computing environments.
ADDRESSES: Auto-scalability of applications by providing an integration interface for popular virtualization provisioning and orchestration solutions.

iStats
iStats are custom, configurable control and data plane statistics that allow greater visibility into the business and operational performance of applications and BIG-IP.
iStats bring additional flexibility to visibility, enabling the gathering of performance and availability statistics that allow operations to set the thresholds and parameters that form the basis for elasticity in application scalability.
ADDRESSES: Auto-scalability of applications by providing greater visibility options.

By taking advantage of an operationally consistent application delivery tier to integrate cloud resources, organizations can mitigate the unintended consequences of cloud and realize its benefits without compromising elsewhere.
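As a sketch of what iControl-driven integration can look like, the snippet below builds the iControl REST request an orchestration system might issue to add a new member to a BIG-IP pool as it scales an application tier out. The host name, pool name, and member address are hypothetical, and a real deployment would send the request over HTTPS with authentication; this only constructs the URL and JSON body.

```python
import json

# Hypothetical BIG-IP management address -- illustration only.
BIGIP_HOST = "bigip.example.com"

def add_pool_member_request(pool, member_ip, member_port):
    """Build the iControl REST URL and JSON body to add a member to a pool.

    POSTing this body to the pool's members collection registers a new
    node:port with the pool -- the kind of call an orchestrator makes
    when it provisions another application instance.
    """
    url = "https://%s/mgmt/tm/ltm/pool/~Common~%s/members" % (BIGIP_HOST, pool)
    body = {"name": "%s:%d" % (member_ip, member_port), "partition": "Common"}
    return url, json.dumps(body)

# Example: a newly provisioned WebSphere instance joins the pool.
url, body = add_pool_member_request("websphere_pool", "10.10.1.12", 9080)
print(url)
print(body)
```

The same pattern in reverse (a DELETE against the member's URL) lets the orchestrator drain capacity when demand falls, which is the auto-scalability collaboration described above.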