strategic point of control
What is a Strategic Point of Control Anyway?
From mammoth hunting to military maneuvers to the data center, the key to success is control.

Recalling your elementary school lessons, you'll probably remember that mammoths were large and dangerous creatures and, like most large animals, quite deadly to primitive man. Yet man found a way to hunt them effectively and, we assume, with more than a small degree of success, as we are still here and, well, the mammoths aren't.

[Image: Marx Cavemen. Photo and artwork: Fred R. Hinojosa.]

The theory of how man successfully hunted ginormous creatures like the mammoth goes something like this: a group of hunters would single out a mammoth and herd it toward a point at which the hunters would have an advantage – a narrow mountain pass, a clearing enclosed by large rock, etc. The qualifying criterion for the place in which the hunters would finally confront their next meal was that it afforded the hunters a strategic point of control over the mammoth's movement. The mammoth could not move away without either (a) climbing sheer rock walls or (b) being attacked by the hunters. By forcing mammoths into a confined space, the hunters controlled the environment and the mammoth's ability to flee, and thus a successful hunt was had by all. At least by all the hunters; the mammoths probably didn't find it successful at all.

Whether you consider mammoth hunting or military maneuvers or strategy-based games (chess, checkers), one thing remains the same: a winning strategy almost always involves forcing the opposition into a situation over which you have control. That might be a mountain pass, or a densely wooded forest, or a bridge. The key is to force the entire complement of the opposition through an easily and tightly controlled path. Once they're on that path – and can't turn back – you can execute your plan of attack. These easily and highly constrained paths are "strategic points of control." They are strategic because they are the points at which you are empowered to perform some action with a high degree of assurance of success.

In data center architecture there are several strategic points of control at which security, optimization, and acceleration policies can be applied to inbound and outbound data. These strategic points of control are important to recognize, as they are the most efficient – and effective – points at which control can be exerted over the use of data center resources.

DATA CENTER STRATEGIC POINTS of CONTROL

In every data center architecture there are aggregation points. These are points (one or more components) through which all traffic is forced to flow, for one reason or another. For example, the most obvious strategic point of control within a data center is at its perimeter – the router and firewalls that control inbound access to resources and, in some cases, control outbound access as well. All data flows through this strategic point of control, and because it's at the perimeter of the data center it makes sense to implement broad resource access policies at this point. Similarly, strategic points of control occur internal to the data center at several "tiers" within the architecture. Several of these tiers are:

- Storage virtualization provides a unified view of storage resources by virtualizing storage solutions (NAS, SAN, etc.). Because the storage virtualization tier manages all access to the resources it is managing, it is a strategic point of control at which optimization and security policies can be easily applied.
- Application delivery / load balancing virtualizes application instances and ensures availability and scalability of an application. Because it is virtualizing the application, it becomes a point of aggregation through which all requests and responses for an application must flow. It is a strategic point of control for application security, optimization, and acceleration.

- Network virtualization is emerging internal to the data center architecture as a means to provide inter-virtual-machine connectivity more efficiently than can be achieved through traditional network connectivity. Virtual switches often reside on a server on which multiple applications have been deployed within virtual machines. Traditionally it might be necessary for communication between those applications to physically exit and re-enter the server's network card. By virtualizing the network at this tier, the physical traversal path is eliminated (along with the associated latency) and more efficient inter-VM communication can be achieved. This is a strategic point of control at which access to applications at the network layer should be applied, especially in a public cloud environment where inter-organizational residency on the same physical machine is highly likely.

OLD SKOOL VIRTUALIZATION EVOLVES

You might have begun noticing a central theme to these strategic points of control: they are all points at which some kind of virtualization – and thus aggregation – occurs naturally in a data center architecture. This is the original (first) kind of virtualization: the presentation of many resources as a single resource, a la load balancing and other proxy-based solutions. When a one-to-many (1:M) virtualization solution is employed, it naturally becomes a strategic point of control by virtue of the fact that all "X" traffic must flow through that solution, and thus policies regarding access, security, logging, etc. can be applied in a single, centrally managed location.

The key here is "strategic" and "control." The former relates to the ability to apply the latter over data at a single point in the data path. This kind of 1:M virtualization has been a part of data center architectures since the mid 1990s. It has evolved to provide ever broader and deeper control over the data that must traverse these points of control by nature of network design. These points have become, over time, strategic in terms of the ability to consistently apply policies to data in as operationally efficient a manner as possible. Thus have these virtualization layers become "strategic points of control." And you thought the term was just another square on the buzz-word bingo card, didn't you?
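To make the 1:M idea concrete, here is a minimal, hypothetical sketch (not any particular product's behavior) of a single aggregation point that fronts many application instances and applies access and logging policy once, rather than on every backend:

```python
import logging
from typing import List

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("control-point")

class StrategicControlPoint:
    """One virtual endpoint (1) fronting many backend instances (M)."""

    def __init__(self, backends: List[str], blocked_networks: List[str]):
        self.backends = backends            # e.g. ["10.0.0.11", "10.0.0.12"]
        self.blocked = blocked_networks     # crude access policy for illustration
        self._next = 0

    def handle(self, client_ip: str, request_path: str) -> str:
        # Security policy applied once, at the aggregation point
        if any(client_ip.startswith(net) for net in self.blocked):
            log.warning("denied %s -> %s", client_ip, request_path)
            return "403 Forbidden"
        # Logging and distribution policy applied once, for all backends
        backend = self.backends[self._next % len(self.backends)]
        self._next += 1
        log.info("routed %s %s -> %s", client_ip, request_path, backend)
        return f"forwarded to {backend}"

vip = StrategicControlPoint(["10.0.0.11", "10.0.0.12"], blocked_networks=["192.0.2."])
print(vip.handle("203.0.113.7", "/app"))
print(vip.handle("192.0.2.9", "/app"))
```

The point of the sketch is the placement, not the code: because every request must pass through the one virtual endpoint, changing the policy in one place changes it for every instance behind it.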
Solutions are Strategic. Technology is Tactical.

And it all begins with the business.

Last week was one of those weeks where my to-do list was growing twice as fast as I was checking things off. When that happens, some things end up deprioritized and just don't get the attention you know they deserve. Such was the case with a question from eBizQ regarding the relationship between strategy and technology:

Does strategy always trump technology? As Joe Shepley wonders in this interesting post, Strategy Trumps Technology Every Time, could you have an enterprise content management strategy without ECM technology. So do you think strategy trumps technology every time?

I answered with a short response because, well, it was a very long week:

I wish I had more time to expound on this one today, but essentially technology is a tactical means to implement a solution as part of the execution on a strategy designed to address a business need or problem.

That definitely deserves more exploration and explanation.

STRATEGY versus TACTICS

The reason this was my answer is the difference between strategy and tactics. Strategy is the overarching goal; it's the purpose to which you are working. Tactics, on the other hand, are the specific details regarding how you're going to achieve that goal. Let's apply it to something more mundane: the focus of the strategy may be very narrow – consuming a sammich – or it may be very broad and vague, as it often is when applied to military or business strategy. Regardless, a strategy is always in response to some challenge and defines the goal, the solution, to addressing the challenge. Business analysts don't sit around, after all, and posit that the solution to increasing call duration in the call center is to implement software X deployed on a cloud computing framework. The solution is to improve the productivity of the customer service representatives. That may result in the implementation of a new CRM system, i.e. technology, but it just as well may be a more streamlined business process that requires changes in the integration of the relevant IT systems. The implementation, the technology, is tactical.

Tactics are more specific. In military strategy the tactics are often refined as the strategy is imparted down the chain of command. If the challenge is to stop the enemy from crossing a bridge, the tactics will be very dependent on the resources and personnel available to each commander as they receive their orders. A tank battalion, for example, is going to use different tactics than the engineer corps, because they have different resources, equipment and, ultimately, perspectives on how to go about achieving any stated goal.

The same is true for IT organizations. The question posed was focused on enterprise content management, but you can easily abstract this out to an enterprise architecture strategy or application delivery strategy or cloud computing strategy. Having a strategy does not require a related technology, because technology is tactical; solutions are strategic. The challenge for an organization may be too much content, or it may be process-related, e.g. the approval process for content as it moves through the publication cycle is not well-defined, or has a single point of failure in it that causes delays in publication. The solution is the strategy.
For the former, it may be to implement an enterprise content management solution; for the latter, it may be to sit down and hammer out a better process, and even to acquire and deploy a workflow or BPM (Business Process Management) solution that is better able to manage fluctuations in people and the process. The tactics are the technology; it's the how we're going to do it, as opposed to the what we're going to do.

CHALLENGE –> SOLUTION –> TECHNOLOGY

This is an important distinction: to separate solutions from technology, strategy from tactics. If the business declares that the risk of a data breach is too high to bear, the enterprise IT strategy is not to implement a specific technology but to discover and plug all the possible "holes" in the strategic lines of defense. The solution to a vulnerability in an application is "web application security." The technology may be a web application firewall (WAF), or it may be vulnerability scanning solutions run on pre-deployed code to identify potential vulnerabilities.

When we talk about strategic points of control we aren't necessarily talking about specific technology, but rather about solutions and those locations within the data center that are best able to be leveraged tactically for a wide variety of strategic solutions. The strategic trifecta is a good example of this model because it's based on the same concepts: that a strategy is driven by a business challenge or need and executed upon using technology. The solution is not the implementation; it's not the tactical response. Technology doesn't enter into the picture until we get down to the implementation, to the specific products and platforms we need to implement a strategy consistent with meeting the defined business goal or challenge.

The question remains whether "strategy trumps technology" or not, and what I was trying to impart is what a subsequent response said much more eloquently and concisely:

The question isn't which one trumps but how should they be aligned in order to provide value to the customer. -- Kathy Long

There shouldn't be a struggle between the two for top billing honors. They are related, after all; a strategy needs to be implemented, to be executed upon, and that requires technology. It's more a question of which comes first in a process that should be focused on solving a specific problem or meeting some business challenge. Strategy needs to be defined before implementation because if you don't know what the end goal is, you really can't claim victory or admit defeat. A solution is strategic; technology is tactical. This distinction can help IT by forcing more attention on the business and solutions layer, as it is at the strategic layer that IT is able to align itself with the business and provide greater value to the entire organization.

Related:
- Does strategy always trump technology?
- What CIOs Can Learn from the Spartans
- Operational Risk Comprises More Than Just Security
- The Strategy Not Taken: Broken Doesn't Mean What You Think It Means
- What is a Strategic Point of Control Anyway?
- Cloud is the How not the What
F5 Friday: Latency, Logging, and Sprawl

#v11 Logging, necessary for a variety of reasons in the data center, can consume resources and introduce undesirable latency. Avoiding that latency improves application performance and, in some cases, the quality of logs.

Logging. It's mandatory and, in some industries, critical. Logs are used not only for auditing and tracking but for debugging, for data mining and analysis, and, in some tiers of the architecture, for replication and synchronization of data. Logs are a critical component across the data center, of that there is no doubt. That's why it's particularly frustrating to know that the cost in terms of performance is also one of the highest, lagging only slightly behind graphics in terms of performance costs. Given that there is very little graphics-related processing that goes on in data center components, disk I/O leaps to the top of the stack when it comes to performance-impeding operations.

The latency introduced by writing to a log often impacts the overall performance experienced by end users because of the consumption of resources on the component by the logging operations. While generally out-of-band and thus non-blocking today, the consumption of resources can negatively impede performance by draining memory and using CPU cycles to perform its required tasks. Increasingly, as components are deployed in pairs, triples and more – owing to scaling out both physically and virtually – these logs also introduce "log sprawl," which can increase the cost associated with administration and make it more difficult to troubleshoot. After all, if you aren't sure through which instance of a device a specific request was sent, you can't easily find it in the log file.

For all these reasons, centralized and generally off-box logging for data center components is becoming more critical. Consider it "logging as a service" if you will. This is not a new concept; centralized syslog servers have long been leveraged to provide a centralized, easier-to-manage log service that can be leveraged by just about every data center component. For load balancing services, the need is to not only centralize web-related logs but to ensure that they are written as fast as possible, to keep up with today's demanding application environments. BIG-IP is no stranger to the need for high-speed, off-device logging and with v11 brings an open, high-speed application logging engine to bear.

BIG-IP HIGH-SPEED LOGGING ENGINE

One of the benefits of a unified, internal architecture is the ability to share improvements in the underlying platform across all products ultimately deployed on that platform. This is the case with TMOS, F5's core application delivery technology. By enabling TMOS with a high-speed logging engine capable of up to 200,000 UDP/TCP messages per second, all modules – LTM, GTM, APM, ASM, WA, etc. – deployed on the TMOS platform automatically gain the benefits. Support for both local and external (off-box) logging enables you to centralize the data in third-party logging engines and meet security and compliance requirements. That means you can, ostensibly, leverage the visibility of a strategic point of control in the network to perform logging of web requests (and responses if required) rather than spread the responsibility across what may be an unknown number of web servers.
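As a rough illustration of the "logging as a service" idea (this is not the BIG-IP high-speed logging engine itself, just a minimal sketch), the snippet below shows an application or proxy tier emitting log records over UDP to a central syslog service instead of writing to local disk. The hostname and port are assumptions:

```python
import logging
import logging.handlers

# Central, off-box log service (hypothetical address). UDP keeps emission cheap
# and non-blocking compared with synchronous writes to local disk.
syslog = logging.handlers.SysLogHandler(address=("logs.example.com", 514))
syslog.setFormatter(logging.Formatter(
    "app=%(name)s level=%(levelname)s msg=%(message)s"))

log = logging.getLogger("web-tier")
log.setLevel(logging.INFO)
log.addHandler(syslog)

# Every instance behind the load balancer logs to the same place,
# so there is no per-server "log sprawl" to reconcile later.
log.info("GET /checkout 200 12ms client=203.0.113.7")
```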
Consider that in a highly virtualized or cloud computing-based architecture, the number of servers required to meet current demand is variable, which makes collection of web-server-written logs more difficult unless an off-server log service is leveraged. That's because virtualized servers often simply write logs to the local disk, which may or may not be persistent enough to meet compliance – or operational – demands. It's also the case that some upstream infrastructure may modify the request and/or response, leaving logs with incomplete information. This is the case when an external application delivery controller acts as a Cookie gateway, a common function for adding security and consistency to web applications. Thus, logging at a strategic point, closest to the client, provides the most accurate picture of the request.

Consider, too, the impact of writing logs in the face of an attack. DDoS counts on the consumption of resources to drain server and network component capacity, and by increasing the number of requests a server has to handle, it also gets an added consumption bonus from the need to write to the log. This is true on upstream network components as well, which compounds the impact and drains more resources than necessary. By enabling high-speed logging on upstream devices, offloading responsibility to a log service, and eliminating the need for web servers to also write to disk, the impact of a concerted DDoS attack can be more effectively managed. And if you're going to use an off-server log service, it is more efficient to do so at a point upstream from the web servers and gain the benefit of reducing resource consumption on the servers. Eliminating the resource consumption required by logs on the web server can have a very positive impact on the performance and capacity of the web server, which, when combined with improvements in logging speed and reduced consumption on the BIG-IP, translates into faster web applications and a simplified log management strategy.

High-speed logging (HSL) is configurable using the GUI (via the Request Logging Profile) and supports the W3C extended log format. Happy Logging!
Applying 'Centralized Control, Decentralized Execution' to Network Architecture

#SDN brings to the fore some critical differences between the concepts of control and execution.

While most discussions with respect to SDN are focused on a variety of architectural questions (me included) and technical capabilities, there's another very important concept that needs to be considered: control and execution. SDN definitions include the notion of centralized control through a single point of control in the network, a controller. It is through the controller that all important decisions are made regarding the flow of traffic through the network, i.e. execution. This is not feasible, at least not in very large (or even just large) networks. Nor is it feasible beyond simple L2/3 routing and forwarding.

HERE COMES the SCIENCE (of WAR)

There is very little more dynamic than combat operations. People, vehicles, supplies – all are distributed across what can be very disparate locations. One of the lessons the military has learned over time (sometimes quite painfully through experience) is the difference between control and execution. This has led to decisions to employ what is called "Centralized Control, Decentralized Execution." Joint Publication (JP) 1-02, Department of Defense Dictionary of Military and Associated Terms, defines centralized control as follows: "In joint air operations, placing within one commander the responsibility and authority for planning, directing, and coordinating a military operation or group/category of operations." JP 1-02 defines decentralized execution as "delegation of execution authority to subordinate commanders."

Decentralized execution is the preferred mode of operation for dynamic combat operations. Commanders who clearly communicate their guidance and intent through broad mission-based or effects-based orders rather than through narrowly defined tasks maximize that type of execution. Mission-based or effects-based guidance allows subordinates the initiative to exploit opportunities in rapidly changing, fluid situations.
-- Defining Decentralized Execution in Order to Recognize Centralized Execution, Lt Col Woody W. Parramore, USAF, Retired

Applying this to IT network operations means a single point of control is contradictory to the "mission" and actually interferes with the ability of subordinates (strategic points of control) to dynamically adapt to rapidly changing, fluid situations such as those experienced in virtual and cloud computing environments. Not only does a single, centralized point of control (which in the SDN scenario implies control over execution through admittedly dynamically configured but rigidly executed policy) abrogate responsibility for adapting to "rapidly changing, fluid situations," but it also becomes the weakest link. Clausewitz, in the highly read and respected "On War," defines a center of gravity as "the hub of all power and movement, on which everything depends. That is the point against which all our energies should be directed." Most military scholars and strategists logically infer from the notion of a Clausewitzian center of gravity the existence of a critical weak link. If the "controller" in an SDN is the center of gravity, then it follows that it is likely a critical, weak link. This does not mean the model is broken, or poorly conceived, or a bad idea. What it means is that this issue needs to be addressed. The modern strategy of "Centralized Control, Decentralized Execution" does just that.
Centralized Control, Decentralized Execution in the Network

The major issue with the notion of a centralized controller is the same one air combat operations experienced in the latter part of the 20th century: agility, or more appropriately, the lack thereof. Imagine a large network fully adopting SDN as defined today. A single controller is responsible for managing the direction of traffic at L2-3 across the vast expanse of the data center. Imagine a node, behind a load balancer, deep in the application infrastructure, fails. The controller must respond and instruct both the load balancing service and the core network how to react, but first it must be notified.

It's simply impossible to recover from a node or link failure in 50 milliseconds (a typical requirement in networks handling voice traffic) when it takes longer to get a reply from the central controller. There's also the "slight" problem of network devices losing connectivity with the central controller if the primary uplink fails.
-- OpenFlow/SDN Is Not A Silver Bullet For Network Scalability, Ivan Pepelnjak (CCIE #1354 Emeritus), Chief Technology Advisor at NIL Data Communications

The controller, the center of network gravity, becomes the weak link, slowing down responses and inhibiting the network (and IT) from responding rapidly to evolving situations. This does not mean the model is a failure. It means the model must adapt to take into consideration the need to adapt more quickly. This is where decentralized execution comes in, and why predictions that SDN will evolve into an overarching management system rather than an operational one are likely correct.

There exist today, within the network, strategic points of control: locations within the data center architecture at which traffic (data) is aggregated, forcing all data to traverse them, and from which control over traffic and data is maintained. These locations are where decentralized execution can fulfill the "mission-based guidance" offered through centralized control. Certainly it is advantageous to both business and operations to centrally define and codify the operating parameters and goals of data center networking components (from L2 through L7), but it is neither efficient nor practical to assume that a single, centralized controller can both manage and execute on those goals.

What the military learned in its early attempts at air combat operations was that by relying on a single entity to make operational decisions in real time regarding the state of the mission on the ground, missions failed. Airmen, unable to dynamically adjust their actions based on current conditions, were forced to watch situations deteriorate rapidly while waiting for central command (the controller) to receive updates and issue new orders. Thus, central command has moved to issuing mission- or effects-based objectives and allowing the airmen (strategic points of control) to execute in a way that achieves those objectives, in whatever way (given a set of constraints) they deem necessary based on current conditions.

This model is highly preferable (and much more feasible given today's technology) than the one proffered today by SDN. It may be that such an extended model can easily be implemented by distributing a number of controllers throughout the network and federating them with a policy-driven control system that defines the mission, but leaves execution up to the distributed control points – the strategic control points.
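A minimal sketch of what that federated model might look like: a central authority distributes mission-style objectives (desired outcomes and constraints), while each local enforcement point makes its own real-time decision, such as failing over immediately without a round trip to the controller. The class and policy names below are illustrative assumptions, not any particular product's API:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MissionPolicy:
    """Centrally defined intent: what to achieve, not how to achieve it."""
    service: str
    max_failover_ms: int = 50           # a constraint, not a procedure
    preferred_pool: str = "primary"

@dataclass
class LocalControlPoint:
    """Strategic point of control: executes locally, reports upstream."""
    name: str
    pools: Dict[str, List[str]]
    policy: MissionPolicy = None
    events: List[str] = field(default_factory=list)

    def accept_policy(self, policy: MissionPolicy) -> None:
        self.policy = policy            # centralized control

    def on_member_failure(self, pool: str, member: str) -> str:
        # Decentralized execution: act now, tell the controller afterwards.
        self.pools[pool].remove(member)
        target_pool = pool if self.pools[pool] else "secondary"
        self.events.append(f"failover {member} -> pool '{target_pool}'")
        return target_pool

controller_policy = MissionPolicy(service="web", max_failover_ms=50)
edge = LocalControlPoint("dc1-adc", {"primary": ["10.0.0.11"], "secondary": ["10.0.1.11"]})
edge.accept_policy(controller_policy)
print(edge.on_member_failure("primary", "10.0.0.11"))   # -> 'secondary'
print(edge.events)
```

The controller never appears in the failure path; it only defines the objective the local point must satisfy, which is the essence of mission-based guidance applied to the network.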
SDN is new, it's exciting, and it has the potential to be the "next big thing." Like all nascent technology and models, it will go through some evolutionary massaging as we dig into it and figure out where and why and how it can be used to its greatest potential and to organizations' greatest advantage. One thing we don't want to do is replicate erroneous strategies of the past. No network model abrogating all control over execution has ever really worked. All successful models have been distributed, federated models in which control may be centralized, but execution is decentralized. Can we improve upon that? I think SDN does in its recognition that static configuration is holding us back. But its decision to rein in all control while addressing that issue may very well give rise to new issues that will need resolution before SDN can become a widely adopted model of networking.

Related:
- QoS without Context: Good for the Network, Not So Good for the End User
- Cyclomatic Complexity of OpenFlow-Based SDN May Drive Market Innovation
- SDN, OpenFlow, and Infrastructure 2.0
- OpenFlow/SDN Is Not A Silver Bullet For Network Scalability
- Prediction: OpenFlow Is Dead by 2014; SDN Reborn in Network Management
- OpenFlow and Software Defined Networking: Is It Routing or Switching?
- Cloud Security: It's All About (Extreme Elastic) Control
- Ecosystems are Always in Flux
- The Full-Proxy Data Center Architecture
F5 Friday: The Dynamic Control Plane

It's not just cloud computing and virtualization that introduce volatility into the data center.

The natural state of cloud computing is one of constant change: applications, services and users interacting in ways that constantly change the landscape of the data center. But it isn't just the volatility of cloud computing and virtualization that makes traditional data center architectures brittle and more apt to fail. It's the constant barrage of users, devices, and locations against a static data center configuration that makes a traditional architecture fragile and inefficient.

Pressures are mounting, both from within and without, on data center infrastructure to assure availability, security and high performance of applications for every user, regardless of location or device type. With the rapidly changing landscape of devices, smartphones and user locations, this means an ever-changing array of policies governing access and assuring availability. The processes by which these are enforced are not sustainable in the face of such growth. The burden on operations, network and security teams can only grow under such a static model.

What's needed to address the dynamism of the user and application environment is a dynamic infrastructure; a dynamic services model that adapts to the increasingly complex set of variables that can impact the way in which infrastructure should treat each and every individual application request – and response. It is this characteristic, this agility in infrastructure, that is critical to the implementation of a dynamic data center and an agile operational posture. Without the ability to adapt in an intelligent and programmable fashion, operations and infrastructure cannot hope to scale along with the growing demands on the network, application and storage infrastructure.

THE DYNAMIC SERVICES MODEL

The long-term answer to the challenge of efficient scalability in infrastructure operations lies in an architectural approach that anticipates change and enables rapid adaptation to any situation. This ideal state replaces point-to-point connections with a flexible dynamic services model that serves as an intelligent proxy between users and resources.

It is important to stress that the dynamic services model is an ecosystem approach; it is not a single vendor solution or a point product. The programmatic and procedural resources provided to integrate, coordinate, and collaborate with other ecosystem elements are defining characteristics of F5's dynamic control plane. This is evident in the illustrated solutions with VMware and Gomez, but these are only a few examples of the F5 dynamic control plane solving real-world problems today. F5 maintains formal relationships with leading technology providers including Dell, HP, IBM, ExtraHop, Infoblox, NetApp, CA, Symantec, webMethods, Secure Computing, RSA, WhiteHat Security, Splunk, TrendMicro, ByteMobile, Microsoft, and many others. These relationships include tested and documented integration with F5's solutions and, as such, can be thought of as an extension of the dynamic control plane architecture.
-- Ken Salchow, "Unleashing the True Potential of On-Demand IT"

Within F5 we refer to this strategic point of control as the dynamic control plane: a platform that is adaptable, programmable and intelligent with regard to both its run-time and configuration-time operations.
The full-proxy nature of F5's underlying application delivery platform, TMOS, provides an interconnected and contextually aware environment in which requests and responses can be collaboratively intercepted, inspected and, if necessary, modified to assure the highest levels of availability, security and performance. By providing a common high-speed interconnect that shares context, F5 BIG-IP solutions are all capable of understanding not only the context of each individual request and response, but also the business and operational requirements placed upon the data, in a way that allows the platform to make real-time decisions regarding policy enforcement.

From a deployment perspective, the dynamic control plane enables an agile operational posture; one that integrates via a standards-based, service-enabled API to provide the means by which BIG-IP can be integrated and collaborate with other data center management platforms to provide automated provisioning, context-aware monitoring, and infrastructure as a service. By enabling the platform with a common set of remote management interfaces, BIG-IP can be managed, monitored and informed through collaborative technologies that increase its ability to make informed decisions regarding ingress and egress traffic, such that the appropriate policies are enforced at the appropriate time for the appropriate end user, device and location.

Combining end-user, network, and application awareness means BIG-IP has the data necessary to adapt in real time to conditions that exist now rather than conditions as they were five, ten or thirty minutes in the past. While historical trending is helpful in setting appropriate policies, the ability to react quickly means unanticipated variables can be accounted for more rapidly, which means less time in which possible outages or breaches may occur. The results of such collaboration can be seen in joint solutions such as:

- Building an Enterprise Cloud with F5 and IBM – F5 Tech Brief
- F5 and Infoblox Integrated DNS Architecture – F5 Tech Brief
- F5 and Microsoft Delivering IT as a Service
- Achieving Enterprise Agility in the Cloud (VMware, F5 and BlueLock)

All of these solutions (and others) take advantage of F5's dynamic control plane, both for automating the processes necessary to achieve the level of dynamism required as part of the solution and for implementing the decision-making processes required at run time to address the dynamism that drives the need for those processes to exist.

IT'S AN AGILE THING

What we're really trying to say when we talk about addressing dynamism – whether internal to the data center, e.g. auto-scaling applications, or external to the data center, e.g. new clients, locations and devices – is the need for a more agile operational posture. We're trying to get to the point where IT has the ability to react to conditions as they change in a way that enhances the performance, availability and security of the data center as a whole. It's about programmability and processability, about being able to specify policies to address "what-if" scenarios and then trusting that those policies will be enforced in the event they come to fruition. It's about making infrastructure as agile as the conditions under which it must constantly deliver applications, and doing so as efficiently as possible. A dynamic services model enables operations to assume a more agile posture regarding deployment and delivery of applications. F5's dynamic control plane makes it possible to do so efficiently, intelligently and collaboratively.
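As a hedged illustration of programmatic, configuration-time control via a management API: on recent BIG-IP software this kind of task can be done through the iControl REST interface, but the endpoint path, pool name, address and credentials below are assumptions that vary by version and environment. This is a sketch, not a deployment guide:

```python
import requests

BIGIP = "https://bigip.example.com"        # hypothetical management address
session = requests.Session()
session.auth = ("admin", "example-password")
session.verify = False                      # lab sketch only; verify certificates in production

# Automated provisioning step: a newly started application instance
# registers itself with the delivery tier by joining an existing pool.
resp = session.post(
    f"{BIGIP}/mgmt/tm/ltm/pool/~Common~app_pool/members",
    json={"name": "10.1.2.30:8080"},
)
resp.raise_for_status()
print("pool member added:", resp.json().get("fullPath"))
```

The same pattern applies in reverse during scale-down: remove the member before the instance is terminated so no requests are routed to a node that no longer exists.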
Related:
- What CIOs Can Learn from the Spartans
- Infrastructure 2.0 + Cloud + IT as a Service = An Architectural Parfait
- The F5 Dynamic Services Model
- Unleashing the True Potential of On-Demand IT
- What is a Strategic Point of Control Anyway?
- Cloud is the How not the What
- Cloud Control Does Not Always Mean 'Do it yourself'
- The Strategy Not Taken: Broken Doesn't Mean What You Think It Means
- Data Center Feng Shui: Process Equally Important as Preparation
- Some Services are More Equal than Others
- The Battle of Economy of Scale versus Control and Flexibility
F5 Friday: If Data is King then Storage Virtualization is the Castellan

The storage virtualization layer is another strategic point of control in the data center where costs can be minimized and resource utilization maximized.

In olden times of lore, the king may have been top dog, but it was the castellan through which one had to go to gain an audience or access to any one of his holdings. The castellan was a position of immense power and influence in the medieval hierarchy, responsible for managing the king's castles and lands wherever they might be. In modern times, if data is king then storage virtualization must be the castellan: the system through which data is managed and accessed, regardless of location. It's a strategic point of control, an aggregation point, that affords organizations the opportunity to architecturally apply policies governing the storage and access of data.

Tiering policies – whether across local storage systems or into the cloud – are best applied at a point in the architecture where aggregation via virtualization occurs. Not unlike global and local application delivery systems, storage virtualization systems like F5 ARX "virtualize" resources and provide more seamless scalability and management of those resources. With global and local application delivery those resources are most often applications, but they also include infrastructure. With F5 ARX, those resources are storage systems – some costly, some not, some on-premise, some off. Aggregating those resources and presenting a "virtual" view of them to end users means migration of data can be performed seamlessly, without disruption to the end user. It's a service-oriented approach to storage architectures that affords agility and automation in carrying out operational tasks related to data management. That operational automation is increasingly important as the volume of data being stored, accessed and migrated to secondary and archive storage systems increases. Manual operations to archive, back up or replicate data would overwhelm storage professionals in the data center if they tried to keep pace with the explosive data growth experienced today.

DATA is KING

This isn't the first time we've heard the announcement: data is growing, astonishingly fast. Not just the data flowing over the wires, but data at rest, in storage. It's an exponential growth caused in part by retention policies atop the reality of growing numbers of users creating more and more data. IBM calls out that "83 percent of CIOs have visionary plans that include business intelligence and analytics, followed by mobility solutions (74 percent) and virtualization (68 percent)." Cloud computing shot up in priority, selected by 45 percent more CIOs than in the 2009 study. But not everyone speaks the CIOs' language. Translation – it's no longer about the applications, it's all about the data:

- How to manage the data (there's more data than ever to manage)
- How to leverage the data (information about consumers, markets, opportunities)
- How to integrate the data (across applications and devices)
- How to store the data (cloud, cloud, cloud)
- How to access the data (especially from mobile devices)
-- Results of IBM's CIO Study — Data is King

There's more data, more often, that needs to be stored for more time. That means more disk, more network, and ultimately more costs. That's where ARX – storage/file virtualization – comes in.
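To ground the tiering idea, here is a deliberately simplified sketch of the kind of age-based policy a file virtualization tier automates transparently. This is not ARX code; the paths and age threshold are assumptions, and a real virtualization layer preserves the client-facing namespace while the data moves between tiers:

```python
import os
import shutil
import time

PRIMARY_TIER = "/mnt/tier1"        # fast, expensive storage (assumed mount point)
SECONDARY_TIER = "/mnt/tier2"      # cheaper capacity tier (assumed mount point)
AGE_THRESHOLD_DAYS = 90            # example policy: demote files untouched for 90 days

def apply_tiering_policy() -> int:
    """Move cold files from the primary tier to the secondary tier."""
    cutoff = time.time() - AGE_THRESHOLD_DAYS * 86400
    moved = 0
    for root, _dirs, files in os.walk(PRIMARY_TIER):
        for name in files:
            src = os.path.join(root, name)
            if os.stat(src).st_atime < cutoff:           # not accessed recently
                rel = os.path.relpath(src, PRIMARY_TIER)
                dst = os.path.join(SECONDARY_TIER, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)                     # data moves; the namespace should not
                moved += 1
    return moved

if __name__ == "__main__":
    print(f"demoted {apply_tiering_policy()} files to the secondary tier")
```

The value of doing this at the virtualization layer rather than in scripts like this one is precisely that clients never see the move: the aggregation point keeps presenting one logical view while data migrates underneath it.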
THE TAMING of the DATA

Tiering, consolidation and simplified access strategies can make the menagerie of data threatening to overwhelm the data center more manageable, in both time and money. Operational automation is as imperative to storage as it is to application deployment as a tactic to address the increasing demands for flexibility and responsiveness across all of IT. Internal and external forces of change are driving IT organizations to get more efficient and to manage the resources at their disposal more effectively, in such a way as to minimize total cost of ownership as well as operational expense. Applying intelligent, adaptable and more flexible policies at strategic points of control within the data center architecture can alleviate many expenses associated with long-term management and control of data – both in flight (application delivery) and at rest (storage).

This data explosion is not limited to large enterprises. Mid-sized enterprises are deluged with data as well. Keeping up with growth rates threatens to overrun budgets and overwhelm staff. Traditionally enterprise-class solutions are becoming more and more necessary at mid- and even small-sized organizations to manage that data more efficiently.

In today's rapidly digitizing economy, small and mid-sized enterprises are dealing with exploding amounts of digital content and a growing range of data management challenges.
-- Richard Villars, VP of Storage and IT Executive Strategies at IDC

Unfortunately, even though mid-sized organizations may have enterprise-class needs, they are still constrained by mid-sized business budgets. Offerings capable of providing enterprise-class features and performance on a mid-sized budget are imperative to helping organizations address their burgeoning storage management needs. F5 now offers the ARX1500 and ARX2500 appliances, providing small and mid-sized enterprises with advanced data management capabilities at attractive price points, along with superior scalability and performance levels. Combined with F5 ARX Cloud Extender, which provides the means by which storage as a service can be leveraged in conjunction with storage virtualization and management solutions, the new ARX appliances offer a compelling solution for mid-sized organizations in need of a more holistic and effective data management strategy. The financial benefits of cloud computing combined with operational improvements from a comprehensive storage management strategy can provide a needed boost to enterprises of all sizes.

More on F5 ARX1500 and ARX2500:
- F5 ARX 1500 and 2500
- F5's New ARX Platforms Help Organizations Reap the Benefits of File Virtualization
- Network World – F5 Rolls Out New File Virtualization Appliances
- VAR Guy – F5 Launches New ARX Platforms for File Virtualization
- Success Strategies for Storage Data Migration – IDC Analyst Q&A
- ARX Series Datasheet
- F5 Friday: F5 ARX Cloud Extender Opens Cloud Storage
- F5 Friday: ARX VE Offers New Opportunities
- F5 Friday: The More The Merrier
- Disk May Be Cheap but Storage is Not
- All F5 Friday Posts on DevCentral
- Tiering is Like Tables, or Storing in the Cloud
- Tier Swapping Rack Space for Rack Space
F5 and Traffix: When Worlds Collide

#mwc12 #traffix #mobile Strategic points of control are critical to managing the convergence of technology in any network – enterprise or carrier.

What happens when technology converges? When old meets new? A fine example is what has happened in the carrier space as voice and data services increasingly meet on the same network, each carrying unique characteristics forward from the older technology from which it sprang. In the carrier space, having moved away from older communications technology does not mean having left behind core technology concepts. Though voice may be moving to IP with the advent of LTE/4G, it still carries with it the notion of signaling as a means to manage communication and users, and the impact on networks from that requisite signaling mechanism is significant.

Along with the well-discussed and often-noted explosive growth of mobile and its impact on the enterprise comes a less-discussed and rarely noted explosive growth of signaling traffic and its impact on service providers. Enterprise experience with voice and signaling remains largely confined to SIP-focused deployments on a scale much smaller than that of the service provider. Hence the term "carrier-grade" to indicate the much more demanding environment. The number of signaling messages in a 4G network associated with a 3-minute IP voice call with data, for example, is 520. The same voice call today requires only 3. That exponential growth will put increasing pressure on carriers and require massive scale of infrastructure to support.

All that signaling traffic in carrier networks occurs via Diameter, the standard agreed upon by the 3GPP (3rd Generation Partnership Project) for network signaling in all 4G/LTE networks. Diameter is to carrier networks what HTTP is to web applications today: it's the glue that makes it all happen. As the preeminent Diameter routing agent (DRA) for 3G, 4G/LTE and IMS environments, Traffix' solutions are fluent in the signaling language used by carriers across the globe to identify users, manage provisioning, and authorize access to services and networks. One could reasonably describe Diameter as the identity and access management (IAM) technology of choice for service providers. When a user does anything on a 4G network, Diameter is involved somehow.

What the Traffix Signaling Delivery Controller (which is both a highly capable DRA and a Diameter Edge Agent (DEA)) offers is a strategic point of control in the service provider's network, serving as an intelligent tier that enables interoperability, security, scale, and flexibility in how signaling traffic is managed and optimized. That should sound familiar, as F5 is no stranger to similar responsibilities in enterprise and web-class data centers today. F5, with its application and control plane technologies, serves as an intelligent tier in the network that ensures interoperability, security, scale, and flexibility for how applications and services are delivered, secured, and optimized. What service providers do with Diameter – user identification, permission to roam, authorization to use certain networks, basically anything a user does on a 4G network – is akin to what F5 does with application delivery technology in the data center. F5's vision has been to create a converged carrier architecture that unifies IP services end-to-end across the application, data, and control planes.
Diameter is a foundational piece of that puzzle, just as any-IP support is critical to providing that same converged application services approach in the data center – a data center routing agent, if you will. Both approaches are ultimately about context, control, and collaboration.

CONVERGENCE BREEDS FRAGMENTATION

These three characteristics (context, control, collaboration) are required for a dynamic data center to handle the volatility inherent in emerging data center models as well as the convergence of voice and data in service provider networks. But as technologies converge, supporting infrastructure tends to fragment. This dichotomy is clearly present even in the enterprise, where unified communications (UC) implementations are creating chaos. In its early days, Diameter deployment in service provider networks experienced similar trends, and it was the development of the DRA that resolved the issue, bringing order out of chaos and providing a strategic point of control through which subscriber activity could be more efficiently managed.

Out of chaos, order. That's the value Traffix brings to carrier networks with its Signaling Delivery Controller (SDC). Traffix solutions optimize signaling traffic, offering service provider operators scalability, availability, visibility, interoperability, and more in an operationally consistent solution. With the number of mobile devices predicted to exceed the world population in the next year, and the advanced services those devices provide driving exponential growth in signaling traffic, the need to optimize signaling traffic is top of mind for most service providers today.

When diverse systems converge, their infrastructure must also converge in terms of support for the resulting unified system. This is particularly true as mobile and virtual desktops become more prevalent and bring with them their own unique delivery challenges for both service provider and data center networks. The two worlds are colliding, out there on the Internet and inside data centers, with more and more IP-related traffic requiring management within carrier networks, and more and more traditionally carrier-network traffic, such as voice, being seen inside the data center. What both worlds need is a fully end-to-end IP core infrastructure solution – one that can support IP and Diameter and scale regardless of whether the need is enterprise-class or carrier-grade. One that maintains context and manages access to resources across both voice and data, and does so both seamlessly and transparently.

Bringing together F5's control plane with that of Traffix yields a holistic approach to controlling a converged voice-data network, one that enhances critical network functions across the application, control, and data planes. Traffix aligns well with F5's overall vision of enabling intelligence in the network and providing context and control for all types of network services – whether carrier or enterprise.

Additional Resources:
- F5 Networks Acquires Traffix Systems
- The LTE signaling challenge
- F5 Circles The Wagons and Adds Diameter to its Portfolio
- Traffix Systems
- F5 Sends LTE Signal With Acquisition
- F5 Friday: The Dynamic Control Plane
How to Build a Silo Faster: Not Enough Ops in your Devops

We need to remember that operations isn't just about deploying applications; it's about deploying applications within a much larger, interdependent ecosystem.

One of the key focuses of devops – that hardy movement that seeks to bridge the gap between development and operations – is deployment. Repeatable deployment of applications, in particular, as a means to reduce the time and effort that goes into deploying applications into a production environment. But the focus is primarily on the automation of application deployment; on repeatable configuration of application infrastructure such that it reduces time, effort, and human error. Consider a recent edition of The Crossroads, in which CM Crossroads Editor-in-Chief Bob Aiello and Sasha Gilenson, CEO and co-founder of Evolven Software, discuss the challenges of implementing and supporting automated application deployment:

So, as you have mentioned, the challenge is that you have so many technologies and so many moving pieces that are interdependent, and today each of the pieces comes with a lot of configuration. To give you a specific example, the WebSphere application server, which is frequently used in the financial industry, comes with something like 16,000 configuration parameters. Oracle has hundreds and hundreds – about 1,200 parameters – just at the level of database server configuration. So what happens is that there is a lot of information that you still need to collect, and you need to centralize it.
-- Sasha Gilenson, CEO and co-founder of Evolven Software

The focus is overwhelmingly on automated application deployment. That's a good thing, don't get me wrong, but there is more to deploying an application. Today there is still little focus beyond the traditional application infrastructure components. If you peruse some of the blogs and articles written on the subject by forerunners of the devops movement, you'll find that most of the focus remains on automating application deployment as it relates to the application tiers within a data center architecture. There's little movement beyond that to include the other data center infrastructure that must be integrated and configured to support the successful delivery of applications to their ultimate end users.

That missing piece of the devops puzzle is an important one, as the operational efficiencies sought by enterprises leveraging cloud computing, virtualization and dynamic infrastructure in general include the ability to automate and integrate that infrastructure into a more holistic operational strategy, one that addresses all three core components of operational risk: security, availability and performance. It is at the network and application network infrastructure layers where we see a growing divide between supply and demand. On the demand side we see increases in demand for network and application network resources such as IP addresses, delivery and optimization services, and firewall and related security services. On the supply side we see a fairly static level of resources (people and budgets) that simply cannot keep up with the increasing demand for services and the services management necessary to sustain the growth of application services.

INFRASTRUCTURE AUTOMATION

One of the key benefits that can be realized in a data center's evolution from today's model to tomorrow's dynamic models is operational efficiency. But that efficiency can only be achieved by incorporating all the pieces of the puzzle.
That means expanding the view of devops from the application-deployment-centric view of today into the broader, supporting network and application network domain. Understanding the interdependencies and collaborative relationships of the delivery process is necessary to fully realize the efficiency gains proposed to be the real benefit of highly virtualized and private cloud architectural models. This is actually more important than you might think, as automating the configuration of, say, WebSphere in an isolated, application-tier-only operational model may be negatively impacted in later processes when infrastructure is configured to support the deployment. Understanding the production monitoring and routing/switching policies of delivery infrastructure such as load balancers, firewalls, identity and access management and application delivery controllers is critical to ensure that the proper resources and services are configured on the web and application servers. Operations-focused professionals aren't off the hook, either, as understanding the application from a resource consumption and performance point of view will greatly further the ability to create and subsequently implement the proper algorithms and policies in the infrastructure necessary to scale efficiently.

Consider the number of "touch points" in the network and application network infrastructure that must be updated and/or configured to support an application deployment into a production environment (a sketch of automating a few of these alongside the application deployment itself follows at the end of this post):

- Firewalls
- Load balancers / application delivery controller
  - Health monitoring
  - Load balancing algorithm
  - Failover
  - Scheduled maintenance window rotations
- Application routing / switching
- Resource obfuscation
- Network routing
- Network layer security
- Application layer security
- Proxy-based policies
- Logging
- Identity and access management
  - Access to applications by user, device, location, and combinations of the above
- Auditing and logging on all devices
- Routing tables (where applicable) on all devices
- VLAN configuration / security on all applicable devices

The list could go on much further, depending on the breadth and depth of infrastructure support in any given data center. It's not a simple process at all, and the "checklist" for a deployment on the operational side of the table is as lengthy and complex as it is on the development side. That's especially true in a dynamic or hybrid environment, where resources requiring integration may themselves be virtualized and/or dynamic. While the number of parameters needing configuration in a database, as mentioned by Sasha above, is indeed staggering, so too are the parameters and policies needing configuration in the network and application network infrastructure. Without a holistic view of applications as just one part of the entire infrastructure, configurations may need to be unnecessarily changed during infrastructure service provisioning, and infrastructure policies may not be appropriate to support the business and operational goals specific to the application being deployed.

DEVOPS or OPSDEV

Early on, Alistair Croll coined the term "web ops" for the concept of managing applications in conjunction with their supporting infrastructure. That term and concept eventually morphed into devops and has been adopted by many of the operational admins who must manage application deployments. But it is becoming focused on supporting application lifecycles through ops, with very little attention being paid to the other side of the coin, which is ops using dev to support infrastructure lifecycles.
In other words, the gap that drove the concept of automation and provisioning and integration across the infrastructure – across the network and application network infrastructure – still exists. What we're doing, perhaps unconsciously, is simply enabling ourselves to build the same silos that existed before, only faster and more efficiently. The application is still woefully ignorant of the network, and vice versa. And yet a highly virtualized, scalable architecture must necessarily include what are traditionally "network-hosted" services: load balancing, application switching, and even application access management. This is because at some point in the lifecycle both the ability to perform and the economy of scale of integrating web and application services with their requisite delivery infrastructure become an impediment to the process if accomplished manually.

By 2015, tools and automation will eliminate 25 percent of labor hours associated with IT services. As the IT services industry matures, it will increasingly mirror other industries, such as manufacturing, in transforming from a craftsmanship to a more industrialized model. Cloud computing will hasten the use of tools and automation in IT services as the new paradigm brings with it self-service, automated provisioning and metering, etc., to deliver industrialized services with the potential to transform the industry from a high-touch custom environment to one characterized by automated delivery of IT services. Productivity levels for service providers will increase, leading to reductions in their costs of delivery.
-- Gartner Reveals Top Predictions for IT Organizations and Users for 2011 and Beyond

Provisioning and metering must include more than just the applications and their immediate infrastructure; they must reach outside their traditional demesne and take hold of the network and application network infrastructure simply to sustain the savings achieved by automating much of the application lifecycle. The interdependence that exists between applications and "the network" must not only be recognized, but explored and better understood, such that additional efficiencies in delivery can be achieved by applying devops to core data center infrastructure. Otherwise we risk building even taller silos in the data center, and what's worse is that we'll be building them even faster and more efficiently than before.
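As promised above, here is a minimal, hypothetical sketch of what "more ops in your devops" might look like: a single deployment routine that treats a few network-hosted touch points (load balancing pool, health monitor, firewall rule, logging) as first-class steps alongside the application itself. The function names are placeholders, not any specific tool's API:

```python
from dataclasses import dataclass

@dataclass
class AppRelease:
    name: str
    version: str
    instances: list            # e.g. ["10.1.2.30:8080", "10.1.2.31:8080"]
    health_check_path: str = "/healthz"

def deploy_application(release: AppRelease) -> None:
    """Placeholder: push code and configuration to the application tier."""
    print(f"deploying {release.name} {release.version} to {len(release.instances)} instances")

def configure_delivery_tier(release: AppRelease) -> None:
    """Placeholder: the 'ops' half (LB pool, monitor, firewall, logging)."""
    print(f"creating pool {release.name}_pool with members {release.instances}")
    print(f"attaching health monitor on {release.health_check_path}")
    print(f"opening firewall for {release.name}_vip -> {release.name}_pool")
    print("pointing request logging at the central log service")

def deploy(release: AppRelease) -> None:
    # One pipeline, both halves: the silo only comes back if we stop after
    # the first call and leave the rest to a ticket queue.
    deploy_application(release)
    configure_delivery_tier(release)

deploy(AppRelease("storefront", "2.4.1", ["10.1.2.30:8080", "10.1.2.31:8080"]))
```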
Agile Operations: A Formula for Just-In-Time Provisioning

One of the ways in which traditional architectures and deployment models are actually superior (yes, I said superior) to cloud computing is in provisioning. Before you label me a cloud heretic, let me explain. In traditional deployment models, capacity is generally allocated based on anticipated peaks in demand. Because the time required to acquire, deploy, and integrate hardware into the network and application infrastructure is long, this process is planned for and well understood, and the resources required are in place before they are needed. In cloud computing, the benefit is that the time required to acquire those resources contracts to virtually nothing, but that makes capacity planning much more difficult. The goal is just-in-time provisioning – resources are not provisioned until you are sure you're going to need them, because part of the value proposition of cloud and highly virtualized infrastructure is that you don't pay for resources until you need them. But it's very hard to provision just-in-time, and sometimes the result will end up being almost-but-not-quite-in-time. Here's a cute [whale | squirrel | furry animal] to look at until service is restored. While fans of Twitter's fail whale are loyal, and everyone will likely agree its inception and subsequent use bought Twitter more than a bit of patience with its oftentimes unreliable service, not everyone will be as lucky or have customers as understanding as Twitter's. We'd all really rather prefer not to see the Fail Whale, regardless of how endearing he (she? it?) might be. But we also don't want to overprovision and potentially end up spending more money than we need to. So how can these two needs be balanced?
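One way to answer that question is to provision not when capacity runs out, but when demand projected over the provisioning lead time would run it out. The sketch below is a rough formula only, with assumed numbers; in practice the growth rate would come from monitoring data and the lead time from measured boot and warm-up times.

```python
def should_provision(current_rps: float,
                     growth_rps_per_min: float,
                     capacity_rps: float,
                     lead_time_min: float,
                     headroom: float = 0.2) -> bool:
    """Trigger provisioning when demand projected at the end of the
    provisioning lead time would eat into the configured headroom."""
    projected_rps = current_rps + growth_rps_per_min * lead_time_min
    return projected_rps >= capacity_rps * (1.0 - headroom)

# Hypothetical numbers: 800 req/s now, growing 30 req/s per minute,
# 1,200 req/s of capacity, and 10 minutes to boot and warm a new instance.
if should_provision(current_rps=800, growth_rps_per_min=30,
                    capacity_rps=1200, lead_time_min=10):
    print("Provision another instance now, not when the fail whale appears.")
```

The trade-off lives in the headroom and lead-time parameters: too generous and you're back to overprovisioning, too tight and you're back to almost-but-not-quite-in-time.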
F5 Friday: Ops First Rule

#cloud #microsoft #iam "An application is only as reliable as its least reliable component"

It's unlikely there's anyone in IT today who doesn't understand the role of load balancing in scaling. Whether cloud or not, load balancing is the key mechanism through which load is distributed to ensure horizontal scale of applications. It's also unlikely there's anyone in IT who doesn't understand the relationship between load balancing and high availability (reliability). High-availability (HA) architectures are almost always implemented using load balancing services to ensure seamless transition from one service instance to another in the event of a failure. What's often overlooked is that scalability and HA aren't important just for applications. Services – whether application- or network-focused – must also be reliable. It's the old "only as strong as the weakest link in the chain" argument. An application is only as reliable as its least reliable component – and that includes the services and infrastructure upon which that application relies. This is – or should be – the ops first rule: the rule that guides the design of data center architectures. This requirement becomes more and more obvious as emerging architectures combining the data center and cloud computing are implemented, particularly when federating identity and access services. That's because it is desirable to maintain control over the identity and access management processes that authenticate and authorize use of applications no matter where those applications may be deployed. Such an architecture relies heavily on the corporate identity store as the authoritative source of both credentials and permissions. This makes the corporate identity store a critical component in the application dependency chain, one that must necessarily be made as reliable as possible. Which means you need load balancing. A good example of how this architecture can be achieved is found in BIG-IP load balancing support for Microsoft's Active Directory Federation Services (AD FS).

AD FS and F5 Load Balancing

Microsoft's Active Directory Federation Services (AD FS) server role is an identity access solution that extends the single sign-on (SSO) experience for directory-authenticated clients (typically provided on the intranet via Kerberos) to resources outside the organization's boundaries, such as cloud computing environments. To ensure high availability, performance, and scalability, the F5 BIG-IP Local Traffic Manager (LTM) can be deployed to load balance an AD FS server farm. There are several scenarios in which BIG-IP can load balance AD FS services:

1. To enable reliability of AD FS for internal clients accessing external resources, such as those hosted in Microsoft Office 365. This is the simplest of the architectures and the most restrictive in terms of access for end users, as it is limited to internal clients only.

2. To enable reliability of AD FS and AD FS proxy servers, which provide external end-user SSO access to both internal federation-enabled resources and partner resources like Microsoft Office 365. This is a more flexible option, as it serves both internal and external clients.

3. To replace the AD FS proxy servers required for external end-user SSO access with BIG-IP Access Policy Manager (APM), which eliminates a tier and enables pre-authentication at the perimeter, offering both the flexibility required (supporting both internal and external access) and a more secure deployment.
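Conceptually, the health monitoring that underpins all three scenarios boils down to probing each farm member and keeping only responsive members in rotation. The Python sketch below makes that behavior concrete; the hostnames are hypothetical, and in the actual architectures the probing and selection are performed by BIG-IP LTM itself, with far richer application-layer monitors, not by application code.

```python
import random
import socket

# Hypothetical AD FS farm members; in the scenarios above, health monitoring
# and member selection are handled by BIG-IP LTM, not by application code.
ADFS_MEMBERS = [("adfs1.example.com", 443), ("adfs2.example.com", 443)]

def is_up(host: str, port: int, timeout: float = 2.0) -> bool:
    """A crude TCP-level probe: a member is 'up' if the port accepts a connection.
    A real monitor would also verify an application-layer response."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_member():
    """Return a healthy farm member, mimicking monitor-driven load balancing."""
    healthy = [member for member in ADFS_MEMBERS if is_up(*member)]
    if not healthy:
        raise RuntimeError("no healthy AD FS members available")
    return random.choice(healthy)
```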
In all three scenarios, F5 BIG-IP serves as a strategic point of control in the architecture, assuring reliability and performance of the services upon which applications depend, particularly those of authentication and authorization. Using BIG-IP APM instead of AD FS proxy servers both simplifies the architecture and makes it more agile, because BIG-IP APM is inherently more programmable and flexible in terms of policy creation. BIG-IP APM, being deployed on the BIG-IP platform, can take full advantage of the context in which requests are made, ensuring that identity and access control go beyond simple credentials and take into consideration device, location, and other contextual clues that enable a more secure system of authentication and authorization. High availability – and ultimately scalability – is preserved for all services by leveraging the core load balancing and HA functionality of the BIG-IP platform. All components in the chain are endowed with HA capabilities, making the entire application more resilient and able to withstand minor and major failures.

Using BIG-IP LTM to load balance AD FS also serves as an adaptable and extensible architectural foundation for a phased deployment approach. Rolling out AD FS services for internal clients only makes sense as a pilot phase, and is the simplest in terms of implementation. Using BIG-IP as the foundation for such an architecture enables further expansion in subsequent phases, such as introducing BIG-IP APM in a phase-two implementation that brings flexibility of access location to the table. Further enhancements can then be made as context is included, enabling more complex and business-focused access policies to be implemented. Time-based restrictions on clients or locations can be deployed and enforced, as desired or needed by operations or business requirements.

Reliability is a Least Common Factor Problem

Reliability must be enabled throughout the application delivery chain to ultimately ensure the reliability of each application. Scalability is equally paramount for those dependent services, such as identity and access management, that are intended to be shared across multiple applications. While there are certainly many other load balancing services that could be used to enable reliability of these services, an extensible and highly scalable platform such as BIG-IP is required to ensure both reliability and scalability of shared services upon which many applications rely. The advantage of a BIG-IP-based application delivery tier is that its core reliability and scalability services extend to any of the many services that can be deployed on it. By simplifying the architecture through application delivery service consolidation, organizations further enjoy the benefits of operational consistency that keep management and maintenance costs down. Reliability is a least common factor problem, and the Ops First Rule should be applied when designing a deployment architecture to assure that all services in the delivery chain are as reliable as they can be.
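The least-common-factor point can be made with simple arithmetic: in a serial dependency chain, availabilities multiply, so the weakest component caps the whole application. The figures below are illustrative assumptions only, not measurements of any particular deployment.

```python
# Availability of a serial dependency chain is the product of its parts.
# All figures are assumed for illustration.
components = {
    "application servers": 0.999,
    "identity store (no load balancing)": 0.99,
    "network services": 0.9995,
}

chain_availability = 1.0
for availability in components.values():
    chain_availability *= availability

downtime_hours = (1.0 - chain_availability) * 365 * 24
print(f"End-to-end availability: {chain_availability:.4f} "
      f"(~{downtime_hours:.0f} hours of downtime per year)")

# Raise the weakest link, e.g., by load balancing the identity store.
components["identity store (no load balancing)"] = 0.999
improved = 1.0
for availability in components.values():
    improved *= availability
print(f"With the identity tier made redundant: {improved:.4f}")
```

The chain inherits the 99 percent of its weakest tier; making the identity tier redundant recovers most of the lost availability, which is exactly why shared services deserve load balancing first.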