F5 BIG-IP Platform Security
When creating any security-enabled network device, development teams must fully investigate the security of the device itself to ensure it cannot be compromised. A gate provides no security to a house if the gap between the bars is large enough to drive a truck through. Many highly effective exploits have breached the very software and hardware designed to protect against them. If an attacker can breach the guards, they no longer need to worry about being stealthy: if one can compromise the box, one can probably compromise the code. F5 BIG-IP Application Delivery Controllers are positioned at strategic points of control to manage an organization’s critical information flow. In the BIG-IP product family and the TMOS operating system, F5 has built and maintained a secure and robust application delivery platform, implementing many different checks and counter-checks to ensure a secure network environment. Application delivery security includes protecting the customer’s Application Delivery Network (ADN) and performing mandatory and routine checks against the stack source code to provide internal security—and it starts with a secure Application Delivery Controller. The BIG-IP system and TMOS are designed so that hardware and software work together to provide the highest level of security. While there are many factors in a truly secure system, two of the most important are design and coding. Sound security starts early in the product development process. Before writing a single line of code, F5 Product Development goes through a process called threat modeling: engineers evaluate each new feature to determine what vulnerabilities it might create or introduce to the system. F5’s rule of thumb is that a vulnerability that takes one hour to fix in the design phase will take ten hours to fix in the coding phase and one thousand hours to fix after the product has shipped—so it’s critical to catch vulnerabilities during the design phase.
The sum of all these vulnerabilities is called the threat surface, which F5 strives to minimize. F5, like many companies that develop software, has invested heavily in training internal development staff on writing secure code. Security testing is time-consuming and a huge undertaking, but it’s a critical part of meeting F5’s stringent standards and its commitment to customers. While by no means an exhaustive list, the BIG-IP system has a number of features that provide heightened and hardened security: Appliance mode, iApp Templates, FIPS, and Secure Vault.

Appliance Mode

Beginning with version 10.2.1-HF3, the BIG-IP system can run in Appliance mode. Appliance mode is designed to meet the needs of customers in industries with especially sensitive data, such as healthcare and financial services, by limiting BIG-IP system administrative access to match that of a typical network appliance rather than a multi-user UNIX device. The optional Appliance mode “hardens” BIG-IP devices by removing the advanced shell (Bash) and root-level access. Administrative access is available through the TMSH (TMOS Shell) command-line interface and the GUI. When Appliance mode is licensed, any user that previously had access to the Bash shell will have access only to TMSH. File permissions in the root account home directory (/root) have been tightened for numerous files and directories; by default, new files are now readable and writeable only by their owner, and all directories are better secured.

iApp Templates

Introduced in BIG-IP v11, F5 iApps is a powerful new set of features in the BIG-IP system. It provides a new way to architect application delivery in the data center, and it includes a holistic, application-centric view of how applications are managed and delivered inside, outside, and beyond the data center.
iApps provide a framework that application, security, network, systems, and operations personnel can use to unify, simplify, and control the entire ADN with a contextual view and advanced statistics about the application services that support the business. iApps are designed to abstract the many individual components required to deliver an application by grouping these resources together in templates associated with applications; this alleviates the need for administrators to manage discrete components on the network. F5’s new NIST 800-53 iApp Template helps organizations become NIST-compliant. F5 has distilled the 240-plus pages of NIST guidance into a template with the relevant BIG-IP configuration settings—saving organizations hours of management time and resources.

Federal Information Processing Standards (FIPS)

Developed by the National Institute of Standards and Technology (NIST), Federal Information Processing Standards are used by United States government agencies and government contractors in non-military computer systems. The FIPS 140 series are U.S. government computer security standards that define requirements for cryptographic modules, including both hardware and software components, for use by departments and agencies of the United States federal government. The requirements cover not only the cryptographic modules themselves but also their documentation. As of December 2006, the current version of the standard is FIPS 140-2.

A hardware security module (HSM) is a secure physical device designed to generate, store, and protect high-value digital cryptographic keys. It is a secure crypto-processor that often comes in the form of a plug-in card (or other hardware) with tamper protection built in. HSMs also provide the infrastructure for finance, government, healthcare, and other industries to conform to industry-specific regulatory standards.
FIPS 140 enforces stronger cryptographic algorithms, provides good physical security, and requires power-on self-tests to ensure a device is still in compliance before operating. FIPS 140-2 evaluation is required to sell products implementing cryptography to the federal government, and the financial industry is increasingly specifying FIPS 140-2 as a procurement requirement. The BIG-IP system includes a FIPS cryptographic/SSL accelerator—an HSM option specifically designed for processing SSL traffic in environments that require FIPS 140-1 Level 2–compliant solutions. Many BIG-IP devices are FIPS 140-2 Level 2–compliant. This security rating indicates that once sensitive data is imported into the HSM, it incorporates cryptographic techniques to ensure the data is not extractable in plain-text form, and it provides tamper-evident coatings or seals to deter physical tampering. The BIG-IP system includes the option to install a FIPS HSM (on BIG-IP 6900, 8900, 11000, and 11050 devices), and BIG-IP devices can be customized to include an integrated FIPS 140-2 Level 2–certified SSL accelerator. Other solutions require a separate system or a FIPS-certified card for each web server, but the BIG-IP system’s unique key management framework enables a highly scalable secure infrastructure that can handle higher traffic levels and to which organizations can easily add new services. Additionally, the FIPS cryptographic/SSL accelerator uses smart cards to authenticate administrators, grant access rights, and share administrative responsibilities, providing a flexible and secure means for enforcing key management security.

Secure Vault

It is generally a good idea to protect SSL private keys with passphrases. With a passphrase, private key files are stored encrypted on non-volatile storage. If an attacker obtains an encrypted private key file, it will be useless without the passphrase.
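To make that concrete, here is a minimal toy sketch of passphrase-protected key storage, using only the Python standard library. This is not Secure Vault's actual implementation (which uses AES); the cipher here is a deliberately simple hash-based construction for illustration only. The point it demonstrates is the one above: without the passphrase, the stored blob is useless, and a wrong passphrase is detected rather than silently yielding garbage.

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, length: int) -> bytes:
    # Toy keystream: hash the key with a counter. Illustrative only,
    # not a vetted cipher -- real systems use AES.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_key_file(private_key_pem: bytes, passphrase: str) -> bytes:
    # Derive an encryption key from the passphrase with a random salt.
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    ciphertext = bytes(a ^ b for a, b in
                       zip(private_key_pem, _keystream(key, len(private_key_pem))))
    # Authentication tag lets us detect a wrong passphrase or tampering.
    tag = hmac.new(key, salt + ciphertext, hashlib.sha256).digest()
    return salt + tag + ciphertext

def decrypt_key_file(blob: bytes, passphrase: str) -> bytes:
    salt, tag, ciphertext = blob[:16], blob[16:48], blob[48:]
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    if not hmac.compare_digest(tag, hmac.new(key, salt + ciphertext,
                                             hashlib.sha256).digest()):
        raise ValueError("wrong passphrase or tampered file")
    return bytes(a ^ b for a, b in
                 zip(ciphertext, _keystream(key, len(ciphertext))))
```

An attacker who steals the encrypted blob has only the salt, tag, and ciphertext; recovering the key material requires brute-forcing the passphrase through the slow PBKDF2 derivation.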
In PKI (public key infrastructure), the public key enables a client to validate the integrity of something signed with the private key, and hashing enables the client to validate that the content was not tampered with. Since the private key of the public/private key pair could be used to impersonate a valid signer, it is critical to keep those keys secure. Secure Vault, a super-secure SSL-encrypted storage system introduced in BIG-IP version 9.4.5, allows passphrases to be stored in encrypted form on the file system. In BIG-IP version 11, companies also have the option of securing their cryptographic keys in hardware, such as a FIPS card, rather than encrypted on the BIG-IP hard drive. Secure Vault can also encrypt certificate passwords for enhanced certificate and key protection in environments where FIPS 140-2 hardware support is not required but additional physical and role-based protection is preferred. In the absence of hardware support like FIPS/SEEPROM (Serial (PC) Electrically Erasable Programmable Read-Only Memory), Secure Vault is implemented in software. Even if an attacker removed the hard disk from the system and painstakingly searched it, it would be nearly impossible to recover the contents due to Secure Vault’s AES encryption.

Each BIG-IP device comes with a unit key and a master key. Upon first boot, the BIG-IP system automatically creates a master key for the purpose of encrypting, and therefore protecting, key passphrases. The master key encrypts SSL private keys, decrypts SSL key files, and synchronizes certificates between BIG-IP devices. Further increasing security, the master key is itself encrypted by the unit key, an AES 256 symmetric key. When stored on the system, the master key is always encrypted with the unit key and never stored in plain text. Master keys follow the configuration in an HA (high-availability) deployment, so all units share the same master key but still have their own unit keys.
The master key is synchronized using the secure channel established by the CMI infrastructure as of BIG-IP v11. Passphrases encrypted with the master key cannot be used on systems other than the units for which the master key was generated. Secure Vault support has also been extended to vCMP guests. vCMP (Virtual Clustered Multiprocessing) enables multiple instances of BIG-IP software to run on one device. Each guest gets its own unit key and master key. The guest unit key is generated and stored at the host, thus enforcing the hardware support, and it is protected by the host master key, which is in turn protected by the host unit key in hardware.

Finally, F5 provides Application Delivery Network security to protect the most valuable application assets. To provide organizations with reliable and secure access to corporate applications, F5 must carry the secure application paradigm all the way down to the core elements of the BIG-IP system. It’s not enough to secure application transport; the transporting appliance must also provide a secure environment. F5 ensures BIG-IP device security through various features and a rigorous development process—a comprehensive process designed to keep customers’ applications and data secure. The BIG-IP system can be run in Appliance mode to lock down configuration within the code itself, limiting access to certain shell functions; Secure Vault secures precious keys from tampering; and optional FIPS cards ensure organizations can meet or exceed particular security requirements. An ADN is only as secure as its weakest link. F5 ensures that BIG-IP Application Delivery Controllers are an extremely secure link in the ADN chain.
ps

Resources:
F5 Security Solutions
Security is our Job (Video)
F5 BIG-IP Platform Security (Whitepaper)
Security, not HSMs, in Droves
Sometimes It Is About the Hardware
Investing in security versus facing the consequences | Bloor Research White Paper
Securing Your Enterprise Applications with the BIG-IP (Whitepaper)
TMOS Secure Development and Implementation (Whitepaper)
BIG-IP Hardware Updates – SlideShare Presentation
Audio White Paper - Application Delivery Hardware A Critical Component
F5 Introduces High-Performance Platforms to Help Organizations Optimize Application Delivery and Reduce Costs

Technorati Tags: F5, PCI DSS, virtualization, cloud computing, Pete Silva, security, coding, iApp, compliance, FIPS, internet, TMOS, big-ip, vCMP

BYOD Policies – More than an IT Issue Part 5: Trust Model
#BYOD, or Bring Your Own Device, has moved from trend to permanent fixture in today's corporate IT infrastructure. It is not strictly an IT issue, however. Many groups within an organization need to be involved as they grapple with the risk of mixing personal devices with sensitive information. In my opinion, BYOD follows the classic Freedom vs. Control dilemma: the freedom of users to choose and use their desired device versus an organization's responsibility to protect and control access to sensitive resources. While not having all the answers, this mini-series tries to ask many of the questions that any organization needs to answer before embarking on a BYOD journey. Enterprises should plan for, rather than inherit, BYOD. BYOD policies must span the entire organization but serve two purposes: IT and the employees. The policy must serve IT to secure the corporate data and minimize the cost of implementation and enforcement. At the same time, the policy must serve the employees to preserve the native user experience, keep pace with innovation, and respect the user's privacy. A sustainable policy should include a clear BYOD plan for employees, including standards on the acceptable device types and mobile operating systems, along with a support policy showing how the device is managed and operated. Some key policy issue areas include: Liability, Device Choice, Economics, User Experience & Privacy, and a Trust Model. Today we look at the Trust Model.

Trust Model

Organizations will either have a BYOD policy or forbid the use altogether. If they have neither, one of two things happens: personal devices are blocked and the organization loses productivity, or personal devices access the network (with or without the organization's consent) and nothing is done about security or compliance. Ensure employees understand what can and cannot be accessed with personal devices, along with the risks (to both users and IT) associated with such access.
While having a written policy is great, it still must be enforced. Define what is ‘acceptable use.’ According to a recent Ponemon Institute and Websense survey, while 45% do have a corporate use policy, less than half of those actually enforce it. And according to a recent SANS Mobility BYOD Security Survey, less than 20% are using endpoint security tools, and of those, more are using agent-based tools rather than agentless. According to the survey, 17% say they have stand-alone BYOD security and usage policies; 24% say they have BYOD policies added to their existing policies; 26% say they "sort of" have policies; 3% don't know; and 31% say they do not have any BYOD policies. Over 50% say employee education is one way they secure the devices, and 73% include user education with other security policies.

Organizations should ensure procedures are in place (and understood) for cases such as an employee leaving the company; what happens when a device is lost or stolen (including the ramifications of remote-wiping a personal device); what types and strength of passwords are required; record retention and destruction; the allowed types of devices; and what types of encryption are used. Organizations need to balance the acceptance of consumer-focused smartphones and tablets with control of those devices to protect their networks. Organizations need a complete inventory of employees' personal devices, at least the ones requesting access. Organizations need the ability to enforce mobile policies and secure the devices. And organizations need to balance the company's security with the employee's privacy, for example regarding off-hours browsing activity on a personal device.

Whether an organization is prepared or not, BYOD is here. It can potentially be a significant cost savings and productivity boost for organizations, but it is not without risk. To reduce the business risk, enterprises need a solid BYOD policy that encompasses the entire organization. And it must be enforced.
Companies need to understand:
• The trust level of a mobile device is dynamic
• Identify and assess the risk of personal devices
• Assess the value of apps and data
• Define remediation options
  • Notifications
  • Access control
  • Quarantine
  • Selective wipe
• Set a tiered policy

Part of me feels we’ve been through all this before with personal computer access to the corporate network during the early days of SSL-VPN, and many of the same concepts, controls, and methods are still in place today supporting all types of personal devices. Obviously, there are a bunch of new risks, threats, and challenges with mobile devices, but some of the same concepts apply: enforce policy and manage/mitigate risk. As organizations move to BYOD, F5 has the Unified Secure Access Solutions to help.

ps

Related:
BYOD Policies – More than an IT Issue Part 1: Liability
BYOD Policies – More than an IT Issue Part 2: Device Choice
BYOD Policies – More than an IT Issue Part 3: Economics
BYOD Policies – More than an IT Issue Part 4: User Experience and Privacy
BYOD–The Hottest Trend or Just the Hottest Term
FBI warns users of mobile malware
Will BYOL Cripple BYOD?
Freedom vs. Control
What’s in Your Smartphone?
Worldwide smartphone user base hits 1 billion
SmartTV, Smartphones and Fill-in-the-Blank Employees
Evolving (or not) with Our Devices
The New Wallet: Is it Dumb to Carry a Smartphone?
Bait Phone
BIG-IP Edge Client 2.0.2 for Android
BIG-IP Edge Client v1.0.4 for iOS
New Security Threat at Work: Bring-Your-Own-Network
Legal and Technical BYOD Pitfalls Highlighted at RSA

From Car Jacking to Car Hacking
With the promise of self-driving cars just around the corner of the next decade, and with researchers already able to remotely apply the brakes and listen to conversations, a new security threat vector is emerging. Computers in cars have been around for a while, and today, with as many as 50 microprocessors, they control engine emissions, fuel injectors, spark plugs, anti-lock brakes, cruise control, idle speed, air bags and, more recently, navigation systems, satellite radio, climate control, keyless entry, and much more.

In 2010, a former employee of Texas Auto Center hacked into the dealer’s computer system and remotely activated the vehicle-immobilization system, which engaged the horns and disabled the ignition systems of around 100 cars. In many cases, the only way to stop the horns (going off in the middle of the night) was to disconnect the battery. Initially, the organization dismissed it as a mechanical failure, but when they started getting calls from customers, they knew something was wrong. This particular web-based system was meant to get the attention of those who were late on payments, but obviously it was used for something completely different. After a quick investigation, police were able to arrest the man and charge him with unauthorized use of a computer system.

In 2011, University of California, San Diego researchers published a report (pdf) identifying numerous attack vectors like CD radios, Bluetooth (we already knew that), and cellular radio as potential targets. In addition, there are concerns that, in theory, a malicious individual could disable a vehicle or re-route GPS signals, putting transportation (fleet, delivery, rental, law enforcement) employees and customers at risk. Many of these electronic control units (ECUs) can connect to each other and to the internet, and so they are vulnerable to the same internet dangers like malware, trojans, and even DoS attacks.
Those with physical access to your vehicle, like mechanics, valets, or others, can access the On-Board Diagnostics port (OBD-II), usually located right under the dash: plug in and upload your favorite car virus. Tests have shown that if you can infect the diagnostic tools at a dealership, then when cars are connected to the system, they are infected too. Once infected, the car would contact the researchers’ servers asking for more instructions. At that point, they could activate the brakes, disable the car, and even listen to conversations in the car. Imagine driving down a highway, hearing a voice over the speakers, and then someone remotely explodes your airbags. They’ve also been able to insert a CD with a malicious file to compromise a radio vulnerability.

Most experts agree that right now it is not something to worry about excessively, since many of the previously compromised systems are after-market equipment, attacks take a lot of time and money, and car manufacturers are already looking into protection mechanisms. But as I’m thinking about current trends like BYOD, it is not far-fetched to imagine a time when your car is VPN’d to the corporate network and you are able to access sensitive info right from the navigation/entertainment/climate control/etc. screen. Many new cars today have USB ports that recognize your mobile device as an AUX input and allow you to talk, play music, and do other mobile activities right through the car’s system. I’m sure within the next 5 years (or sooner), someone will distribute a malicious mobile app that will infect the vehicle as soon as you connect the USB. Suddenly, buying that ‘84 rust bucket of a Corvette that my neighbor is selling doesn’t seem like that bad of an idea, even with all the C4 issues.

ps

The Cloud Integration Stack
#cloud Integrating environments occurs in layers…

We use the term “hybrid cloud” to indicate a joining together of two disparate environments. We often simplify the “cloud” to encompass public IaaS, PaaS, SaaS, and private cloud. But even though the adoption of such hybrid architectures may be a foregone conclusion, the devil is, as they say, in the details, and how that adoption will be executed is not so easily concluded. At its core, cloud is about integrating infrastructure. We integrate infrastructure from the application and networking domains to enable elasticity and scalability. We integrate infrastructure from the security and delivery realms to ensure a comprehensive, secure delivery chain that promises performance and reliability. We integrate infrastructure to manage these disparate worlds in a unified way, to reduce the burden on operations imposed by the necessarily disconnected systems created by integrating environments. How these integrations are realized can be broken down into a fairly simple stack comprised of the network, resources, elasticity, and control.

The NETWORK INTEGRATION LAYER

At the network layer, the goal is to normalize connectivity and provide optimization of network traffic between two disconnected environments. This is generally applicable only to the integration of IaaS environments, where connectivity today is achieved primarily through the use of secured network tunnels. This enables secure communications over which data and applications may be transferred between environments (and why optimization for performance’s sake may be desired) and over which management can occur. The most basic network integration enabling a hybrid cloud environment is often referred to as bridging, after the common networking term. Bridging does not necessarily imply layer 3 normalization, however, and some sort of overlay networking technology will be required to achieve that normalization (and is often cited as a use of emerging technology like SDN).
Look for solutions in this layer to be included in cloud “bridge” or “bridging” offerings.

The RESOURCE INTEGRATION LAYER

At the resource layer, integration occurs at the virtualization layer. Resources such as compute and storage are integrated with data-center-residing systems in such a way as to be included in provisioning processes. This integration enables visibility into the health and performance of said resources, providing the means to collect actionable performance and status metrics for everything from capacity planning to redistribution of clients to the provisioning of performance-related services such as acceleration and optimization. This layer of integration is also heavily invested in the notion of maintaining operational consistency. One way this is achieved is by integrating remote resources into existing delivery network architectures that allow the enforcement of policy to ensure compliance with operational and business requirements. Another means of achieving operational consistency through resource integration is to integrate remotely deployed infrastructure solutions providing application delivery services. Such resources can be integrated with data-center-deployed management systems in such a way as to enforce operational consistency through synchronization of policies across all managed environments, cloud or otherwise. Look for solutions in this layer to be included in cloud “gateway” offerings.

The ELASTICITY INTEGRATION LAYER

Elasticity integration is closely related to resource integration but not wholly dependent upon it. Elasticity is the notion of expanding or contracting the capacity of resources (whether storage, network, or compute) to meet demand. That elasticity requires visibility into demand (not as easy as it sounds, by the way) as well as integration with the broader systems that provision and de-provision resources.
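As a rough illustration of such a demand-driven trigger, here is a minimal Python sketch of the decision step an elasticity layer might make before calling a control-plane API. The thresholds, names, and the decision function itself are invented for illustration; any real product embeds far more context (application health, policy, network and storage capacity) than this.

```python
from dataclasses import dataclass

@dataclass
class ElasticityPolicy:
    scale_up_at: float     # provision more capacity at or above this utilization
    scale_down_at: float   # release capacity at or below this utilization
    min_instances: int     # floor: never de-provision below this
    max_instances: int     # ceiling: never provision above this

def elasticity_decision(instances: int, utilization: float,
                        policy: ElasticityPolicy) -> str:
    """Return the action a trigger would request from the control-plane API."""
    if utilization >= policy.scale_up_at and instances < policy.max_instances:
        return "provision"
    if utilization <= policy.scale_down_at and instances > policy.min_instances:
        return "de-provision"
    return "hold"
```

Note that even this toy version must honor floors and ceilings rather than blindly launching or stopping VMs, which hints at the broader point the article makes next: a bare "launch a VM" response is not elasticity.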
Consider a hybrid cloud in which there is no network or resource integration, but rather systems are in place to aggregate demand metrics from both cloud- and data-center-deployed applications. When some defined threshold is met, a trigger occurs that instructs the system to interact with the appropriate control-plane API to provision or de-provision resources. Elasticity requires not only the elasticity of compute capacity; network or storage capacity may also need to be adjusted. This is the primary reason why simple “launch a VM” or “stop a VM” responses to changes in demand are wholly inadequate to achieve true elasticity: such simple responses do not take into consideration the ecosystem that is cloud, regardless of its confinement to a single public provider or its spread across multiple public/private locations. True elasticity requires integration of the broader application delivery ecosystem to ensure consistent performance and security across all related applications. Look for solutions in this layer to be included in cloud “gateway” offerings.

The CONTROL INTEGRATION LAYER

Finally, the control integration layer is particularly useful when attempting to integrate SaaS with private cloud or traditional data center models. This is primarily because integration at other layers is virtually non-existent (this is also true of PaaS environments, which are often highly self-contained and only truly enable integration and control over the application layer). The control layer is focused on integrating processes, such as access and authentication, for purposes of maintaining control over security and delivery policies. This often involves some system under the organization’s control (i.e., in the data center) brokering specific functions as part of a larger process. Currently the most common control integration solution is the brokering of access to cloud-hosted resources such as SaaS.
The initial authentication and authorization steps of a broader log-in process occur in the data center, with the enterprise-controlled systems then providing assurance in the form of tokens or assertions (SAML, specifically crafted encrypted tokens, one-time passwords, etc.) to the resource that the user is authorized to access the system. Control integration layers are also used to manage disconnected instances of services across environments for purposes of operational consistency. This control enables the replication and synchronization of policies across environments to ensure security policy enforcement as well as consistent performance. Look for solutions in this layer to be included in cloud “broker” offerings.

Eventually, the entire integration stack will be leveraged to manage hybrid clouds with confidence, eliminating many of the obstacles still cited by even excited prospective customers as reasons they are not fully invested in cloud computing.

F5 Friday: Avoiding the Operational Debt of Cloud
Cloud Security: It’s All About (Extreme Elastic) Control
Hybrid Architectures Do Not Require Private Cloud
Identity Gone Wild! Cloud Edition
Cloud Bursting: Gateway Drug for Hybrid Cloud
The Conspecific Hybrid Cloud

In 5 Minutes or Less - Enterprise Manager v3.0
In my 21st In 5 video, I show you some of the new features available in Enterprise Manager v3.0, in 5 Minutes or Less. We cover the Centralized Analytics Module, iHealth integration, and multi-device configuration comparison. As a follow-up to the release of BIG-IP v11.2, Enterprise Manager v3.0 is now available. In addition to support for v11.2, this release provides some key new capabilities that will help monitor and understand application performance across all BIG-IP devices. Simplify and automate the management of your F5 ADN infrastructure.

In 5 Minutes or Less - Enterprise Manager v3.0 (Video)

ps

Resources:
F5 Enterprise Manager Overview
Application Delivery Network Platform Management
In 5 Minutes or Less Series (21 videos – over 2 hours of In 5 Fun)
F5 Youtube Channel

What’s in Your Smartphone?
Typical smartphone owners have an average of 41 apps per device, 9 more than they had last year, according to the recent Nielsen report, State of the Appnation – A Year of Change and Growth in U.S. Smartphones. Also, last year less than 40% of mobile subscribers in the U.S. had smartphones; this year it’s at 50% and growing. Android and iOS users fuel the smartphone app drive, with 88% downloading an app within the last month. Nielsen also found that as people download more apps, they are spending more time with them (10% more) rather than using their mobile web browsers for such activities. The top five apps are Facebook, YouTube, Android Market, Google Search, and Gmail, no change from last year. More and more of our info is being saved on and collected by these smartphones, and privacy is a big worry. Last year 70%, and this year 73%, expressed concern over personal data collection, and 55% were cautious about sharing location info via smartphone apps. These concerns will only grow as more organizations adopt BYOD policies. While users are concerned about their security, according to Gartner, IT shops won’t be able to provide the security necessary to protect company data. With so many entry points, data leakage outside the enterprise is a real risk. Gartner advises that IT shops managing mobile devices consider some mix of tiered support: platform, appliance, and concierge. With platform support, IT offers full PC-like support for a device; the device is chosen by IT and will typically be used in vertical applications. With appliance-level support, IT supports a narrow set of applications on a mobile device, including server-based and web-based application support, on a wider set of pre-approved devices; local applications are not supported. With concierge-level support, IT provides hands-on support, mainly to knowledge workers, for non-supported devices or non-supported apps on a supported device.
The costs for support, which can be huge, are charged back to the users under this approach.

ps

References:
State of the Appnation – A Year of Change and Growth in U.S. Smartphones
Nielsen: 1 in 2 own a smartphone, average 41 apps
Freedom vs. Control
BYOD–The Hottest Trend or Just the Hottest Term
Hey You, Get Off-ah My Cloud!
Evolving (or not) with Our Devices
The New Wallet: Is it Dumb to Carry a Smartphone?
BYOD Is Driving IT ‘Crazy,’ Gartner Says
Consumerization trend driving IT shops 'crazy,' Gartner analyst says

IPExpo London Presentations
A few months back I attended and spoke at IPExpo 2011 at Earl’s Court Two in London. I gave three presentations, which were recorded, and two of them are available online from the IPExpo website. I haven’t figured out a way to download or embed the videos, but did want to share the links. The slides for each are also available. Sign-up (free) may be required to view the content, but it’s pretty good, if I do say so myself.

A Cloud To Call Your Own – I was late for this one due to some time confusion, but I ran in, got mic’d, and pulled it all together. I run through various areas of focus, concern, and challenge when deploying applications in the cloud, many of them no different than for a typical application in a typical data center. The Encryption Dance gets its first international performance, and the UK crowd wasn’t quite sure what to do. It is the home of Monty Python, isn’t it?

Catching up to the Cloud: Roadmap to the Dynamic Services Model – This was fun since it was later in the afternoon and there were only a few folks in the audience. I talk about the need to enable enterprises to add, remove, grow, and shrink services on-demand, regardless of location.

ps

Related:
F5 EMEA
London IPEXPO 2011
London IPEXPO 2011 - The Wrap Up
F5 EMEA Video
F5 Youtube Channel
F5 UK Web Site

Technorati Tags: F5, ipexpo, integration, Pete Silva, security, business, emea, technology, trade show, big-ip, video, education

What is a Strategic Point of Control Anyway?
From mammoth hunting to military maneuvers to the datacenter, the key to success is control. Recalling your elementary school lessons, you’ll probably remember that mammoths were large and dangerous creatures and, like most animals, quite deadly to primitive man. Yet man found a way to hunt them effectively and, we assume, with more than a small degree of success, as we are still here and, well, the mammoths aren’t. (Marx Cavemen – photo and artwork: Fred R Hinojosa.) The theory of how man successfully hunted ginormous creatures like the mammoth goes something like this: a group of hunters would single out a mammoth and herd it toward a point at which the hunters would have an advantage – a narrow mountain pass, a clearing enclosed by large rock, etc. The qualifying criterion for the place in which the hunters would finally confront their next meal was that it afforded the hunters a strategic point of control over the mammoth’s movement. The mammoth could not move away without either (a) climbing sheer rock walls or (b) being attacked by the hunters. By forcing mammoths into a confined space, the hunters controlled the environment and the mammoth’s ability to flee, and thus a successful hunt was had by all. At least by all the hunters; the mammoths probably didn’t find it successful at all. Whether you consider mammoth hunting or military maneuvers or strategy-based games (chess, checkers), one thing remains the same: a winning strategy almost always involves forcing the opposition into a situation over which you have control. That might be a mountain pass, or a densely wooded forest, or a bridge. The key is to force the entire complement of the opposition through an easily and tightly controlled path. Once they’re on that path – and can’t turn back – you can execute your plan of attack.
These easily and highly constrained paths are “strategic points of control.” They are strategic because they are the points at which you are empowered to perform some action with a high degree of assurance of success. In data center architecture there are several “strategic points of control” at which security, optimization, and acceleration policies can be applied to inbound and outbound data. These strategic points of control are important to recognize, as they are the most efficient – and effective – points at which control can be exerted over the use of data center resources.

DATA CENTER STRATEGIC POINTS of CONTROL

In every data center architecture there are aggregation points. These are points (one or more components) through which all traffic is forced to flow, for one reason or another. For example, the most obvious strategic point of control within a data center is at its perimeter – the router and firewalls that control inbound access to resources and in some cases control outbound access as well. All data flows through this strategic point of control, and because it’s at the perimeter of the data center it makes sense to implement broad resource access policies at this point. Similarly, strategic points of control occur internal to the data center at several “tiers” within the architecture. Several of these tiers are:

Storage virtualization provides a unified view of storage resources by virtualizing storage solutions (NAS, SAN, etc.). Because the storage virtualization tier manages all access to the resources it is managing, it is a strategic point of control at which optimization and security policies can be easily applied.

Application delivery / load balancing virtualizes application instances and ensures availability and scalability of an application. Because it is virtualizing the application, it becomes a point of aggregation through which all requests and responses for an application must flow. It is a strategic point of control for application security, optimization, and acceleration.

Network virtualization is emerging internal to the data center architecture as a means to provide inter-virtual machine connectivity more efficiently than can be achieved through traditional network connectivity. Virtual switches often reside on a server on which multiple applications have been deployed within virtual machines. Traditionally it might be necessary for communication between those applications to physically exit and re-enter the server’s network card. By virtualizing the network at this tier, the physical traversal path is eliminated (and the associated latency along with it) and more efficient inter-VM communication can be achieved. This is a strategic point of control at which access to applications at the network layer should be applied, especially in a public cloud environment where inter-organizational residency on the same physical machine is highly likely.

OLD SKOOL VIRTUALIZATION EVOLVES

You might have begun noticing a central theme to these strategic points of control: they are all points at which some kind of virtualization – and thus aggregation – occurs naturally in a data center architecture. This is the original (first) kind of virtualization: the presentation of many resources as a single resource, a la load balancing and other proxy-based solutions. When a one —> many (1:M) virtualization solution is employed, it naturally becomes a strategic point of control by virtue of the fact that all “X” traffic must flow through that solution, and thus policies regarding access, security, logging, etc. can be applied in a single, centrally managed location. The key here is “strategic” and “control”. The former relates to the ability to apply the latter over data at a single point in the data path. This kind of 1:M virtualization has been a part of data center architectures since the mid 1990s.
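The aggregation idea behind a 1:M virtualization point can be sketched in a few lines of code. This is a minimal, illustrative model only – the class and backend names are hypothetical, not any real product’s API – but it shows why a point that all traffic must traverse is such a natural place to apply policy:

```python
from itertools import cycle

class VirtualServer:
    """A minimal 1:M aggregation point: one virtual endpoint fronting many
    back-end instances. Because every request passes through here, policies
    (access control, logging, etc.) can be applied in a single place."""

    def __init__(self, backends, policies=None):
        self.backends = cycle(backends)   # round-robin over the pool
        self.policies = policies or []    # checks applied to all traffic

    def handle(self, request):
        for policy in self.policies:      # the strategic point of control:
            policy(request)               # every request is inspected once...
        return next(self.backends)        # ...then forwarded to one of many

# One virtual endpoint, three application instances, one shared policy.
vs = VirtualServer(["app-1", "app-2", "app-3"],
                   policies=[lambda r: r.setdefault("logged", True)])
req = {"path": "/"}
print(vs.handle(req))   # -> app-1
print(vs.handle(req))   # -> app-2
```

The design point, not the round-robin algorithm, is what matters here: adding a new policy to the list enforces it on every request without touching any of the back-end applications.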
It’s evolved to provide ever broader and deeper control over the data that must traverse these points of control by nature of network design. These points have become, over time, strategic in terms of the ability to consistently apply policies to data in as operationally efficient a manner as possible. Thus have these virtualization layers become “strategic points of control”. And you thought the term was just another square on the buzz-word bingo card, didn’t you?

Evolving (or not) with Our Devices
When I talk on the phone, I’ve always used my left ear to listen. Listening in the right ear just doesn’t sound right. This might be due to being right handed and doing the shoulder hold to take notes when needed. As corded phones turned to cordless and mobile, along with hands-free ear-plugs, that plug went into the left ear whenever I was on the phone. Recently, I’ve been listening to some music while walking the dog and have run into an issue. The stereo ear plugs do not fit, sit or stay in my right ear. I have no problem with the nub in my left ear but need to keep re-inserting, adjusting and holding the plug in my right ear. I’m sure I was born with the same size opening for both ears years ago, and my only explanation is that my left ear has evolved over the years to accommodate an ear plug. Even measuring each indicates that the left is open ever so slightly more. I seem to be fine, or at least better, with the isolation earphone style, but it’s the ear-bud type that won’t fit in my right ear. I realize there are tons of earplug types for various needs and I could just get one that works for me, but it got me thinking: if my ears, or specifically my left ear, have morphed due to technology, what other human physical characteristics might evolve over time? As computers became commonplace and more people started using keyboards, we started to see a huge increase in carpal tunnel syndrome. Sure, other repetitive tasks of the hand and wrist can cause carpal tunnel, but typing on a computer keyboard is probably the most common cause. Posture-related injuries like back, neck, shoulder and arm pain, along with headaches, are common computer-related injuries. Focusing your eyes at the same distance over extended periods of time can cause fatigue and eye strain. It might not do permanent damage to your eyesight, but you could experience blurred vision, headaches and a temporary inability to focus on faraway objects.
Things like proper design of your workstation and taking breaks that encourage blood flow can help reduce computer-related injuries. Of course, every profession has its specific repetitive tasks which can lead to some sort of injury and, depending on your work, the body adjusts and has its own physical memory to accomplish the task. Riding a bike. Often smokers who are trying to quit can tolerate the nicotine reduction, but it’s the repetitive physical act of bringing the dart up that causes grief. That’s why many turn to straws or toothpicks or some other item to break the habit. We’ve gotten used to seeing people walking around with little Bluetooth ear apparatus attached to their heads and think nothing of it. They’ll leave it in all day even if they are not talking on the phone. Many probably feel ‘naked’ if they forget it one day, almost like a watch or ring that we wear daily. I mentioned a couple years ago in IPv6 and the End of the World that with IPv6, each one of us, worldwide, would be able to have our own personal IP address that would follow us anywhere. Hold on, I’m getting a call through my earring but first must authenticate with the chip in my earlobe. That same chip, after checking my print and pulse, would open the garage, unlock the doors, disable the home alarm, turn on the heat and start the microwave for a nice hot meal as soon as I enter. Who would have thought that Carol Burnett's ear tug would come back? Now that many of us have mobile devices with touch-screens, we’re tapping away with index fingers and thumbs. I know my thumb joints can get sore when tapping too much. Will our thumbs grow larger or stronger over time to accommodate the new repetitive movement, or go smaller and pointy to make sure we’re able to click the correct virtual keypad on the device? We’ve got video eyewear, so it’s only a matter of time before our email and mobile screens could simply appear while wearing shades, or as a heads-up display on the car windshield.
With special gloves or an implant under our hand, we could control the device through movement or by tapping the steering wheel. Ahhh, anyway, I’m sure things will change again in the next decade and we’ll have some other things happening within our evolutionary process, but it’ll be interesting to see if we can maintain control over technology or if technology will change us. In the meantime, I’ll be ordering some new earphones. ps

Technorati Tags: F5, humans, people, Pete Silva, security, behavior, education, technology, mobile, earphone, ipv6, computer injury, iPhone, web

355 Shopping Days Left
After just being bombarded with the endless options of gifts for your loved ones, a simple reminder that the next blitz is just around the corner. And you are a target. 2011 started relatively tame for breaches, but when hacktivism and a few other entities took hold, it became a massive year for lost data. From retail to healthcare to government to schools to financial institutions – no one was immune. Household names like Sony, RSA, Lockheed and Sega were all hit. Privacy Rights Clearinghouse reports that 535 security breaches in 2011 exposed 30 million sensitive records to identity thieves and other rip-off artists. Since 2005, 543 million records have been breached – almost double the US population and about 7% of the entire world’s population. Looking at the entire Privacy Rights Clearinghouse list is staggering, both in numbers and names. It might not get better any time soon. Since mobile devices have become fixed appendages and continue to dominate many areas of our lives (phone, entertainment, email, GPS, banking, work, etc.), the crooks will look for more ways to infiltrate that love affair. I suspect that mobile financial (payment/banking) apps will get a lot of attention this year, as will malware-laced apps. Our health information is also at risk. Medical records are being digitized. A 2009 stimulus bill included incentives for doctors and hospitals who embrace electronic health records. The CDC saw a 12% increase from last year – now 57% of office-based physicians use electronic health records. The inadvertent result is that the number of reported breaches is up 32% this year, according to the Ponemon Institute. That cost the health care industry somewhere in the neighborhood of $6.5 billion. Now you might think that you have less control over a health provider’s systems than your own mobile device. While mostly true, close to half of those cases involved a lost or stolen phone or personal computer – some sort of human element was involved.
It is really up to each of us to practice safe computing and, if you’re knowledgeable, to share insight with those who are not tech savvy. Yes, you can be the most cautious internet citizen and still be a victim due to someone else’s mistake, oversight or vulnerability. Even so, it is still important to be aware and do what you can. For centuries we’ve been physically protecting our property, neighbors, towns, identity and anything else important to us. At times, the thieves, enemies and otherwise unwanted still got in and created havoc. Advances and admissions, plus the value of whatever needed protection, kept the battle going. It continues today in the digital universe. ps

References
543 Million Records Breached Since 2005
Security Breaches 2005 – Present
Privacy Rights Clearinghouse: 30 million sensitive records breached in 2011
Digital Data on Patients Raises Risk of Breaches
HIPAA & Breach Enforcement Statistics for December 2011
Breaches Affecting 500 or More Individuals (Department of Health and Human Services)
Second Annual Patient Privacy Study Released
“With That Revealing Shirt? He Was Just Begging to be Hacked.”
Blaming The Victim in the STRATFOR Hack
The New Wallet: Is it Dumb to Carry a Smartphone?
The Top 10, Top Predictions for 2012
Our Identity Crisis
Security Never Takes a Vacation