WARNING: Security Device Enclosed
If you aren’t using all the security tools at your disposal, you’re doing it wrong. How many times have you seen an employee wave a customer on by when the “security device enclosed” in some item – be it DVD, CD, or clothing – sets off the alarm at the doors? Just a few weeks ago I heard one young lady explain the alarm away with “it must have been the CD I bought at the last place I was at…” This apparently satisfied the young man at the doors, who nodded and turned back to whatever he’d been doing. All the data the security guard needed was there; he had all the context necessary to analyze the situation and make a determination. But he ignored it all. He failed to leverage all the tools at his disposal and potentially allowed dollars to walk out the door. In doing so he also set a precedent and unintentionally sent a message to anyone who really wanted to commit a theft: I ignore warning signs, go ahead.

WILS: Network Load Balancing versus Application Load Balancing
Are you load balancing servers or applications? Network traffic or application requests? If your strategy for application availability is network-based, you might need a change in direction (up the stack). Can you see the application now?

Network load balancing is the distribution of traffic based on network variables, such as IP address and destination port. It operates at layer 4 (TCP) and below and is not designed to take into consideration anything at the application layer, such as content type, cookie data, custom headers, user location, or application behavior. It is context-less, caring only about the network-layer information contained within the packets it is directing this way and that.

Application load balancing is the distribution of requests based on multiple variables, from the network layer to the application layer. It is context-aware and can direct requests based on any single variable as easily as it can a combination of variables. Applications are load balanced based on their particular behavior and not solely on server (operating system or virtualization layer) information.

The difference between the two is important because network load balancing cannot assure availability of the application. This is because it bases its decisions solely on network- and TCP-layer variables and has no awareness of the application at all. Generally a network load balancer will determine “availability” based on the ability of a server to respond to an ICMP ping or to correctly complete the three-way TCP handshake. An application load balancer goes much deeper, and is capable of determining availability based not only on a successful HTTP GET of a particular page but also on verification that the content is as expected given the input parameters.

This is also important to note when considering the deployment of multiple applications on the same host sharing IP addresses (virtual hosts in old skool speak). A network load balancer will not differentiate between Application A and Application B when checking availability (indeed it cannot, unless the ports are different), but an application load balancer will differentiate between the two applications by examining the application-layer data available to it. This difference means that a network load balancer may end up sending requests to an application that has crashed or is offline, but an application load balancer will never make that same mistake.

WILS: Write It Like Seth. Seth Godin always gets his point across with brevity and wit. WILS is an attempt to be concise about application delivery topics and just get straight to the point. No dilly dallying around.

WILS: InfoSec Needs to Focus on Access not Protection
WILS: Applications Should Be Like Sith Lords
WILS: Cloud Changes How But Not What
WILS: Application Acceleration versus Optimization
WILS: Automation versus Orchestration
Layer 7 Switching + Load Balancing = Layer 7 Load Balancing
Business-Layer Load Balancing
Not all application requests are created equal
Cloud Balancing, Cloud Bursting, and Intercloud
The Infrastructure 2.0 Trifecta
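To make the contrast above concrete, here is a minimal sketch. The pool names and request fields are purely illustrative assumptions, not any vendor's implementation: the layer 4 function can only see network variables, while the layer 7 function can inspect the request itself before choosing a pool.

from itertools import cycle

# Layer 4 view: one pool, selection based only on network-level information.
network_pool = cycle(["10.0.0.11", "10.0.0.12", "10.0.0.13"])

def l4_select(packet):
    # Only IP/port are visible; the choice is blind to what is being requested.
    return next(network_pool)

# Layer 7 view: multiple pools, chosen per request from application-layer data.
pools = {
    "images": cycle(["img-1", "img-2"]),
    "api":    cycle(["api-1", "api-2", "api-3"]),
    "web":    cycle(["web-1", "web-2"]),
}

def l7_select(request):
    # Inspect the URI and headers, pick a pool, then pick a member within it.
    if request["uri"].startswith("/images/"):
        pool = pools["images"]
    elif request["headers"].get("Content-Type") == "application/json":
        pool = pools["api"]
    else:
        pool = pools["web"]
    return next(pool)

print(l4_select({"src": "203.0.113.9", "dst_port": 80}))
print(l7_select({"uri": "/images/logo.png", "headers": {}}))

The layer 4 sketch never even looks at the request, which is exactly why it cannot tell a healthy application from a crashed one sharing the same address.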
F5 and Versafe: Because Mobility Matters

#F5 #security #cloud #mobile #context Context means more visibility into devices, networks, and applications even when they're unmanaged.

Mobility is a significant driver of technology today. Whether it's the mobility of applications between data center and cloud, across web and mobile device platforms, or of users moving from corporate to home to publicly available networks, mobility is a significant factor impacting all aspects of application delivery but in particular, security.

Server virtualization, BYOD, SaaS, and remote work all create new security problems for data center managers. No longer can IT build a security wall around its data center; security must be provided throughout the data and application delivery process. This means that the network must play a key role in securing data center operations as it “touches” and “sees” all traffic coming in and out of the data center.
-- Lee Doyle, GigaOM, "Survey: SDN benefits unclear to enterprise network managers" 8/29/2013

It's a given that corporate data and access to applications need to be protected when delivered to locations outside corporate control. Personal devices, home networks, and cloud storage all introduce the risk of information loss through a variety of attack vectors. But that's not all that poses a risk. Mobility of customers, too, is a source of potential disaster waiting to happen, as control over behavior as well as technology is completely lost. Industries built on consumers and using technology to facilitate business transactions are particularly at risk from consumer mobility and, more importantly, from the attackers that target them. If the risk posed by successful attacks - phishing, pharming and social engineering - isn't enough to give the CISO an ulcer, the cost of supporting sometimes technically challenged consumers will. Customer service and support has become in recent years not only a help line for the myriad web and mobile applications offered by an organization, but a security help desk as well, as consumers confused by e-mail and web attacks make use of such support lines.

F5 and Security

F5 views security as a holistic strategy that must be able to dig into not just the application and corporate network, but into the device and application, as well as the networks over which users access both mobile and web applications. That's where Versafe comes in with its unique combination of client-side intelligent visibility and subscription-based security service. Versafe's technology employs its client-side visibility and logic with expert-driven security operations to ensure real-time detection of a variety of threat vectors common to web and mobile applications alike. Its coverage of browsers, devices, and users is comprehensive. Every platform, every user, and every device can be protected from a vast array of threats, including those not covered by traditional solutions, such as session hijacking. Versafe approaches web fraud by monitoring the integrity of the session data that the application expects to see between itself and the browser. This method isn’t vulnerable to ‘zero-day’ threats: malware variants, new proxy/masking techniques, or fraudulent activity originating from devices, locations, or users who haven’t yet accumulated digital fraud fingerprints.

Continuous Delivery Meets Continuous Security

Versafe's solution can accomplish such comprehensive coverage because it's clientless, relying on injection into web content in real time. That's where F5 comes in.
Using F5 iRules, the appropriate Versafe code can be injected dynamically into web pages to scan for and detect potential application threats, including script injection, trojans, and pharming attacks. Injection in real time through F5 iRules eliminates reliance on scanning and updating heterogeneous endpoints and, of course, on relying on consumers to install and maintain such agents. This allows the delivery process to scale seamlessly along with users and devices and reasserts control over processes and devices not under IT control, essentially securing unsecured devices and lines of communication.

Injection-based delivery also means no impact on application developers or applications, which means it won't reduce application development and deployment velocity. It also enables real-time and up-to-the-minute detection and protection against threats because the injected Versafe code is always communicating with the latest, up-to-date security information maintained by Versafe at its cloud-based Security Operations Center. User protection is always on, no matter where the user might be or on what device, and doesn't require updating or action on the part of the user.

The clientless aspect of Versafe means it has no impact on user experience. Versafe further takes advantage of modern browser technology to execute with no performance impact on the user experience. That's a big deal, because a variety of studies on real behavior indicate that performance hits of even a second on load times can impact revenue and user satisfaction with web applications.

Both the web and mobile offerings from Versafe further ensure transaction integrity by assessing a variety of device-specific and behavioral variables such as device ID, mouse and click patterns, sequencing of and timing between actions, and continuous monitoring of JavaScript functions. These kinds of checks are a sort of automated Turing test: a system able to determine whether an end user is really a human being - or a bot bent on carrying out a malicious activity.

But it's not just about the mobility of customers, it's also about the mobility - and versatility - of modern attackers. To counter a variety of brand, web, and domain abuse, Versafe's cloud-based 24x7x365 Security Operations Center and Malware Analysis Team proactively monitors for organization-specific fraud and attack scheming across all major social and business networks to enable rapid detection and real-time alerting of suspected fraud.

EXPANDING the F5 ECOSYSTEM

The acquisition of Versafe and its innovative security technologies expands the F5 ecosystem by exploiting the programmable nature of its platform. Versafe technology supports and enhances F5's commitment to delivering context-aware application services by further extending our visibility into the user and device domain. Its cloud-based, subscription service complements F5's IP Intelligence Service, which provides a variety of similar service-based data that augments F5 customers' ability to make context-aware decisions based on security and location data. Coupled with existing application security services such as web application and application delivery firewalls, Versafe adds to the existing circle of F5 application security services comprising user, network, device, and application while adding brand and reputation protection to its already robust security service catalog. We're excited to welcome Versafe into the F5 family and with the opportunity to expand our portfolio of delivery services.
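As a rough, generic illustration of the clientless injection idea described above - not F5's iRules syntax and not Versafe's actual code - the sketch below inserts a hypothetical monitoring script tag into HTML responses as they pass through a delivery tier, leaving non-HTML content untouched.

# Generic illustration only; the snippet path and function name are invented.
SNIPPET = b'<script src="/monitor/agent.js" async></script>'

def inject_script(content_type, body):
    # Insert the snippet just before </body>, but only for HTML responses.
    if b"text/html" not in content_type:
        return body  # leave images, JSON, and other content untouched
    return body.replace(b"</body>", SNIPPET + b"</body>", 1)

html = b"<html><body><h1>Account Login</h1></body></html>"
print(inject_script(b"text/html; charset=utf-8", html))

Because the rewrite happens on the delivery tier rather than on the endpoint, nothing has to be installed or updated on the user's device - which is the whole point of the clientless approach.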
More information on Versafe:
Versafe
Versafe | Anti-Fraud Solution (Anti Phishing, Anti Trojan, Anti Pharming)
Versafe Identifies Significant Joomla CMS Vulnerability & Corresponding Spike in Phishing, Malware Attacks
Joomla Exploit Enabling Malware, Phishing Attacks to be Hosted from Genuine Sites
'Eurograbber' online banking scam netted $47 million

Choosing a Load Balancing Algorithm Requires DevOps Fu
Knowing the algorithms is only half the battle; you’ve got to understand a whole lot more to design a scalable architecture.

Citrix’s Craig Ellrod has a series of blog posts on the basic (industry standard) load balancing algorithms. These are great little posts for understanding the basics of load balancing algorithms like round robin, least connections, and least (fastest) response time. Craig’s posts are accurate in their description of the theoretical (designed) behavior of the algorithms. The thing that’s missing from these posts (and maybe Craig will get to this eventually) is context. Not the context I usually talk about, but the context of the application that is being load balanced and the way in which modern load balancing solutions behave, which is not that of a simple load balancer.

Different applications have different usage patterns. Some are connection heavy, some are compute intensive, some return lots of little responses while others process larger incoming requests. Choosing a load balancing algorithm without understanding the behavior of the application and its users will almost always result in an inefficient scaling strategy that leaves you constantly trying to figure out why load is unevenly distributed, why SLAs are not being met, or why some users are having a less than stellar user experience.

One of the most misunderstood aspects of load balancing is that a load balancing algorithm is designed to choose from a pool of resources, and that an application is or can be made up of multiple pools of resources. These pools can be distributed (cloud balancing) or localized, and they may all be active or some may be designated as solely existing for failover purposes. Ultimately this means that the algorithm does not actually choose the pool from which a resource will be chosen – it only chooses a specific resource. The relationship between the choice of a pool and the choice of a single resource in a pool is subtle but important when making architectural decisions – especially those that impact scalability. A pool of resources is (or should be) a set of servers serving similar resources. For example, the separation of image servers from application logic servers will better enable scalability domains in which each resource can be scaled individually, without negatively impacting the entire application.

To make the decision more complex, there are a variety of factors that impact the way in which a load balancing algorithm actually behaves in contrast to how it is designed to act. The most basic of these factors is the network layer at which the load balancing decision is being made.

THE IMPACT of PROTOCOL

There are two layers at which applications are commonly load balanced: TCP (transport) and HTTP (application). The layer at which load balancing is performed has a profound effect on the architecture and capabilities.

Layer 4 (Connection-oriented) Load Balancing

When load balancing at layer 4 you are really load balancing at the TCP or connection layer. This means that a connection (user) is bound to the server chosen on the initial request. Basically a request arrives at the load balancer and, based on the algorithm chosen, is directed to a server. Subsequent requests over the same connection will be directed to that same server. Unlike Layer 7-related load balancing, in a layer 4 configuration the algorithm is usually the routing mechanism for the application.
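A minimal sketch of that connection binding, with illustrative names and data structures rather than any product's implementation: the configured algorithm (round robin or least connections here) is consulted once, when the connection arrives, and every request on that connection then lands on the same server.

import itertools

servers = ["app-1", "app-2", "app-3"]
round_robin = itertools.cycle(servers)
active_connections = {s: 0 for s in servers}   # open connections per server
connection_table = {}                          # connection id -> chosen server

def pick_server(algorithm):
    if algorithm == "least_connections":
        return min(active_connections, key=active_connections.get)
    return next(round_robin)                   # default: round robin

def on_new_connection(conn_id, algorithm="round_robin"):
    server = pick_server(algorithm)
    connection_table[conn_id] = server
    active_connections[server] += 1
    return server

def on_request(conn_id):
    # Layer 4 makes no per-request decision; the binding made at connect time wins.
    return connection_table[conn_id]

on_new_connection("203.0.113.9:49152")
print(on_request("203.0.113.9:49152"))         # same server for every request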
Layer 7 (Connection-oriented) Load Balancing

When load balancing at Layer 7 in a connection-oriented configuration, each connection is treated in a manner similar to Layer 4 load balancing, with the exception of how the initial server is chosen. The decision may be based on an HTTP header value or cookie instead of simply relying on the load balancing algorithm. In this scenario the decision being made is which pool of resources to send the connection to. Subsequently the load balancing algorithm chosen will determine which resource within that pool will be assigned. Subsequent requests over that same connection will be directed to the server chosen. Layer 7 connection-oriented load balancing is most useful in a virtual hosting scenario in which many hosts (or applications) resolve to the same IP address.

Layer 7 (Message-oriented) Load Balancing

Layer 7 load balancing in a message-oriented configuration is the most flexible in terms of the ability to distribute load across pools of resources based on a wide variety of variables. This can be as simple as the URI or as complex as the value of a specific XML element within the application message. Layer 7 message-oriented load balancing is more complex than its Layer 7 connection-oriented cousin because a message-oriented configuration also allows individual requests – over the same connection – to be load balanced to different (virtual | physical) servers. This flexibility allows the scaling of message-oriented protocols such as SIP that leverage a single, long-lived connection to perform tasks requiring different applications. This makes message-oriented load balancing a better fit for applications that also provide APIs, as individual API requests can be directed to different pools based on the functionality they are performing, such as separating out requests that update a data source from those that simply read from a data source. Layer 7 message-oriented load balancing is also known as “request switching” because it is capable of making routing decisions on every request even if the requests are sent over the same connection. Message-oriented load balancing requires a full proxy architecture, as it must be the endpoint to the client in order to intercept and interpret requests and then route them appropriately.

OTHER FACTORS to CONSIDER

If that were not confusing enough, there are several other factors to consider that will impact the way in which load is actually distributed (as opposed to the theoretical behavior based on the algorithm chosen).

Application Protocol

HTTP 1.0 acts differently than HTTP 1.1. Primarily the difference is that HTTP 1.0 implies a one-to-one relationship between a request and a connection. Unless the HTTP Keep-Alive header is included with an HTTP 1.0 request, each request will incur the processing costs to open and close the connection. This has an impact on the choice of algorithm because there will be many more connections open and in progress at any given time. You might think there’s never a good reason to force HTTP 1.0 if the default is more efficient, but consider the case in which a request is for an image – you want to GET it and then you’re finished. Using HTTP 1.0 and immediately closing the connection is actually better for the efficiency and thus capacity of the web server because it does not maintain an open connection waiting for a second or third request that will not be forthcoming.
An open, idle connection that will eventually simply time out wastes resources that could be used by some other user. Connection management is a large portion of resource consumption on a web or application server, so anything that increases the number and rate of open connections decreases the overall capacity of each individual server, making it less efficient. HTTP 1.1 is standard (though unfortunately not ubiquitous) and reuses client-initiated connections, making it more resource efficient but introducing questions regarding architecture and algorithmic choices. Obviously, if a connection is reused to request both an image resource and a data resource, but you want to leverage a more fault-tolerant and efficient scale design using scalability domains, this will impact your choice of load balancing algorithms and the way in which requests are routed to designated pools.

Configuration Settings

A little-referenced configuration setting in all web and application servers (and load balancers) is the maximum number of requests per connection. This impacts the distribution of requests because once the configured maximum number of requests has been sent over the same connection it will be closed and a new connection opened. This is particularly impactful on AJAX and other long-lived connection applications. The choice of algorithm can have a profound effect on availability when coupled with this setting.

For example, consider an application that performs an AJAX update on a regular interval. The requests made by that update require access to session state, which implies some sort of persistence is required. The first request determines the server, and a cookie (as one method) subsequently ensures that all further requests for that update are sent to the same server. Upon reaching the maximum number of requests, the load balancer must re-establish that connection – and because of the reliance on session state it must send the request back to the original server. In the meantime, however, another user has requested a resource and been assigned to that same server, and that user has caused the server to reach its maximum number of connections (also configurable). The original user is unable to be reconnected to his session, and regardless of whether the request is load balanced to a new resource or not, “availability” is effectively lost, as the current state of the user’s “space” is now gone. You’ll note that a combination of factors are at work here: (1) the load balancing algorithm chosen, (2) configuration options, and (3) application usage patterns and behavior.

IT JUST ISN’T ENOUGH to KNOW HOW the ALGORITHMS BEHAVE

This is the area in which devops is meant to shine – in the bridging of that gap across applications and the network, in understanding how the two interact, integrate, and work together to achieve high-scalability and well-performing applications. Devops must understand the application, but they must also understand how load balancing choices, configurations, and options impact the delivery and scalability of that application, and vice versa. The behavior of an application and its users can impact the way in which a load balancing algorithm performs in reality rather than theory, and the results are often very surprising. Round-robin load balancing is designed to equally distribute requests across a pool of resources, not workload.
Thus, over a period of time, one application instance in a pool of resources may become overloaded and either become unavailable or exhibit errors that appear only under heavy load, while other instances in the same pool may be operating normally and under nominal load conditions. You certainly need to have a basic understanding of load balancing algorithms, especially moving forward toward elastic applications and cloud computing. But once you understand the basics you really need to start examining the bigger picture and determining the best set of options and configurations for each application. This is one of the reasons testing is so important when designing a scalable architecture – to uncover the strange behavior that can often only be discovered under heavy load, and to examine the interaction of all the variables and how they impact the actual behavior of the load balancer at run time.
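A tiny simulation makes that closing point concrete. The request costs below are made up, but they show how round robin can hand each server an identical number of requests while piling most of the actual work onto one of them.

import itertools

work_done = {"app-1": 0, "app-2": 0, "app-3": 0}   # accumulated work units
rotation = itertools.cycle(work_done)

# Alternating cheap requests (static assets) and an expensive one (a report).
request_costs = [1, 1, 20] * 4

for cost in request_costs:
    work_done[next(rotation)] += cost

# Each server handled exactly 4 requests, but the workload split is 4 / 4 / 80.
print(work_done)

That 4/4/80 split is exactly the unevenly distributed load the article warns about, and it only shows up when request costs vary - which is why testing under realistic load matters.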
Ode to FirePass

A decade ago, remote VPN access was a relatively new concept for businesses; it was available only to a select few who truly needed it, and it was usually over a dial-up connection. Vendors like Cisco, Check Point, and Microsoft started to develop VPN solutions using IPsec, one of the first network-layer security protocol suites, along with RADIUS servers. At first, organizations had to launch the dial-up connection and enter the pertinent information manually, but soon client software was offered as a package. This client software had to be installed, configured, and managed on the user’s computer.

As high-speed broadband became a household norm and SSL/TLS matured, the SSL VPN arrived, allowing secure connections via a browser-based environment. Client pre-installation and management hassles were eliminated; instead, the masses now had secure access to corporate resources with just a few browser components and an appliance in the data center. These early SSL VPNs, like the first release of F5’s FirePass, offered endpoint checks and multiple modes of access depending on user needs. At the time, most SSL VPNs were limited in areas like overall performance, logins per second, concurrent sessions/users, and in some cases, throughput. Organizations that offered VPN extended it to executives, frequent travelers, and IT staff, and it was designed to provide separate access for corporate employees, partners, and contractors over the web portal. But these organizations were beginning to explore company-wide access since most employees still worked on-site.

Today, almost all employees have multiple devices, including smartphones, and most companies offer some sort of corporate VPN access. By 2015, 37.2 percent of the worldwide workforce will be remote and therefore mobile—that’s 1.3 billion people. Content is richer, phones are faster, and bandwidth is available—at least via broadband to the home. Devices need to be authenticated and securely connected to corporate assets, making a high-performance Application Delivery Controller (ADC) with unified secure access a necessity. As FirePass is retired, organizations will have two ADC options with which to replace it: F5 BIG-IP Edge Gateway, a standalone appliance, and BIG-IP Access Policy Manager (APM), a module that can be added to BIG-IP LTM devices. Both products are more than just SSL VPNs—they’re the central policy control points that are critical to managing dynamic data center environments.

A Little History

F5’s first foray into the SSL VPN realm was with its 2003 purchase of uRoam and its flagship product, FirePass. Although the market was still small, Infonetics Research predicted that SSL VPN sales would swell from around $25 million [in 2002] to $1 billion by 2005/6, and the former META Group forecast that SSL-based technology would be the dominant method for remote access, with 80 percent of users utilizing SSL by 2005/6. They were right—SSL VPN did take off. Using technology already present in web browsers, SSL VPNs allowed any user from any browser to type in a URL and gain secure remote access to corporate resources. There was no full client to install—just a few browser control components or add-ons to facilitate host checks and, often, SSL-tunnel creation. Administrators could inspect the requesting computer to ensure it met certain levels of security, such as antivirus software, a firewall, and client certificates. Like today, there were multiple methods to gain encrypted access.
There was (and still is) the full layer-3 network access connection; a port forwarding or application tunnel–type connection; or simply portal web access through a reverse proxy.

SSL VPNs Mature

With more enterprises deploying SSL VPNs, the market grew and FirePass proved to be an outstanding solution. Over the years, FirePass has led the market with industry firsts like the Visual Policy Editor, VMware View support, group policy support, an SSL client that supported QoS (quality of service) and acceleration, and integrated support with third-party security solutions. Every year from 2007 through 2010, FirePass was an SC Magazine Reader Trust finalist for Best SSL VPN.

As predicted, SSL VPN took off in businesses; but few could have imagined how connected the world would really become. There are new types of tablet devices and powerful mobile devices, all growing at accelerated rates. And today, it’s not just corporate laptops that request access, but personal smartphones, tablets, home computers, televisions, and many other new devices that will have an operating system and IP address. As the market has grown, the need for scalability, flexibility, and access speed has become more apparent. In response, F5 began including the FirePass SSL VPN functionality in the BIG-IP system of Application Delivery Controllers, specifically, BIG-IP Edge Gateway and BIG-IP Access Policy Manager (APM). Each a unified access solution, BIG-IP Edge Gateway and BIG-IP APM are scalable, secure, and agile controllers that can handle all access needs, whether remote, wireless, mobile, or LAN. The secure access reins of FirePass have been passed to the BIG-IP system; by the end of 2012, FirePass will no longer be available for sale. For organizations that have a FirePass SSL VPN, F5 will still offer support for it for several years. However, those organizations are encouraged to test BIG-IP Edge Gateway or BIG-IP APM.

Unified Access Today

The accelerated advancement of the mobile and remote workforce is driving the need to support tens of thousands of concurrent users. The explosive growth of Internet traffic and the demand for new services and rich media content can place extensive stress on networks, resulting in access latency and packet loss. With this demand, the ability of infrastructure to scale with the influx of traffic is essential. As business policies change over time, flexibility within the infrastructure gives IT the agility needed to keep pace with access demands while security threats and application requirements constantly evolve. Organizations need a high-performance ADC to be the strategic point of control between users and applications. This ADC must understand both the applications it delivers and the contextual nature of the users it serves.

BIG-IP Access Policy Manager

BIG-IP APM is a flexible, high-performance access and security add-on module for either the physical or virtual edition of BIG-IP Local Traffic Manager (LTM). BIG-IP APM can help organizations consolidate remote access infrastructure by providing unified global access to business-critical applications and networks. By converging and consolidating remote access, LAN access, and wireless connections within a single management interface, and providing easy-to-manage access policies, BIG-IP APM can help free up valuable IT resources and scale cost-effectively. BIG-IP APM protects public-facing applications by providing policy-based, context-aware access to users while consolidating access infrastructure.
BIG-IP Edge Gateway

BIG-IP Edge Gateway is a standalone appliance that provides all the benefits of BIG-IP APM—SSL VPN remote access security—plus application acceleration and WAN optimization services at the edge of the network—all in one efficient, scalable, and cost-effective solution. BIG-IP Edge Gateway is designed to meet current and future IT demands, and can scale up to 60,000 concurrent users on a single box. It can accommodate all converged access needs, and on a single platform, organizations can manage remote access, LAN access, and wireless access by creating unique policies for each. BIG-IP Edge Gateway is the only ADC with remote access, acceleration, and optimization services built in. To address high-latency links, technologies like intelligent caching, WAN optimization, compression, data deduplication, and application-specific optimization ensure the user is experiencing the best possible performance, 2 to 10 times faster than legacy SSL VPNs. BIG-IP Edge Gateway gives organizations unprecedented flexibility and agility to consolidate all their secure access methods on a single device.

FirePass SSL VPN Migration

A typical F5 customer might have deployed FirePass a few years ago to support RDP virtual desktops, endpoint host checks, and employee home computers, and to begin the transition from legacy IPsec VPNs. As the global workforce evolved with its smartphones and tablets, so did IT's desire to consolidate its secure access solutions. Many organizations have upgraded their FirePass controller functionality to a single BIG-IP appliance. Migrating any system can be a challenge, especially when it is a critical piece of the infrastructure that global users rely on. Migrating security devices, particularly remote access solutions, can be even more daunting since policies and settings are often based on an identity and access management framework. Intranet web applications, network access settings, basic device configurations, certificates, logs, statistics, and many other settings often need to be configured on the new controller.

FirePass can make migrating to BIG-IP Edge Gateway or BIG-IP APM a smooth, fast process. The FirePass Configuration Export Tool, available as a hotfix (HF-359012-1) for FirePass v6.1 and v7, exports configurations into XML files. Device management, network access, portal access, and user information can all be exported to an XML file. Special settings like master groups, IP address pools, packet filter rules, VLANs, DNS, hosts, drive mappings, policy checks, and caching and compression are saved so an administrator can properly configure the new security device. It’s critical that important configuration settings are mapped properly to the new controller, and with the FirePass Configuration Export Tool, administrators can deploy the existing FirePass configurations to a new BIG-IP Edge Gateway device or BIG-IP APM module. A migration guide will be available shortly.

SSL VPNs like FirePass have helped pave the way for easy, ubiquitous remote access to sensitive corporate resources. As the needs of the corporate enterprise change, so must the surrounding technology tasked with facilitating IT initiatives. The massive growth of the mobile workforce and their devices, along with the need to secure and optimize the delivery of rich content, requires a controller that is specifically developed for application delivery. Both BIG-IP Edge Gateway and BIG-IP APM offer all the SSL VPN functionality found in FirePass, but on the BIG-IP platform.
ps

Resources:
2011 Gartner Magic Quadrant for SSL VPNs
F5 Positioned in Leaders Quadrant of SSL VPN Magic Quadrant
SOL13366 - End of Sale Notice for FirePass
SOL4156 - FirePass software support policy
Secure Access with the BIG-IP System (whitepaper)
FirePass to BIG-IP APM Migration Service
F5 FirePass to BIG-IP APM Migration Datasheet
FirePass Wiki Home
Audio Tech Brief - Secure iPhone Access to Corporate Web Applications
In 5 Minutes or Less - F5 FirePass v7 Endpoint Security
Pete Silva Demonstrates the FirePass SSL-VPN

Technorati Tags: F5, infrastructure 2.0, integration, cloud connect, Pete Silva, security, business, education, technology, application delivery, intercloud, cloud, context-aware, automation, web, internet

Kids and their Dot Coms
My daughter likes to glue pictures in a composition notebook – Disney Princesses, giraffes, fairies, Barbie scenes, herself, and many other things a kindergartener gravitates towards. Usually she asks for certain characters or a particular animal and I go find and print. This weekend, however, as she was asking for some Barbie pictures and a basketball player, she specifically said, ‘you need to go to barbie.com and basketballplayer.com to get the pictures.’ Oh really? She’s known about ‘dot com’ for a while, especially buyslushymagic.com, but this was one of the first times she’s requested, rather instructed, me to visit specific sites for her crafts. She is good at a keyboard and knows how to search for YouTube videos, which is becoming the norm for 5-year-olds.

I totally understand that each generation, due to whatever technological advancements, grows up in a different era with different ways of doing things, and many conversations start with, ‘When I was growing up…’ or ‘When I was a kid…’ We didn’t have TV; we only had black & white TV; we had to get up to change the channel on our TV; we didn’t have cable TV; we had square TVs; we didn’t have HDTV; our TV wasn’t hooked up to the internet; we didn’t have streaming movies to the TV; and soon it’ll be, ‘we didn’t have TVs that watched us when I was a kid.’

It’s fun to live during a time of so much technology innovation and growth and to work for a company, F5, that is an integral part of how it all works. And as is usually the case when I’m contemplating some nostalgia-related topic, I came across this infographic: Isn’t it fun to look back and remember what we were doing last century?

ps

CloudFucius Says: AAA Important to the Cloud
While companies certainly see a business benefit to a pay-as-you-go model for computing resources, security concerns always seem to appear at the top of surveys regarding cloud computing. These concerns include authentication, authorization, and accounting (AAA) services; encryption; storage; security breaches; regulatory compliance; location of data and users; and other risks associated with isolating sensitive corporate data. Add to this array of concerns the potential loss of control over your data, and the cloud model starts to get a little scary.

No matter where your applications live in the cloud or how they are being served, one theme is consistent: you are hosting and delivering your critical data at a third-party location, not within your four walls, and keeping that data safe is a top priority. Most early adopters began to test hosting in the cloud using non-critical data. Performance, scalability, and shared resources were the primary focus of initial cloud offerings. While this is still a major attraction, cloud computing has matured and established itself as yet another option for IT. More data—including sensitive data—is making its way to the cloud. The problem is that you really don’t know where in the cloud the data is at any given moment. IT departments are already anxious about the confidentiality and integrity of sensitive data; hosting this data in the cloud highlights not only concerns about protecting critical data in a third-party location but also role-based access control to that data for normal business functions.

Organizations are beginning to realize that the cloud does not lend itself to static security controls. Like all other elements within cloud architecture, security must be integrated into a centralized, dynamic control plane. In the cloud, security solutions must have the capability to intercept all data traffic, interpret its context, and then make appropriate decisions about that traffic, including instructing other cloud elements how to handle it. The cloud requires the ability to apply global policies and tools that can migrate with, and control access to, the applications and data as they move from data center to cloud—and as they travel to other points in the cloud.

One of the biggest areas of concern for both cloud vendors and customers alike is strong authentication, authorization, and encryption of data to and from the cloud. Users and administrators alike need to be authenticated—with strong or two-factor authentication—to ensure that only authorized personnel are able to access data. And the data itself needs to be segmented to ensure there is no leakage to other users or systems. Most experts agree that AAA services, along with secure, encrypted tunnels to manage your cloud infrastructure, should be at the top of the basic cloud services offered by vendors. Since data can be housed at a distant location where you have less physical control, logical control becomes paramount, and enforcing strict access to raw data and protecting data in transit (such as uploading new data) becomes critical to the business. Lost, leaked, or tampered data can have devastating consequences. Secure services based on SSL VPN offer endpoint security, giving IT administrators the ability to see who is accessing the organization and what the endpoint device’s posture is to validate against the corporate access policy.
Strong AAA services, L4 and L7 user Access Control Lists, and integrated application security help protect corporate assets and maintain regulatory compliance. Cloud computing, while quickly evolving, can offer IT departments a powerful alternative for delivering applications. Cloud computing promises scalable, on-demand resources; flexible, self-serve deployment; lower TCO; faster time to market; and a multitude of service options that can host your entire infrastructure, be a part of your infrastructure, or simply serve a single application.

And one from Confucius himself: I hear and I forget. I see and I remember. I do and I understand.

ps

BYOD and the Death of the DMZ
#BYOD #infosec It's context that counts, not corporate connections.

BYOD remains a topic of interest as organizations grapple not only technologically with the trend but politically, as well. There are dire warnings that refusing to support BYOD will result in an inability to attract and retain up-and-coming technologists, and that ignoring the problems associated with BYOD will eventually result in some sort of karmic IT event that will be painful for all involved. Surveys continue to tell us organizations cannot ignore BYOD. A recent ITIC survey indicated a high level of BYOD across the 550 companies polled globally: 51% of workers utilize smartphones as their BYOD devices; another 44% use notebooks and ultrabooks, while 31% of respondents indicated they use tablets (most notably the Apple iPad) and 23% use home-based desktop PCs or Macs.

It's here, it's now, and it's in the data center. The question is no longer "will you allow it" but "how will you secure/manage/support it"? It's that first piece – secure it – that's causing some chaos and confusion. Just as we discovered with cloud computing early on, responsibility for anything shared is muddled. When asked who should bear responsibility for the security of devices in BYOD situations, respondents offered a nearly equal split between company (37%) and end user (39%), with 21% stating it fell equally on both.

From an IT security perspective, this is not a bad split. Employees should be active participants in organizational security. Knowing is, as GI Joe says, half the battle, and if employees bringing their own devices to work are informed and understand the risks, they can actively participate in improving security practices and processes. But relying on end users for organizational security would be folly, and thus IT must take responsibility for the technological enforcement of security policies developed in conjunction with the business. One of the first and most important things we must do to enable better security in a BYOD (and cloudy) world is to kill the DMZ.

[Pause for apoplectic fits]

By kill the DMZ I don't mean physically dismantle the underlying network architecture supporting it – I mean logically. The DMZ was developed as a barrier between the scary and dangerous Internet and sensitive corporate data and applications. That barrier now must extend to inside the data center, to the LAN, where the assumption has long been that devices and users accessing data center resources are inherently safe. They are not (and probably never have been, really). Every connection, every request, every attempt to access an application or data within the data center must be treated as suspect, regardless of where it may have originated and without automatically giving certain devices privileges over others. A laptop on the LAN may or may not be BYOD, it may or may not be secure, it may or may not be infected. A laptop on the LAN is no more innately safe than a tablet or a smartphone.

SMARTER CONTROL

This is where the concept of a strategic point of control comes in handy. If every end user is funneled through the same logical tier in the data center regardless of network origination, policies can be centrally deployed and enforced to ensure appropriate levels of access based on the security profile of the device and user. By sharing access control across all devices, regardless of who purchased and manages them, policies can be tailored to focus on the application and the data, not solely on the point of origination.
While policies may trigger specific rules or inspections based on device or originating location, ultimately the question is who can access a given application and data, and under what circumstances? It's context that counts, not corporate connections. The question must be asked regardless of whether the attempt to access begins within the corporate network boundaries or not. Traffic coming from the local LAN should not be treated any differently than traffic entering via the WAN. The notion of "trusted" and "untrusted" network connectivity has simply been obviated by the elimination of wires and the rampant proliferation of malware and other destructive digital infections.

In essence, the DMZ is being – and must be – transformed. It's no longer a zone of inherent distrust between the corporate network and the Internet; it's a zone of inherent distrust between corporate resources and everything else. Its design and deployment as a buffer is still relevant, but only in the sense that it stands between critical assets and access by hook, crook, or tablet. The DMZ as we have known it is dead. Trust no one.

Referenced blogs and articles:
If Security in the Cloud Were Handled Like Car Accidents
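A minimal sketch of the kind of context-based decision described above, with entirely hypothetical attribute names and rules rather than any product's policy language: every access attempt, LAN or WAN, runs through the same checks on device posture, user, and application sensitivity.

def allow_access(request):
    device, user, app = request["device"], request["user"], request["app"]

    # Posture checks apply to every device, corporate-owned or not.
    if not device.get("disk_encrypted") or not device.get("av_current"):
        return "deny"

    # Sensitive applications require stronger authentication.
    if app["sensitivity"] == "high" and not user.get("mfa"):
        return "step-up-auth"

    # Unmanaged (BYOD) devices may view but not download sensitive data.
    if not device.get("managed") and app["sensitivity"] != "low":
        return "allow-read-only"

    return "allow"

print(allow_access({
    "device": {"managed": False, "disk_encrypted": True, "av_current": True},
    "user": {"id": "jdoe", "mfa": True},
    "app": {"name": "hr-portal", "sensitivity": "high"},
}))

Note that nothing in the decision depends on whether the request arrived over the LAN or the WAN - which is the point.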
Defense in Depth in Context

In the days of yore, a military technique called Defense-in-Depth was used to protect kingdoms, castles, and other locations where you might be vulnerable to attack. It's a layered defense strategy in which the attacker has to breach several layers of protection to finally reach the intended target. It allows the defender to spread their resources and not put all of the protection in one location. It's also a multifaceted approach to protection in that there are other mechanisms in place to help; and it's redundant, so if a component fails or is compromised, there are others ready to step in to keep the protection intact.

Information technology also recognizes this technique as one of the 'best practices' when protecting systems. The infrastructure and the systems it supports are fortified with a layered security approach. There are firewalls at the edge and, often, security mechanisms at every segment of the network. Circumvent one, and the next layer should net them. There is one little flaw with the Defense-in-Depth strategy - it is designed to slow down attacks, not necessarily stop them. It gives you time to mobilize a counter-offensive, and it's an expensive and complex proposition if you are an attacker. It's more of a deterrent than anything, and ultimately the attacker could decide that the benefits of continuing the attack outweigh the additional costs.

In the digital world, it is also interpreted as redundancy: place multiple iterations of a defensive mechanism within the path of the attacker. The problem is that the only way to increase the cost and complexity for the attacker is to raise the cost and complexity of your own defenses. Complexity is the kryptonite of good security, and what you really need is security based on context. Context takes into account the environment or conditions surrounding an event to make an informed decision about how to apply security. This is especially true when protecting a database.

Database firewalls are critical components to protecting your valuable data and can stop a SQL injection attack, for instance, in an instant. What they lack is the ability to decipher contextual data like user ID, session, cookie, browser type, IP address, location, and other metadata about who or what actually performed the attack. While a database firewall can see that a particular SQL query is invalid, it cannot decipher who made the request. Web application firewalls, on the other hand, can gather user-side information since many of their policy decisions are based on the user's context. A WAF monitors every request and response from the browser to the web application and consults a policy to determine if the action and data are allowed. It uses such information as user, session, cookie, and other contextual data to decide if it is a valid request.

Independent technologies that protect against web attacks or database attacks are available, but they have not been linked to provide unified notification and reporting. Now imagine if your database were protected by a layered, defense-in-depth architecture along with the contextual information to make informed, intelligent decisions about database security incidents. The integration of BIG-IP ASM with Oracle's Database Firewall offers the database protection that Oracle is known for and the contextual intelligence that is baked into every F5 solution. Unified reporting for both the application firewall and database firewall provides more convenient and comprehensive security monitoring.
Integration between the two security solutions offers a holistic approach to protecting the web and database tiers from SQL injection–type attacks. The integration gives you the layered protection many security professionals recognize as a best practice, plus the contextual information needed to make intelligent decisions about what action to take. This solution provides improved SQL injection protection to F5 customers and correlated reporting for richer forensic information on SQL injection attacks to Oracle database customers. It’s an end-to-end web application and database security solution to protect data, customers, and their businesses.

ps

Resources:
F5 Joins with Oracle to Offer Enhanced Security for Web-Based Database Applications
Security for Web-Based Database Applications Enhanced With F5 and Oracle Product Integration
Using Oracle Database Firewall with BIG-IP ASM
F5 Networks Adds To Oracle Database
Oracle Database Firewall
BIG-IP Application Security Manager
The “True Security Company” Red Herring
F5 Friday: Two Heads are Better Than One
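As a conceptual sketch only - the log fields and formats below are invented, not the actual BIG-IP ASM and Oracle Database Firewall integration - correlated reporting boils down to joining a database-layer alert with the web-layer request context that shares its session, so a blocked SQL statement can be tied back to a user, IP address, and URI.

waf_requests = [
    {"session": "abc123", "user": "jdoe", "ip": "203.0.113.9",
     "uri": "/search?q=shoes", "time": 1001},
]

db_alerts = [
    {"session": "abc123", "statement": "SELECT * FROM users WHERE 1=1 --",
     "verdict": "blocked", "time": 1002},
]

def correlate(alerts, requests):
    # Attach the WAF-observed user context to each database alert by session id.
    by_session = {r["session"]: r for r in requests}
    for alert in alerts:
        ctx = by_session.get(alert["session"], {})
        yield {**alert, "user": ctx.get("user"), "ip": ctx.get("ip"),
               "uri": ctx.get("uri")}

for event in correlate(db_alerts, waf_requests):
    print(event)

The database tier alone only knows that a bad statement arrived; joined with the web-tier context, the same event names a user, a device address, and the page that carried the injection.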