DevSecCon Singapore Survey Results and Trip Recap
A few weeks ago I was lucky enough to fly to Singapore, meet my colleagues there in person, and attend DevSecCon Singapore as a sponsor. For those unable to attend, here are the presentation slides from the 2-day event, and here are 3 takeaways from my colleague Keiichiro Nozaki. As you hopefully know by now, community is near and dear to the entire DevCentral team, and we are super excited about growing our offline engagement. There were 227 attendees from all over Asia (plus a few from Europe and North America), and everyone was highly engaged in content, conversation, and connecting. Working with the organizers of this international conference was a pleasure, and it is rejuvenating to meet like-minded folks who believe in the importance of sharing knowledge (rather than sharing sales/marketing pitches). F5 had such a great time that we're already looking forward to DevSecCon Seattle (September 16-17, 2019)! Here are some of the F5 crew who supported the conference:

Instead of sponsoring live captioning for this event, we provided diversity scholarship attendance. Diversity and inclusion are so important to the F5 culture, and it was a pleasure to meet these young professionals; we hope to see them again at a future event (and maybe also in a recruiter's inbox). Here's what our table looked like:

The robots and stickers were a hit, and folks liked how comfy our shirts are. We also gave away a $100 SGD Harvey Norman gift card each day, and handed out 8 spot prizes to folks who wore our Nerd Life t-shirt on Day 2 (Starbucks gift cards, since we're based in Seattle... and most of us run on caffeine). Here are the first two winners of the spot prizes:

For those of you who weren't there, the Pebble survey is our way of better understanding what challenges are at the top of attendees' minds. In exchange for telling us whether they were more on the DEV side, more on the SEC side, or OTHER, attendees got 3 pebbles of their corresponding color to vote on pain points/challenges. All 3 votes could go into a single jar or be spread across 2 or 3 jars. As promised, here are the Pebble survey results (with a hearty thank-you to the nearly 40% of attendees who participated). Highlighted are each persona's top answer and the most popular answer of the four available.

As you can see, most respondents self-identified as being on the Sec side of DevSec. We were unsurprised that 2 out of the 3 surveyed groups found organizational silos to be the biggest challenge, somewhat at odds with the Dev side of the house. So, what can we collectively do to help improve cultural issues within the DevOps world? I'd love to see some discussion in the replies below. Robert Haynes will be writing a series of articles addressing the pain points listed above and discussing some ways to get around/over those challenges. I'll update this post with links as they get published, but you'll probably see them sooner if you follow his Twitter account. Stay tuned!

On a personal note, I'd like to sincerely thank the many people who helped my jet-lagged self function at this event (surprise caffeine is wonderful caffeine), the many people who helped make the event friendly, accessible, and useful, and the table of near-strangers who welcomed me with open arms just after the conference ended by offering food and pouring me the last beer from their pitcher before we decided to enjoy the view from a rooftop (photo below).
I can't state strongly enough how much I love good community, regardless of where in the world it manifests. Shared food and drink have always struck me as the most fundamental level on which people can bond, and doing so after 2 days of sharing thoughts and conversations at a technical event was a treat. It was my first time at this event, and I had a great time with the wonderful folks who were there – from the organizers, who did a fantastic job, to the attendees, to our fellow sponsors – everyone was passionate about learning and sharing knowledge. I know we're all looking forward to the next community conference, and hoping to see you on the road. See you soon!

Features in BigIPReport
Considering all the brilliant members of DevCentral, I wanted to reach out to the community regarding potential features of my project. Are you using BigIP-Report? Want to make your voice heard regarding future releases? Then take the very short survey here: https://loadbalancing.se/2019/03/18/bigip-report-2019-survey/

If you are not, and you have F5 units in your company, check it out in the Code Share for a new world of configuration transparency and much better interactions with the developers in your organization. https://loadbalancing.se/bigip-report/

NetOps Meets DevOps - The State of Network Automation Survey
We want to understand your company's current application architectures and the adoption of continuous delivery and continuous deployment practices within your organization. Please answer some brief questions about:

- How important automation is to your application deployments
- Drivers for continuous delivery and continuous deployment (CD/CD)
- Current challenges and concerns with respect to network and security operations
- How your future initiatives are shaping your plans for network and security automation
- Usage of automation tools across public and private cloud

Please note that your responses will be confidential and reported only in aggregate. As a thank-you for participating, you will receive a copy of the final aggregate survey results, and a lucky participant will receive a $500 Amazon gift card. This survey is being administered by an independent research company on behalf of F5 and Red Hat. Your answers will be kept strictly confidential, and your feedback will be combined with the feedback from all respondents worldwide.

UPDATE: The report has now been finalized and can be found here: NetOps Meets DevOps - The State of Network Automation

Many thanks from the DevCentral Team!

Will the Cloud Soak Your Fireworks?
This week in the States, the Nation celebrates its Independence, and many people will be attending or setting off their own fireworks shows. In Hawaii, fireworks are shot off more during New Year's Eve than on July 4th, and there are even Daytime Fireworks now. Cloud computing is exploding like fireworks, with all the Oooooooo's and Ahhhhhhh's of what it offers, but the same groan, like the traffic jam home, might be coming to an office near you.

Recently, Ponemon Institute and cloud firm Netskope released a study, Data Breach: The Cloud Multiplier Effect, in which 613 IT and security professionals indicated that deploying resources in the cloud can triple the probability of a major breach. Specifically, for a data breach with 100,000+ customer records compromised, the cost would be just over $20 million, based on Ponemon Institute's May 2014 'Cost of a Data Breach'. With a breach of that scale, using cloud services may triple the risk of a data breach. It's called the 'cloud multiplier effect', and it translates to a 3% higher risk of a data breach for every 1% increase in the use of cloud services. So if you had 100 cloud services, you would only need to add 25 more to increase the possibility of a data breach by 75%, according to the study.

69% of the respondents felt that their organizations are not proactive in assessing what data is too sensitive to be stored in the cloud, and 62% said that the cloud services their companies are using are not fully tested to make sure they are secure. Most, almost three-quarters, believed they would not even be notified of a breach that involved lost or stolen intellectual property, business confidential information, or even customer data. Not a lot of confidence there. The security respondents felt around 45% of all software applications used by the company were cloud based, yet half of those had no IT visibility.

This comes at a time when many organizations are looking to the cloud to solve a bunch of challenges. At the same time, this sounds a lot like the cloud concerns of years past - security and risk - plus this is the perception of, not necessarily the reality of, what's actually occurring. It very well could be the case - with all the parts, loss of control, out in the wild, etc. - that the risk is greater. And I think that's the point. The risk. While cloud does offer organizations amazing opportunities, what these people are saying is that companies need to do a better job at the onset, during the evaluations, to understand the risk of the type(s) of data getting sent to the cloud along with the specific cloud service that holds it.

It has only been a few years that the cloud has been taken seriously, and from the beginning there have been grumblings about the security risks and loss of control. Some cloud providers have addressed many of those concerns, and organizations are subscribing to services or building their own cloud infrastructure. It is where IT is going. But still, as with any new technology bursting with light, color and noise, take good care where and when you light the fuse.

ps

Related:
- Cloud computing triples probability of major data breach: survey
- Cloud Could Triple Odds of $20M Data Breach
- Cloud Triples A Firm's Probability of Data Breach
- The future of cloud is hybrid ... and seamless
- CloudExpo 2014: Future of the Cloud
- Surfing the Surveys: Cloud, Security and those Pesky Breaches
- Cloud Bursting Reference Architecture

Results of My Completely Unscientific Internet Survey on the word "Network"
Q: What do you think when you hear the word "network"?

Which is about what I expected - an emphasis on the lower layers (2-3) of the stack, some crossover at the transport layer (4), and diminishing mindshare at the higher layers (5-7). That bodes somewhat poorly for technologies attempting to change "the network", because the focus ends up on the increasingly commoditized L2-3 space, and the specialized, value-added network services at L4-7 drop off the map.

There's an inflection point in the stack at L4 that changes the economy of scale, both technologically and financially, which is likely why cloud continues to focus on black-boxing network (L2-3) infrastructure and investing less and less up the stack, where the value lies but requires more compute and interference (operational overhead), such that the customer cost must necessarily increase to compensate for the investment in building the services. And then there's a whole bunch of technical reasons why that inflection point is important, and why it's really hard to commoditize (and extract the same economy of scale from) non-standardized traffic turned into general-purpose services.

In other words, we're just beginning to get to the good parts of cloud ... and SDN will gain faster maturation/adoption simply by virtue of being able to coattail on many of the same benefits and concepts.

Curing the Cloud Performance Arrhythmia
#cloud #webperf Maintaining Consistent Performance of Elastic Applications in the Cloud Requires the Right Mix of Services

Arrhythmias are most often associated with the human heart. The heart beats in a specific, known, and measurable rhythm to deliver oxygen to the entire body in a predictable fashion. Arrhythmias occur when the heart beats irregularly. Some arrhythmias are little more than annoying, such as PVCs, but others can be life-threatening, such as ventricular fibrillation. All arrhythmias should be actively managed.

Inconsistent application performance is much like a cardiac arrhythmia. Users may experience a sudden interruption in performance at any time, with no real rhyme or reason. In cloud computing environments, this is more likely, because there are relatively few, if any, means of managing these incidents. A 2011 global study on cloud conducted on behalf of Alcatel-Lucent showed that while security is still top of mind for IT decision makers considering cloud computing, performance – in particular reliable performance – ranks higher on the list of demands than security or costs.

THE PERFORMANCE PRESCRIPTION

One of the underlying reasons for performance arrhythmias in the cloud is a lack of attention paid to TCP management at the load balancing layer. TCP has not gotten any lighter during our migration to cloud computing, and while most enterprise implementations have long since taken advantage of TCP management capabilities in the data center to redress inconsistent performance, these techniques are either not available or simply not enabled in cloud computing environments.

Two capabilities critical to managing performance arrhythmias of web applications are caching and TCP multiplexing. These two technologies, enabled at the load balancing layer, reduce the burden of delivering content on web and application servers by offloading those tasks to a service specifically designed to perform them – and to do so fast and reliably. In doing so, the load balancer is able to process the 10,000th connection with the same vim and verve as the first. This is not true of servers, whose ability to process connections degrades as load increases, which in turn raises latency in response times that manifests as degraded performance for the end user.

Failure to cache HTTP objects outside the web or application server has a similar negative impact, due to the need to repetitively serve up the same static content to every user, chewing up valuable resources in a way that eventually burdens the server and degrades performance. Caching such objects at the load balancing layer offloads the burden of processing and delivering these objects, enabling servers to more efficiently process those requests that require business logic and data.

FAILURE in the CLOUD

Interestingly, customers are very aware of the disparity between cloud computing and data center environments in terms of the services available. In a recent article on this topic from Shamus McGillicuddy, Tom Hollingsworth, a senior network engineer with United Systems, an Oklahoma City-based value-added reseller (VAR), put it this way: "I want to replicate [in the cloud with] as much functionality [customers] have for load balancers, firewalls and things like that."

So why are cloud providers resistant to offering such services? Shamus offered some insight in the aforementioned article, citing maintenance and scalability as inhibitors to cloud provider offerings in the L4-7 service space.
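To make the two offload techniques above a bit more concrete, here is a minimal conceptual sketch in Python. It is not an F5 feature or configuration; the backend address, the in-memory cache, and the handle_request function are all hypothetical, and a real load balancer does this work in its own data path rather than in application code. The sketch simply shows the effect: cached objects never reach the servers, and requests that do reach them ride a small pool of reused TCP connections.

```python
# Conceptual sketch only: object caching plus backend connection reuse
# (TCP multiplexing) performed in front of the application servers.
import requests  # any HTTP client with keep-alive support works

BACKEND = "http://app.internal:8080"  # hypothetical pool member
object_cache = {}                     # static HTTP objects held at the LB tier

# A single Session keeps persistent TCP connections open to the backend,
# so the 10,000th client request does not pay for a fresh handshake.
backend_pool = requests.Session()

def handle_request(path: str, cacheable: bool) -> bytes:
    # 1. Cache hit: the web/application server never sees the request.
    if cacheable and path in object_cache:
        return object_cache[path]
    # 2. Cache miss or dynamic content: reuse a pooled backend connection
    #    instead of opening (and tearing down) a new one per client.
    response = backend_pool.get(f"{BACKEND}{path}", timeout=5)
    if cacheable and response.status_code == 200:
        object_cache[path] = response.content
    return response.content
```

A production cache would of course honor TTLs, Cache-Control headers, and eviction, but the effect is the same: static objects and connection setup costs are absorbed before they ever burden the servers.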
Additionally, the reality is that such offload technologies, while making application performance better and more consistent, also have the side effect of making more efficient use of the resources available to the application. This ultimately means a single virtual instance can scale more efficiently, which means the customer needs fewer instances to support the same user base. That translates into fewer instances for the provider, which negatively impacts their ARPU (Average Revenue Per User) – one of the key metrics used to evaluate the health and growth of providers today.

But the reality is that providers will need to start addressing these concerns if they are to woo enterprise customers and convince them the cloud is where it's at. Enabling consistent performance is a requirement, and a decade of experience has shown customers that consistent performance in a scalable environment requires more than simple load balancing – it requires the very L4-7 services that do not exist in provider environments today.

Referenced blogs & articles:
- Layer 4-7 cloud networking still scarce in IaaS market
- Understanding the market opportunity for carrier cloud services
- The Need for (HTML5) Speed
- SPDY versus HTML5 WebSockets
- QoS without Context: Good for the Network, Not So Good for the End user
- The Cloud Integration Stack
- HTML5 WebSockets: High-Speed Infrastructure Integration Bus?
- Cloud Delivery Model is about Ops, not Apps

Getting at the Heart of Security in the Cloud
#infosec #cloud CloudPassage digs a bit deeper into the issue of security and public cloud computing and finds some interesting results

Security is a pretty big word. It's used to represent everything from attack prevention to authentication and authorization to securing transport protocols. It's used as an umbrella term for such a wide variety of concerns that it has become virtually meaningless when applied to technology. For some time, purveyors of security studies have asked the market, "What's stopping you from adopting cloud?" Invariably, one of the most often cited show-stoppers is "security." Pundits raced to tell us this, but in no wise did they offer deeper insight into what, exactly, security meant.

So it was nice to see CloudPassage dig deeper into "security in the cloud" with a recent survey it conducted. You may recall that CloudPassage has a more than passing interest in cloud-based security, as its focus is cloud-based security with an emphasis on host-based firewalls. Published in February 2012, the survey sheds some light on what IT professionals consider most important with respect to public cloud security.

Unsurprisingly, "lack of perimeter defenses and/or network control" was the most often cited concern with respect to security in public cloud environments, with 25% of respondents indicating it was troubling. This response would appear to go hand in hand with the 12% who cited an inability to leverage "enterprise security tools" in public cloud environments. It is no secret that duplicating security architectures and processes in the cloud is not something we see done at this juncture. When you combine an inability to replicate security policy and process in the cloud – due to incompatibilities of infrastructure and software – with a less than robust security service offering in public cloud environments, the "lack of perimeter defenses and/or network control" answer being at the top of the list makes a lot of sense.

WHERE ARE WE GOING?

There are myriad surveys indicating that organizations are moving to public cloud computing despite these concerns, and one assumes that this means they are finding ways to resolve these issues. Many organizations are turning back the clock and taking advantage of agent-based (host-deployed) solutions to secure their assets in public cloud environments, which affords much better protection than nothing at all, and others still are leveraging the tried-and-true "checklist" method: manually securing servers based on best practices and corporate policy. Neither is optimal from an operational perspective. Neither is the use of cloud provider offered services such as Amazon security groups, because the result is a disjointed set of security policies across multiple environments. Policy languages and implementations – not to mention capabilities – vary widely from service to service. While the most basic of protections – firewalling – is the most compatible from the perspective of the ability to codify, the actual policy language will still differ. These disconnects can lead to gaps in security policies that leave the organization's assets open to attack. Inconsistent management and deployment processes spanning multiple environments leave open the possibility of human error and misconfiguration, an often cited cause of outages and breaches in general.
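As a purely hypothetical illustration of that consistency gap (not CloudPassage's product nor any specific provider's API), consider expressing a single access rule once and rendering it into two different policy dialects. The Rule structure, the function names, and the output shapes below are invented for the example; only the iptables-style rule string reflects real syntax.

```python
# Hypothetical sketch: one abstract firewall rule rendered into two policy
# dialects. The point is that each environment speaks its own language, so
# without a single source of truth the policies drift apart.
from dataclasses import dataclass

@dataclass
class Rule:
    protocol: str   # e.g. "tcp"
    port: int       # e.g. 443
    source: str     # CIDR allowed to connect, e.g. "10.0.0.0/8"

def to_iptables(rule: Rule) -> str:
    # Host-based firewall dialect (iptables-style accept rule).
    return (f"-A INPUT -p {rule.protocol} --dport {rule.port} "
            f"-s {rule.source} -j ACCEPT")

def to_security_group(rule: Rule) -> dict:
    # Cloud security-group-style ingress permission (shape is illustrative).
    return {
        "IpProtocol": rule.protocol,
        "FromPort": rule.port,
        "ToPort": rule.port,
        "IpRanges": [{"CidrIp": rule.source}],
    }

if __name__ == "__main__":
    https_from_internal = Rule(protocol="tcp", port=443, source="10.0.0.0/8")
    print(to_iptables(https_from_internal))
    print(to_security_group(https_from_internal))
```

Keeping those two renderings in sync by hand, across dozens of rules and multiple environments, is exactly the kind of process where gaps and misconfigurations creep in.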
Where we are today is sitting with a disjointed set of options from which to choose, and the need to somehow cobble together these disparate tools and services into a comprehensive security strategy capable of consistently securing servers, applications, and other resources from attack, exploitation, and breach. It is not really an inspiring view at the moment. Vendors and providers need to work toward a common language and common services that enable consistent replication – and thus enforcement – of the policies that govern access to and protection of all corporate resources, regardless of location. Whether that comes through standards initiatives, brokerage of APIs, or a better ability of organizations to deploy security solutions in both the data center and public cloud environments is not necessarily the question. The question is how enterprises can better address the specific security-related concerns they have regarding public cloud deployments in a way that minimizes the risk of misconfiguration or gaps in policy enforcement, while providing for operationally consistent processes that ensure the benefits of public cloud computing are not lost.

REVERSE INTEGRATION

One of the interesting trends we're seeing is demand for consistency in infrastructure across environments, and this will eventually drive demand for integration of what are today "cloud only" solutions back into data center components. Folks like CloudPassage and other cloud-focused systems that deliver host-based security coupled with a SaaS management model will eventually need to consider integration with "traditional" enterprise solutions as a means to deliver the consistency necessary to maintain cloud-related operational benefits.

Right now we're seeing a move toward preserving operational consistency through replication of policy from within the data center out to the cloud. But as cloud-hosted solutions continue to mature and evolve, one would expect to see the ability to replicate policy in the other direction – from the cloud back into the data center. This is no trivial task, as it requires the SaaS management component of such solutions to become what might be considered a policy broker; that is, their system becomes the point of policy creation and management, and it is through integration with both cloud and data center infrastructure that such policies are deployed, updated, and managed.

This is why the notion of API-enabled infrastructure, a.k.a. Infrastructure 2.0, is so important. It's not just about creating a vibrant and healthy ecosystem of solutions within the data center, but in the cloud and in between, as well. It is the glue that will integrate disparate systems and normalize policies across environments, and ultimately provide the market with a broader set of choices that can more efficiently and effectively address the specific security (and other operational) concerns that may be preventing organizations from fully embracing cloud computing.

Related blogs & articles:
- The Conflation of Pay-as-you-Grow Hardware with On-Demand
- The Conspecific Hybrid Cloud
- Committing to Overhead: Proceed With Caution.
- Why MDM May Save IT from Consumerization
- Block Attack Vectors, Not Attackers
- Get Your Money for Nothing and Your Bots for Free
- Dome9: Closing the (Cloud) Barn Door