big data
Add a Data Collection Device to your BIG-IQ Cluster
Gathering and analyzing data helps organizations make intelligent decisions about their IT infrastructure. You may need a data collection device (DCD) to collect BIG-IP data so you can manage that device with BIG-IQ. BIG-IQ is a platform that manages your devices and the services they deliver. Let's look at how to discover and add a data collection device in BIG-IQ v5.2.

You can add a new data collection device to your BIG-IQ cluster so that you can start managing it using the BIG-IP device data. In addition to Event and Alert Log data, you can view and manage statistical data for your devices. From licensing to policies, traffic to security, you'll see it all from a single pane of glass. But you need a DCD to do that.

So, we start by logging in to a BIG-IQ. Then, under the System tab, go to BIG-IQ Data Collection and, under that, click BIG-IQ Data Collection Devices. The current DCD screen shows no devices in this cluster. To add a DCD, click Add. This brings us to the DCD Properties screen. In the Management Address field, we add the management IP address of the BIG-IP/DCD we want to manage. We then add the Admin username and password for the device. For Data Collection IP Address, we put the transport address, which is usually the internal Self-IP address of the DCD, and click Add. The process can take a little while as the BIG-IQ authenticates with the DCD and adds it to the BIG-IQ configuration. Once complete, you can see the device has been added successfully.

Now you'll notice that the DCD has been added but there are no Services at this point. To add Services, click Add Services. In this instance, we're managing a BIG-IP with multiple services, including Access Policies, so we're going to activate the Access services. The listener address already has the management address of the DCD populated, so we'll simply click Activate. Once activated, you can see that it is Active. When we go back to the Data Collection Devices page, we can see that the Access Services have been added and the activation worked.

Congrats! You've added a Data Collection Device! You can also watch a video demo of How to Add a data collection device to your BIG-IQ cluster.

ps

Related:
Lightboard Lesson: What is BIG-IQ?
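For readers who prefer to script this sort of onboarding, here is a rough sketch of what the same workflow could look like driven over REST with Python. It is illustrative only: the endpoint path and payload field names below are assumptions made for the sake of the sketch, not the documented BIG-IQ API, so treat the UI walkthrough above as the supported procedure.

```python
# Hypothetical sketch: add a DCD to a BIG-IQ cluster over REST.
# The path and body fields are assumptions for illustration, not the real API.
import requests

BIGIQ = "https://bigiq.example.com"

dcd = {
    "managementAddress": "10.1.1.10",   # management IP of the BIG-IP/DCD
    "transportAddress": "10.2.1.10",    # data collection (internal self-IP) address
    "userName": "admin",
    "password": "secret",
}

resp = requests.post(
    f"{BIGIQ}/mgmt/example/data-collection-devices",  # hypothetical endpoint
    json=dcd,
    auth=("admin", "secret"),
    verify=False,  # lab convenience only; validate certificates in production
    timeout=30,
)
resp.raise_for_status()
print("Add-DCD request accepted:", resp.status_code)
```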
Privacy for a Price

A few weeks ago, I went to my usual haircut place and after the trim at the register I presented my loyalty card. You know, the heavy paper ones that either get stamped or hole-punched for each purchase. After a certain number of paid visits, you receive a free haircut. I presented the card, still in the early stages of completion, for validation and the manager said I could convert the partially filled card to their new system. I just had to enter my email address (and some other info) in the little kiosk thingy. I declined, saying, 'Ah, no thanks, enough people have my email already and don't need yet another daily digest.' He continued, 'Well, we are doing away with the cards and moving all electronic so...' 'That's ok,' I replied, 'I'll pay for that extra/free haircut to keep my name off a mailing list.'

This event, of course, got me thinking about human nature and how we will often give up some privacy for either convenience or something free. Imagine a stranger walking up to you and asking for your name, address, email, birthday, income level, favorite color and shopping habits. Most of us would tell them to 'fill in the blank'-off. Yet, when a Brand asks for the same info but includes something in return - free birthday dinner, discounted tickets, coupons, personalized service - we typically spill the beans.

Infosys recently conducted a survey which showed that consumers worldwide will certainly share personal information to get better service from their doctors, bank and retailers; yet, they are very sensitive about how they share. Today's digital consumers are complicated and sometimes suspicious about how institutions use their data, according to the global study of 5,000 digitally savvy consumers. They also created an infographic based on their findings. Overall they found:

82 percent want data mining for fraud protection, and will even switch banks for more security
78 percent are more likely to buy from retailers with targeted ads, while only 16 percent will share their social profile
56 percent will share personal and family medical history with doctors

...and specific to retail:

To know me is to sell to me: Three quarters of consumers worldwide believe retailers currently miss the mark in targeting them with ads on mobile apps, and 72 percent do not feel that online promotions or emails they receive resonate with their personal interests and needs.
To really know me is to sell me even more: A wide majority of consumers (78 percent) agree that they would be more likely to purchase from a retailer again if they provided offers targeted to their interests, wants or needs, and 71 percent feel similarly if offered incentives based on location.
Catch-22 for retailers? While in principle shoppers say they want to receive ads or promotions targeted to their interests, just 16 percent will share social media profile information. Lacking these details could make it difficult for retailers to deliver tailored digital offers.

Your data is valuable and comes with a price. While many data miners are looking to capitalize on our unique info, you can always decline. Yes, it is still probably already gathered up somewhere else; yes, you will probably miss out on some free or discounted something; yes, you will probably see annoying pop-up ads on that free mobile app/game; and yes, you might feel out of the loop. But it was still fun to be in some control over my own info leaks.

ps

Related:
Path pledges to be ad-free: Will consumers pay for their privacy?
What Would You Pay for Privacy?
Paying for privacy: Why it's time for us to become customers again
Consumers Worldwide Will Allow Access To Personal Data For Clear Benefits, Says Infosys Study
Engaging with digital consumers: Insights from Infosys survey [Infographic]
Parking Ticket Privacy
Invasion of Privacy - Mobile App Infographic Style
'Radio Killed the Privacy Star' Music Video?
The Internet of Sports

Did you see what the NFL is doing this year with sensors? Earlier this month they announced a partnership with Zebra Technologies, a company that provides RFID chips for applications from 'automotive assembly lines to dairy cows' milk production.' This season there will be sensors in the players' shoulder pads which will track all their on-field movements. This includes player acceleration rates, top speed, length of runs, and even the distance between a ball carrier and a defender. Next year they'll add sensors for breathing, temperature and heart rate. More stats than ever, and it could change the game forever.

Imagine coaches being able to examine that data and instantly call a play based on it. Play by play. To me it somewhat takes away that 'feel' for the game flow, but having data to confirm or deny that feeling might make for exciting games. Maybe lots of 0-0 overtimes or a 70-0 blowout. Data vs. data. Oh, how I miss my old buzzing electric football game.

The yardsticks will have chips along with the refs, and all that data is picked up by 20 RFID receivers placed throughout the stadium. Those, in turn, are wired to a hub and server which processes the data. Data is transmitted to the receivers 25 times a second, and the quarter-sized sensors use a typical watch battery. The data goes to the NFL 'cloud' and is available in seconds. The only thing without a sensor is the ball. But that's probably coming soon since we already have the 94Fifty sensor basketball.

And we've had the NASCAR RACEf/x for years, and this year they are going to track every turn of the wrench with RFID tracking in the pits and sensors on the crew. Riddell has impact sensors in their helmets to analyze, transmit and alert if an impact exceeds a predetermined threshold. They can measure the force of an NBA dunk; they can recognize the pitcher's grip and figure out the pitch; there's a bat sensor that can measure impact to the ball, the barrel angle of swings, and how fast the batter's hands are moving; and they are tracking soccer player movement in Germany. Heck, many ordinary people wear sensor-infused bracelets to track their activity.

We've come a long way since John Madden sketched over a telestrator years ago, and with 300-plus-pound players running around with sensors, this is truly Big Data. It also confirms my notion that the IoT should really be the Internet of Nouns - the players, the stadiums and the yardsticks.

ps

Related:
Player-tracking system will let NFL fans go deeper than ever
Fantasy footballers and coaches rejoice—NFL players to wear RFID tags
More sensors are coming to professional sports, but research outpaces business models
Why This Nascar Team Is Putting RFID Sensors On Every Person In The Pit
Impact Sensors: Riddell InSite Impact Response System
Fastpitch Softball League Adds Swing Sensors to its Gear
Play Ball!

...Oh Wait, Let Me Check the Stat-Cloud First!

It is like an SAT question: the Cincinnati Reds' Billy Hamilton has a 10.83-foot lead off first base, can hit a top speed of 21.51 mph and clocked a jump of 0.49 seconds. If the Milwaukee Brewers catcher took 0.667 seconds to get the ball out of his glove to throw to second and the ball is traveling at 78.81 mph, is Hamilton safe or out?

A few weeks ago I wrote about the Internet of Sports, and I can't believe I missed this one. But with the MLB playoffs in full gear, I didn't want this to slip through the IoT cracks. Sports analytics has been around for a while but never to this degree. Just like the NFL, Major League Baseball is equipping stadiums with technologies that can track moving players, flying baseballs and accurate throws. Beyond the RBIs, hits and stolen bases that appear on the back of trading cards, new technology (and software) also gathers stats like pop-fly pursuit range or average ground ball response time.

Professional sports teams have always tracked their players' performance, and often such milestones are included in the player's contract - a bonus for so many games played, home runs hit or some other goal. With all this new detailed data, teams can adjust how they train and prepare for games, and even reassess player value for personnel moves like trades.

For the 2014 season, only 3 stadiums (Mets, Brewers, Twins) had the new Field f/x (Sportvision Inc.) system, but the league plans to have all 30 parks complete for the 2015 season. Field f/x can show data such as the angle of elevation of a batted ball, the highest point in its trajectory, and the distance covered and top speed attained by a player attempting to field a ball. Of course, all this can then be crunched for cool graphics during a replay.

Cameras, sensors and software are all now part of the game. So are data centers, clouds and high-speed links. All this data needs to be crunched somewhere, and more often it is in a cloud environment. Add to that the connection(s) to the fans and with the fans at the stadium. Levi's Stadium, for instance, has 1200 access points and an app that allows you to order food, watch instant replays and know which bathroom line is the shortest. Our sports stadiums are becoming data centers.

Announcer: Welcome to Exclusive Sponsor Data Center Field! Home of the Hypertext Transfer Protocols. Undefeated at home this year, the Prots look to extend their record and secure home field throughout the playoffs.

And if you were wondering, Hamilton was Safe.

ps

Related:
New Baseball Season Brings Tech to Track Player Skills
Major League Baseball brings new tech to the plate
Baseball All-Stars' Data Gets More Sophisticated With Field F/X
The Internet of Sports
Are You Ready For Some...Technology!!
Is IoT Hype For Real?
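For the curious, here is a back-of-the-envelope check of that opening question as a small Python sketch. Only the lead, top speed, jump, glove-to-release time and throw speed come from the article; the pitcher's delivery time, the runner's average-speed factor, the throw distance and the tag time are assumptions, and the verdict is very sensitive to them - which is exactly why these plays are decided by hundredths of a second.

```python
# Rough arithmetic for the stolen-base question; ASSUMED values are not from the article.
FT_PER_MPH = 5280 / 3600            # mph -> feet per second

# Runner's side
lead = 10.83                        # ft (from the article)
jump = 0.49                         # s  (from the article)
top_speed = 21.51 * FT_PER_MPH      # ~31.5 ft/s (from the article)
avg_speed = 0.90 * top_speed        # ASSUMED: averages ~90% of top speed over the run
runner_time = jump + (90.0 - lead) / avg_speed   # first to second base is 90 ft

# Ball's side
pitch_delivery = 1.40               # s, ASSUMED: pitcher's time to the plate
transfer = 0.667                    # s, glove-to-release (from the article)
throw_flight = 127.28 / (78.81 * FT_PER_MPH)     # home plate to second base, ~127 ft
tag = 0.15                          # s, ASSUMED: catch and apply the tag
ball_time = pitch_delivery + transfer + throw_flight + tag

print(f"runner: {runner_time:.2f} s, ball: {ball_time:.2f} s")
print("SAFE" if runner_time < ball_time else "OUT")   # with these guesses: SAFE, barely
```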
F5 Friday: I am in UR HTTP Headers Sharing Geolocation Data

#DNS #bigdata #F5 #webperf How'd you like some geolocation data with that HTTP request?

Application developers are aware (you are aware, aren't you?) that when applications are scaled using most modern load balancing services, the source IP address on application requests actually belongs to the load balancing service. Application developers are further aware that this means they must somehow extract the actual client IP address from somewhere else, like the X-Forwarded-For HTTP header. Now, that's pretty much old news. Like I said, application developers are aware of this already.

What's new (and why I'm writing today) is the rising use of geolocation to support localized (and personalized) content. To do this, application developers need access to the geographical location indicated by either GPS coordinates or IP address. In most cases, application developers have to get this information themselves. This generally requires integration with some service that can provide this information, despite the fact that infrastructure like BIG-IP and its DNS services already has it and has paid the price (in terms of response time) to get it. Which means, ultimately, that applications pay the performance tax for geolocation data twice - once on the BIG-IP and once in the application.

Why, you are certainly wondering, can't the BIG-IP just forward that information in an HTTP header just like it does the client IP address? Good question. The answer is that technically, there's no reason it can't. Licensing, however, is another story. BIG-IP includes, today, a database of IP addresses that locates clients geographically based on client IP address. The F5 EULA, today, allows customers to use this information for a number of purposes, including GSLB load balancing decisions, access control decisions with location-based policies, identification of threats by country, location blocking of application requests, and redirection of traffic based on the client's geographic location. However, all decisions had to be made on BIG-IP itself, and geographic information could not be shared or transmitted to any other device. A new agreement now allows customers an option to use the geolocation data outside of BIG-IP, subject to fees and certain restrictions. That means BIG-IP can pass State, Province, or Region geographic data to applications using an easily accessible HTTP header.

How does that work? Customers can now obtain a EULA waiver which permits certain off-box use cases. This allows customers to use the geolocation data included with BIG-IP in applications residing on a server or servers in an "off box" fashion. For example, location information may be embedded into an HTTP header or similar and then sent on to the server for it to perform some geolocation-specific action. Customers (existing or new) can contact their F5 sales representative to start the process of obtaining the waiver necessary to enable the legal use of this data in an off-box fashion.

All that's necessary from a technical perspective is to determine how you want to share the data with the application. For example, you (meaning you, BIG-IP owner, and you, application developer) will have to agree upon which HTTP header you want to use to share the data. Then voila! Developers have access to the data and can leverage it for existing or new applications to provide greater location-awareness and personalization.
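As a small illustration of the application side of that agreement, here is a minimal sketch assuming the two parties settled on a header named X-Geo-Region (the header name and the Flask framework are just examples for this sketch, not anything prescribed by F5 or BIG-IP):

```python
# Minimal sketch: an application reading a geolocation header inserted upstream.
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def localized_home():
    # The load balancing tier would populate this header; degrade gracefully if absent.
    region = request.headers.get("X-Geo-Region", "unknown")
    return f"Serving content localized for region: {region}"

if __name__ == "__main__":
    app.run()
```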
If your organization has a BIG-IP (and that's a lot of organizations out there), check into this opportunity to reduce the performance tax on your applications that comes from double-dipping into geolocation data. Your users (especially your mobile users) will appreciate it.
F5 Friday: Building a Proactive Threat Management Infrastructure One Service at a Time

#GDI #infosec #bigdata Personalization is usually the first application mentioned for big data, but security may be of even more value to the enterprise.

We (as in the corporate "we") recently postulated that it was "time to ratchet up the protection afforded users and the business by leveraging big data in a way that enables attacks to be prevented, not just deflected or avoided." Actually, it's well past time we applied the tremendous amount of information that is available to defending and protecting corporate assets. Security experts and pundits have long posited that a proactive approach to security is called for, that the reactive approach of the past is no longer sufficient to protect the business from compromise, from revenue loss, from infection, and from breaches.

One way we can enable a more proactive security strategy in the enterprise is to start enabling infrastructure with the intelligence necessary to make real-time decisions regarding the threat posture of every single connection. No more sampling, no more guessing, no more after-the-fact alerting from monitoring systems. This is increasingly important, as 98% of breaches documented by Verizon in its 2012 Data Breach Investigation Report stemmed from external agents - an increase of 6% over the prior year - with malware cited as contributing to over two-thirds of the 2011 caseload and 95% of all stolen data. Perhaps if the organizations involved had been able to identify that the connecting end-point was known to be infected or a known distributor of malware, many of the breaches might have been avoided.

A DELIVERY INTELLIGENCE ECOSYSTEM

Attempting to do just that is the reason F5 is building out an ecosystem that delivers intelligence to strategic infrastructure services. The goal is to leverage big data and cloud computing to provide key components of the context required to proactively make access decisions with respect to corporate resources. The first subscription-based service in the line-up is IP Intelligence, which provides updates on IP threats. The service draws on the expertise of a global threat-sensor network to detect malicious activity and IP addresses. Even when the BIG-IP device is behind a content delivery network (CDN) or other proxies, the IP Intelligence service can provide protection by looking at the real client IP addresses as logged within the X-Forwarded-For header to allow or block traffic from a CDN with threatening IPs.

The capability to detect a threat before it can launch an attack enhances perimeter security, including mitigating DoS attacks and preventing potential fraudulent transactions. The use of intelligent, behavioral and reputation-based context applied to connections enables protected applications to scale better and perform consistently, while increasing downstream device throughput and the ability to more efficiently evaluate those requests that are allowed past the network boundary. All BIG-IP systems will be able to take advantage of IP Intelligence via iRules, through a new command that queries the IP Intelligence service. A simple, easy-to-configure interface is also available in BIG-IP Application Security Manager (ASM) that includes the ability to whitelist IP addresses, because we all know what happens when the CEO is blocked.

The intelligence required to balance legitimate access needs from anywhere to corporate resources goes well beyond a simple reputation lookup, however.
It's not always enough to simply allow or deny access based on reputation, as it may be the case that an employee is ensconced in a meeting room, far from corporate IT, on a network from which an attack has previously been launched. Further evaluation may be necessary; a combination of client, user, reputation, and location may be needed to make a final decision to allow or deny. The whole is greater than the sum of its parts, and sometimes it is the context of the request - all relevant variables - that is necessary in order to intelligently make a decision.

BIG-IP enables this level of intelligence, allows operators to dig into all the factors in the equation, and make an intelligent decision based not only on pure data but on data balanced with risk and business requirements. It is the combination of employing intelligent inspection with IP Intelligence that further enables IT to proactively manage access while mitigating risk. It enables IT organizations to effectively deal with emerging threats and trends like BYOD and cloud computing with confidence. IP Intelligence enriches the already robust context-aware capabilities of the BIG-IP system and puts the ability to codify complex multi-variable policies into the hands of IT, where it belongs.

Additional Resources:
IP Intelligence Service – Datasheet
IP Intelligence Service – SlideShare Presentation
Dynamic Perimeter Security with IP Intelligence – White Paper
Total Eclipse of the Internet
Recognizing a Threat is the First Step Towards Preventing It
The Changing Security Threat Landscape Infographic
HashDos: 42% of IIS sites are still Vulnerable
Should You Be Concerned About the Other Anonymous?
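To make the multi-variable decision described above a little more concrete, here is a toy, vendor-neutral sketch of combining reputation, user and location into an allow/deny call. The categories, thresholds and whitelist are invented for illustration; this is not iRules code or the BIG-IP policy model.

```python
# Toy context-based access decision: reputation alone isn't the whole answer.
WHITELIST = {"203.0.113.7"}   # e.g., the CEO's home office -- never block these

def allow_request(client_ip, reputation_categories, user_group, geo_country):
    if client_ip in WHITELIST:
        return True
    # Hard block for the worst reputation categories.
    if {"botnet", "malware_distribution"} & reputation_categories:
        return False
    # Otherwise weigh the rest of the context: a suspicious network alone may be
    # acceptable for a known employee, but not for an anonymous user far afield.
    risk = 0
    if "scanner" in reputation_categories:
        risk += 1
    if user_group == "anonymous":
        risk += 1
    if geo_country not in {"US", "CA"}:
        risk += 1
    return risk < 2

print(allow_request("198.51.100.9", {"scanner"}, "employee", "US"))    # True
print(allow_request("198.51.100.9", {"scanner"}, "anonymous", "RU"))   # False
```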
Big Data Getting Attention

According to IBM, we generate 2.5 quintillion (2.5 followed by 17 zeros) bytes of data every day. In the last two years, we've created about 90% of the data we have today. Almost everything that's 'connected' generates data. Our mobile devices, social media interactions, online purchases, GPS navigators, digital media, climate sensors and even this blog, to name a few, add to the pile of big data that needs to be processed, analyzed, managed and stored. And you think that saving all your movies, music and games is a challenge.

This data growth conundrum is 3- (or 4-, depending on who you talk to) dimensional, with Volume (an always increasing amount of data), Velocity (the speed back and forth) and Variety (all the different types - structured and unstructured). Veracity (trust and accuracy) is also included in some circles. With all this data churning, security and privacy only add to the concerns, but traditional tactics might not be adequate.

Recently the Cloud Security Alliance (CSA) listed the top 10 security and privacy challenges big data poses to enterprises and what organizations can do about them. After interviewing CSA members and security practitioners to draft an initial list of high-priority security and privacy problems, studying the published solutions and characterizing problems as challenges if the proposed solution(s) did not cover the problem scenarios, they arrived at the Top 10 Security & Privacy Challenges for Big Data. They are:

1. Secure computations in distributed programming frameworks
2. Security best practices for non-relational data stores
3. Secure data storage and transactions logs
4. End-point input validation/filtering
5. Real-Time Security Monitoring
6. Scalable and composable privacy-preserving data mining and analytics
7. Cryptographically enforced data centric security
8. Granular access control
9. Granular audits
10. Data Provenance

The Expanded Top 10 Big Data Challenges has evolved from the initial list of challenges to an expanded version that addresses new distinct issues:

Modeling: formalizing a threat model that covers most of the cyber-attack or data-leakage scenarios
Analysis: finding tractable solutions based on the threat model
Implementation: implanting the solution in existing infrastructures

The idea of highlighting these challenges is to bring renewed focus on fortifying big data infrastructures. The entire CSA Top 10 Big Data Security Challenges report can be downloaded here.

ps

Related:
CSA Lists Top 10 Security, Privacy Challenges of Big Data
CSA Releases the Expanded Top Ten Big Data Security & Privacy Challenges
Expanded Top Ten Big Data Security and Privacy Challenges (download link)
The Four V's of Big Data
Don't forget the network in this Big Data rush
When Big Data Meets Cloud Meets Infrastructure
How big data in the cloud can drive IT ops
F5 Friday: If Data is King then Storage Virtualization is the Castellan

The storage virtualization layer is another strategic point of control in the data center where costs can be minimized and resource utilization maximized.

In olden times of lore, the king may have been top dog, but it was the castellan through whom one had to go to gain an audience or access to any one of his holdings. The castellan was a position of immense power and influence in the medieval hierarchy, responsible for managing the king's castles and lands wherever they might be. In modern times, if data is king then storage virtualization must be the castellan: the system through which data is managed and accessed, regardless of location. It's a strategic point of control, an aggregation point, that affords organizations the opportunity to architecturally apply policies governing the storage and access of data. Tiering policies - whether across local storage systems or into the cloud - are best applied at a point in the architecture where aggregation via virtualization occurs.

Not unlike global and local application delivery systems, storage virtualization systems like F5 ARX "virtualize" resources and provide more seamless scalability and management of those resources. With global and local application delivery those resources are most often applications - but they also include infrastructure. With F5 ARX, those resources are storage systems - some costly, some not, some on-premise, some off. Aggregating those resources and presenting a "virtual" view of them to end-users means migration of data can be performed seamlessly, without disruption to the end-user. It's a service-oriented approach to storage architectures that affords agility and automation in carrying out operational tasks related to data management. That operational automation is increasingly important as the volume of data being stored, accessed and migrated to secondary and archive storage systems increases. Manual operations to archive, back up or replicate data would overwhelm storage professionals in the data center if they tried to keep pace with the explosive data growth experienced today.

DATA is KING

This isn't the first time we've heard the announcement: data is growing, astonishingly fast. Not just the data flowing over the wires, but data at rest, in storage. It's exponential growth caused in part by retention policies atop the reality of growing numbers of users creating more and more data. IBM calls out that "83 percent of CIOs have visionary plans that include business intelligence and analytics, followed by mobility solutions (74 percent) and virtualization (68 percent)." Cloud computing shot up in priority, selected by 45 percent more CIOs than in the 2009 study. But not everyone speaks the CIO's language. Translation - it's no longer about the applications, it's all about the data:

How to manage the data (there's more data than ever to manage)
How to leverage the data (information about consumers, markets, opportunities)
How to integrate the data (across applications and devices)
How to store the data (cloud, cloud, cloud)
How to access the data (especially from mobile devices)

-- Results of IBM's CIO Study — Data is King

There's more data, more often, that needs to be stored for more time. That means more disk, more network, and ultimately more costs. That's where ARX - storage/file virtualization - comes in.
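Before moving on, here is a toy illustration of what the age-based tiering policies mentioned above actually do: migrate cold files from an expensive tier to a cheaper one while preserving their layout. The paths and the 180-day threshold are made-up examples, and this is a bare sketch, not ARX's policy engine or anything resembling its configuration.

```python
# Toy age-based tiering: move files not accessed in AGE_DAYS from PRIMARY to ARCHIVE.
import shutil
import time
from pathlib import Path

PRIMARY = Path("/mnt/primary")    # fast, expensive tier (example path)
ARCHIVE = Path("/mnt/archive")    # cheap tier or cloud gateway mount (example path)
AGE_DAYS = 180
cutoff = time.time() - AGE_DAYS * 86400

for path in PRIMARY.rglob("*"):
    if path.is_file() and path.stat().st_atime < cutoff:
        dest = ARCHIVE / path.relative_to(PRIMARY)
        dest.parent.mkdir(parents=True, exist_ok=True)   # keep the relative layout
        shutil.move(str(path), str(dest))
        print(f"tiered {path} -> {dest}")
```

The point of a virtualization layer like the one described above is precisely that clients never see this move happen; the namespace they mount stays the same.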
THE TAMING of the DATA

Tiering, consolidation and simplified access strategies can make the menagerie of data threatening to overwhelm the data center more manageable in terms of both time and money. Operational automation is as imperative to storage as it is to application deployment as a tactic to address the increasing demands for flexibility and responsiveness across all of IT. Internal and external forces of change are driving IT organizations to get more efficient and manage more effectively the resources at their disposal in such a way as to minimize total cost of ownership as well as operational expense. Applying intelligent, adaptable and more flexible policies at strategic points of control within the data center architecture can alleviate many expenses associated with long-term management and control of data - both in flight (application delivery) and at rest (storage).

This data explosion is not limited to large enterprises. Mid-sized enterprises are deluged with data as well. Keeping up with growth rates threatens to overrun budgets and overwhelm staff. Traditionally enterprise-class solutions are becoming more and more necessary at mid- and even small-sized organizations to manage that data more efficiently.

In today's rapidly digitizing economy, small and mid-sized enterprises are dealing with exploding amounts of digital content and a growing range of data management challenges.
-- Richard Villars, VP of Storage and IT Executive Strategies at IDC

Unfortunately, even though mid-sized organizations may have enterprise-class needs, they are still constrained by mid-sized business budgets. Offerings capable of providing enterprise-class features and performance on a mid-sized budget are imperative to helping organizations address their burgeoning storage management needs. F5 now offers the ARX1500 and ARX2500 appliances, providing small and mid-sized enterprises with advanced data management capabilities at attractive price points, along with superior scalability and performance levels. Combined with F5 ARX Cloud Extender, which provides the means by which storage as a service can be leveraged in conjunction with storage virtualization and management solutions, the new ARX appliances offer a compelling solution for mid-sized organizations in need of a more holistic and effective data management strategy. The financial benefits of cloud computing combined with operational improvements from a comprehensive storage management strategy can provide a needed boost to enterprises of all sizes.

More on F5 ARX1500 and ARX2500:
F5 ARX 1500 and 2500
F5's New ARX Platforms Help Organizations Reap the Benefits of File Virtualization
Network World – F5 Rolls Out New File Virtualization Appliances
VAR Guy – F5 Launches New ARX Platforms for File Virtualization
Success Strategies for Storage Data Migration – IDC Analyst Q&A
ARX Series Datasheet
F5 Friday: F5 ARX Cloud Extender Opens Cloud Storage
F5 Friday: ARX VE Offers New Opportunities
F5 Friday: The More The Merrier
Disk May Be Cheap but Storage is Not
All F5 Friday Posts on DevCentral
Tiering is Like Tables, or Storing in the Cloud Tier
Swapping Rack Space for Rack Space
The Internet of Things and mobility driving HTTP and cloud

#IoT #bigdata #cloud The Internet of Things smells like opportunity for everyone. There is no industry that hasn't been touched by the notion of smart "things" enabling convenience or collaboration or control in every aspect of our lives. From healthcare to entertainment, from automotive to financials, the Internet of Things is changing the way we work, live and play.

That's the view from the consumer side, from the perspective of someone using the technology made available by <insert vendor/provider here>. But before that consumer could get their hands on the technology - and the inevitable accompanying "app" that comes with it - the provider/vendor had a lot of work cut out for them. Whether it was building out licensing and activation servers, a remote-based control application, or a data exchanging service, the provider of this "thing" and its supporting ecosystem of applications had to design and implement systems.

One of those systems is inevitably related to storage and retrieval of data. That's because a key consideration of nearly all "things" is that while they pack quite the computing punch (as compared, historically, to mobile devices), they are still constrained with respect to storage capabilities. These devices are generating more data than most people can fathom.

Globally, smart devices represented 21 percent of the total mobile devices and connections in 2013; they accounted for 88 percent of the mobile data traffic. In 2013, on average, a smart device generated 29 times more traffic than a non-smart device. Globally, there were nearly 22 million wearable devices (a sub-segment of the M2M category) in 2013 generating 1.7 petabytes of monthly traffic.
-- Cisco Visual Networking Index

Not only are the things themselves incapable of storing the vast quantities of data generated in the long term, providers/vendors are unlikely to have the spare storage capacity readily available to manage it. Even if they do, it's a rare organization that has in place the controls and multi-tenancy necessary to support storing such data with the privacy expected (and demanded) by consumers. Add to that the reality that they're small and portable and often dropped into the most inconvenient of places by their owners (or their owners' children), and the result is a need to ensure off-thing storage of data. Just in case.

What that means is the Internet of Things is driving the use of HTTP and solidifying its status as the "new TCP". Things communicate with applications to store and retrieve a variety of data. This communication is generally accomplished over the Internet - even though it may start out over a cellular network - and that means using the lingua franca of the web, HTTP. Additionally, HTTP is ubiquitous, the market for developers is saturated, and support for HTTP is built into most embedded systems today. So HTTP will be used to store and retrieve data for all these "things"; that seems a foregone conclusion. But what about storage and capacity?

Ready or Not

The question is whether the provider/vendor of the thing is going to take on the challenges of capacity and storage themselves or, as is increasingly the case, turn to the public cloud. The public cloud option has many benefits, particularly in that it's cheap, it already exists, and it's enabled with the APIs (accessible via HTTP) required to integrate it with a mobile app or thing.
It's already multi-tenant and supportive of the level of privacy required by consumers, and it grows on-demand without requiring staff to spend time racking more compute and storage in the data center. It seems likely, then, that not only will things and mobility continue to drive the dominance of HTTP but will also increase use of public cloud services.

Certainly there are industries and segments within the "things" and mobile app categories that make using public cloud unacceptable. My mobile banking and financial apps, for example, are not storing data anywhere but safely inside the (hopefully very secure) walls of their respective institutions. My Minecraft game on the Xbox 360, however, offers up "cloud" as a storage device, which means I can create new worlds til the cows come home. My smartpen synchronizes with Evernote. My iPhone is constantly nagging me to use iCloud because, well, it's available. With Google's acquisition of Nest, if its data and control applications weren't being run in Google's cloud, they probably will be in the future.

The reality is that many organizations are not architecturally ready from the network and operations perspective to take on the challenges that will be encountered by a foray into the Internet of Things. But to let it pass them by is also not acceptable. That may very well drive organizations to the cloud to avoid missing these early days of opportunity.
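To ground the storage pattern described in this post, here is a tiny sketch of a "thing" pushing each reading off-device to a cloud endpoint over HTTPS. The URL, token and payload fields are invented for illustration; the point is simply that it is plain HTTP doing the work.

```python
# Sketch: a constrained device persisting a sensor reading to a cloud API over HTTP(S).
import time
import requests

ENDPOINT = "https://storage.example-cloud.com/v1/devices/thermostat-42/readings"  # made up
TOKEN = "replace-with-a-real-api-token"

reading = {
    "timestamp": int(time.time()),
    "temperature_c": 21.4,
    "humidity_pct": 38,
}

resp = requests.post(
    ENDPOINT,
    json=reading,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=5,   # a thing on a flaky cellular link should fail fast and retry later
)
resp.raise_for_status()
print("stored, HTTP", resp.status_code)
```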
Interesting Data versus Useful Data

I'm sure there are few people who have escaped some form of reporting or metrics collection in their careers. Pondering my various roles and responsibilities for a moment, I think I've measured almost everything: SPAM emails caught, megabytes browsed, web-cache hits, IOPS, virtual machine density, tweets and re-tweets... In most cases, the value of certain metrics over others has been clear to me. As a contractor I've measured hours worked - that's how I got paid. As an engineer I've measured web usage as a means to justify organizational spend for campus access. In sales, I've worked with customers on creating metrics-based business cases to justify both investment and change in practice. However, I'll be honest with you, there were a few metrics that left me scratching my big bald head…

This brings me on to 'Big Data', "an all-encompassing term for any collection of data sets so large and complex that it becomes difficult to process them using traditional data processing applications." Sounds interesting, right? But who's actually analyzing this data to drive business value? What are their use cases? Or, from a different angle, who's misusing the data to get the answers they want, instead of the guidance they need? Jason Spooner of Social Media Explorer grabbed my attention with his article titled "BIG DATA IS USELESS: unless you have a plan for how to use it", highlighting that "There is no inherent value to data. The value comes from the application of what the data says." My thoughts exactly!

While I know I've come across a little on the skeptic side today, it's my endless questioning and curiosity that's kept me employable thus far. Consequently, I feel compelled to ask: is there a danger that Big Data is merely fueling the behavioral addicts among us? Is this an enterprise-grade obsessive compulsive disorder? Or maybe it's the skeptics like me that will inhibit future application. J. Edgar Hoover was heavily criticized over his fingerprint database… ok, not my finest example.

In my current role I'm focused heavily on Software-Defined Networking. Unfortunately, SDN has been largely driven by the desire to solve implementation issues - how quickly an organization can deploy a new network and improve its Time to Market for new applications and services. However, I believe that there is far more to gain from applying software-defined principles to solving post-deployment problems. Consider the benefits of a network topology driven by real-time data analysis. A network that can adapt based on its own awareness. Now that would be cool!

I appreciate that the control-plane abstraction driven by SDN is step one: allowing for the breaking away from management silos and steering towards a policy-driven network. However, there is still far more to gain from a software-defined approach. I, for one, look forward to the day when we see these data center network policies being driven by data analysis, both historical and real-time, to deliver better business services. Dare I call it DC Agility 2.0…?
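To sketch what "a network that can adapt based on its own awareness" might mean in practice, here is a deliberately simple toy: read per-link utilization from whatever telemetry you trust and rebalance path weights away from hot links. The data source, the threshold and the controller that would consume these weights are all imaginary placeholders.

```python
# Toy "analysis-driven" policy: shift new traffic away from links running hot.
def rebalance(link_utilization, hot_threshold=0.8):
    """link_utilization maps link name -> utilization in [0.0, 1.0]."""
    weights = {}
    for link, util in link_utilization.items():
        # Prefer cooler links; never starve a link entirely so probes still flow.
        weights[link] = max(0.1, 1.0 - util) if util < hot_threshold else 0.1
    total = sum(weights.values())
    return {link: round(w / total, 2) for link, w in weights.items()}

# Example: link B is saturated, so it receives only a small share of new flows.
print(rebalance({"A": 0.35, "B": 0.92, "C": 0.50}))   # {'A': 0.52, 'B': 0.08, 'C': 0.4}
```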