The BIG-IP Application Security Manager Part 10: Event Logging
This is the last article in a 10-part series on the BIG-IP Application Security Manager (ASM). The first nine articles in this series are:

1. What is the BIG-IP ASM?
2. Policy Building
3. The Importance of File Types, Parameters, and URLs
4. Attack Signatures
5. XML Security
6. IP Address Intelligence and Whitelisting
7. Geolocation
8. Data Guard
9. Username and Session Awareness Tracking

In this, the final article in the BIG-IP ASM series, we will dive into the excitement and necessity of event logging. Throughout this ASM series, we've looked at log files from a distance but we never really talked about how to configure logging. I know...event logging might not be the most fascinating part of the ASM, but it's really important stuff! Before joining F5, I worked as a cyber threat analyst for a government organization, and I saw lots of cyber attacks against various systems. After an attack took place, my team and I would come in and study the attack vector, target points, etc., and it seemingly never failed that the system logs showed at least some (but many times all) of the malicious activity. If someone had just been reviewing the logs...

Logging Profiles

Logging profiles specify how and where the ASM stores requests for application data. In versions prior to 11.3.0, a logging profile is associated with a security policy, but beginning in 11.3.0 the logging profile is associated with a virtual server. I'm using version 11.3.0 in these examples, so this article will associate a logging profile with a virtual server. When choosing a logging profile, you have the option of creating your own or using one of the system-supplied profiles. In addition, you can log data locally, remotely, or both using the same logging profile. Keep in mind that the system-supplied profiles are configured to only log data locally. The logging profile specifies two things: where the log data is stored (locally, remotely, or both) and what data gets stored (all requests, illegal requests only, etc.).

Creating a Profile

To create a new logging profile, navigate to Security >> Event Logs >> Logging Profiles and click the "Create" button. I named this one "Test_Log_Profile" and enabled logging for Application Security. Notice that you can enable logging for Application Security, Protocol Security, and/or Denial of Service Protection. I enabled local storage and filtered for "Illegal Requests Only". Now that I have my logging profile created, I can associate it with the virtual server.

Configuring the Virtual Server

Navigate to Local Traffic >> Virtual Servers >> Virtual Server List and click on the virtual server with which you want to associate the logging profile. Notice the tabs across the top of the page...click on Security >> Policies. Now you can move the logging profile from "Available" to "Selected" in order to enable the profile for the virtual server. Also, notice that "Application Security Policy" is enabled and the name of the security policy is listed in the drop-down menu. If you enable more than one profile, the ASM will apply the settings of the top profile first and then work down the list.

Viewing Log Files

Log data is stored in the /var/log/asm folder on the BIG-IP. You can view the details of the log data using the command line or the GUI.

Command Line

To view the log data via the command line, use a command like "cat" or "tail". You can also use other standard commands like "grep" to filter results or "more" to view one page at a time.
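For anyone who prefers a script over a one-off shell command, here is a rough Python equivalent of that grep-style filtering. It is just a sketch: it assumes a log file sits at /var/log/asm and that the entries you care about contain a recognizable keyword such as "Illegal"; both the path and the keyword may need adjusting on your system.

```python
import sys

LOG_PATH = "/var/log/asm"   # path from the article; point this at the actual log file on your BIG-IP
KEYWORD = "Illegal"         # hypothetical marker; match whatever your entries actually contain

def filter_log(path: str, keyword: str) -> None:
    """Print only the log lines containing the keyword, much like `grep keyword path`."""
    try:
        with open(path, "r", errors="replace") as log:
            for line in log:
                if keyword in line:
                    sys.stdout.write(line)
    except FileNotFoundError:
        print(f"No log found at {path} -- check the path on your system", file=sys.stderr)

if __name__ == "__main__":
    filter_log(LOG_PATH, KEYWORD)
```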
GUI

To view the Application Security logs in the GUI, navigate to Security >> Event Logs >> Application >> Requests. You can click on any of the application requests, and the details will load in the bottom portion of the screen. You can view the Request Details, the actual HTTP Request, or the actual HTTP Response (if response logging is enabled in your logging profile). Response logging is often left disabled due to the large amount of data it would consume.

Remote Storage

The ASM provides the option of storing log data on a remote server. When configuring a logging profile, you can view the Advanced Configuration to enable remote storage and select one of three types. The first is "Remote", and this option specifies that the ASM will store all traffic on a remote logging server like syslog. The second is "Reporting Server", and this option specifies that the ASM will store all log data on a server using a preconfigured storage format. The third option is "ArcSight", and this option specifies that the ASM will store all log data on a remote server using predefined ArcSight settings for the logs (the log messages are in the Common Event Format; a small sketch of what a CEF message looks like follows at the end of this article).

Speaking of remote storage...a popular remote log management tool is Splunk. In fact, Splunk offers a specific F5 app that does a fantastic job of organizing and displaying log data in a way that is easy to understand and consume. If you need more information on the Splunk app for F5 log data, check out this article written by the one and only Jason Rahm...you'll be glad you did!

Well, that wraps things up for this article. It's been a fun ride through the internal workings of the BIG-IP ASM. I hope you have enjoyed this series as much as I have. Stay tuned for my next set of articles on the awesomeness that is DNS...see you soon!!

Update: Now that the article series is complete, I wanted to share the links to each article. If I add any more in the future, I'll update this list.

1. What is the BIG-IP ASM?
2. Policy Building
3. The Importance of File Types, Parameters, and URLs
4. Attack Signatures
5. XML Security
6. IP Address Intelligence and Whitelisting
7. Geolocation
8. Data Guard
9. Username and Session Awareness Tracking
10. Event Logging
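Circling back to the ArcSight option mentioned above: Common Event Format messages are simply a pipe-delimited header followed by space-separated key=value extensions. Here is a hand-rolled sketch of what one might look like; the field values are purely illustrative and are not the exact strings the ASM remote logger emits.

```python
def to_cef(vendor, product, version, signature_id, name, severity, **extensions):
    """Build a CEF:0 message: pipe-delimited header plus space-separated key=value pairs."""
    # Pipe characters inside header fields must be escaped per the CEF spec.
    header_fields = [vendor, product, version, signature_id, name, str(severity)]
    header = "|".join(f.replace("|", r"\|") for f in header_fields)
    extension = " ".join(f"{k}={v}" for k, v in extensions.items())
    return f"CEF:0|{header}|{extension}"

# Illustrative values only -- in practice the ASM fills these in for you.
msg = to_cef("F5", "ASM", "11.3.0", "200000001", "Illegal request", 7,
             src="203.0.113.9", request="/login.php", act="blocked")
print(msg)
# CEF:0|F5|ASM|11.3.0|200000001|Illegal request|7|src=203.0.113.9 request=/login.php act=blocked
```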
Ask the Expert – Are WAFs Dead?

Brian McHenry, Sr. Security Solution Architect, addresses the notion that Web Application Firewalls are dead and talks about what organizations need to focus on today when protecting their data and applications across a diverse environment. He discusses many of the current application threats, how to protect your data across hybrid environments and the importance of a security policy that is portable across many environments. Move on from the idea of a traditional WAF and embrace hybrid WAF architectures.

ps

Related:
The Death of WAF as We Know It
F5 Security Solutions

Technorati Tags: f5,waf,security,web application firewall,cloud,data,silva,video,hybrid
The Breach of Things

Yet another retailer has confessed that their systems were breached, and an untold number of victims join the growing list of those who have had their data stolen. This one could be bigger than the infamous Target breach. I wonder if some day we'll be referring to periods of time by the breach that occurred. 'What? You don't remember the Target breach of '13! Much smaller than the Insert Company Here Breach of 2019!' Or almost like battles of a long war. 'The Breach of 2013 was a turning point in the fight against online crime,' or some other silly notion.

On top of that, a number of celebrities' private photos, stored in the cloud (of course), were stolen. I'm sorry, but if you are going to take private pictures of yourself with something other than a classic Polaroid, someone else will eventually see them.

Almost everything seems breach'able these days. Last year, the first toilet was breached. The one place you'd think you would have some privacy has also been soiled. Add to that televisions, thermostats, refrigerators and automobiles. And even a person's info, stolen with a dangerous hug. Companies are sprouting up all over to offer connected homes where owners can control their water, temperature, doors, windows, lights and practically any other item, as long as it has a sensor. Won't be long until we see sensational headlines including 'West Coast Fridges Hacked...Food Spoiling All Over!' or 'All Eastern Televisions Hacked to Broadcast Old Gilligan's Island Episodes!' As more things get connected, the risks of a breach obviously increase.

The more I thought about it, I felt it was time to resurrect this dandy from 2012: Radio Killed the Privacy Star, for those who may have missed it the first time. Armed with a mic and a midi, I belt out, karaoke style, my music video 'Radio Killed the Privacy Star.' Lyrics can be found at Radio Killed the Privacy Star. Enjoy.

ps

Related
The Internet of Sports
Is IoT Hype For Real?
Internet of Things OWASP Top 10
Uncle DDoS'd, Talking TVs and a Hug
Welcome to the Phygital World
The DNS of Things

Technorati Tags: breach,things,iot,data,privacy,target,photos,f5,silva,security,video
Now Shipping the C2200 Chassis, the Smallest Model in the Viprion Series, the Industry's Only Chassis-Based ADC

F5 Networks Japan K.K. has announced the Viprion C2200, a new two-slot compact chassis in the Viprion series that extends the benefits of the F5 Synthesis architecture model. Alongside the existing mid-range C2400, the higher-end C4480, and the flagship C4800, the C2200 delivers the same functionality in a smaller, space-saving form factor at a more affordable price point.

Key points:
At 2RU (rack units), it is the smallest chassis in the Viprion series
Supported blades are the latest mid-range blades, the B2150 and B2250
Holds up to two blades, which means up to 40 vCMP virtual instances can be built
Supported software (TMOS) version is 11.5.0 or later

For more details, see the Viprion product page, which also includes datasheets with full specifications and a platform comparison chart.

With the Viprion C2200, organizations can add scalable processing power while retaining the ability to upgrade the system as their needs grow, delivering both the performance and the scale that business-critical application services require. Using F5's virtual Clustered Multiprocessing (vCMP®) technology, application services and underutilized application delivery controllers (ADCs) can be consolidated efficiently, providing the highest-density multi-tenant solution.

Even today, in customer environments where large infrastructure growth is not expected, there are many cases where an existing Viprion that can hold four or eight blades keeps running with only one or two. The C2200 extends the same scalability and virtualization story to customers with more modest capacity plans, letting them start smaller and with a lower initial investment. Please consider the new Viprion C2200!

The product is available to ship now. For more information, please contact F5 Networks Japan (https://interact.f5.com/JP-Contact.html) or your distributor.
Privacy for a Price

A few weeks ago, I went to my usual haircut place, and after the trim I presented my loyalty card at the register. You know, the heavy paper ones that either get stamped or hole-punched for each purchase. After a certain number of paid visits, you receive a free haircut. I presented the card, still in the early stages of completion, for validation, and the manager said I could convert the partially filled card to their new system. I just had to enter my email address (and some other info) in the little kiosk thingy. I declined, saying, 'Ah, no thanks, enough people have my email already and I don't need yet another daily digest.' He continued, 'Well, we are doing away with the cards and moving all electronic, so...' 'That's ok,' I replied, 'I'll pay for that extra/free haircut to keep my name off a mailing list.'

This event, of course, got me thinking about human nature and how we will often give up some privacy for either convenience or something free. Imagine a stranger walking up to you and asking for your name, address, email, birthday, income level, favorite color and shopping habits. Most of us would tell them to 'fill in the blank'-off. Yet, when a Brand asks for the same info but includes something in return – free birthday dinner, discounted tickets, coupons, personalized service – we typically spill the beans.

Infosys recently conducted a survey which showed that consumers worldwide will certainly share personal information to get better service from their doctors, banks and retailers; yet, they are very sensitive about how they share. Today's digital consumers are complicated and sometimes suspicious about how institutions use their data, according to the global study of 5,000 digitally savvy consumers. They also created an infographic based on their findings.

Overall they found:
82 percent want data mining for fraud protection, and will even switch banks for more security;
78 percent are more likely to buy from retailers with targeted ads, while only 16 percent will share their social profile;
56 percent will share personal and family medical history with doctors

...and specific to retail:
To know me is to sell to me: Three quarters of consumers worldwide believe retailers currently miss the mark in targeting them with ads on mobile apps, and 72 percent do not feel that online promotions or emails they receive resonate with their personal interests and needs
To really know me is to sell me even more: A wide majority of consumers (78 percent) agree that they would be more likely to purchase from a retailer again if they provided offers targeted to their interests, wants or needs, and 71 percent feel similarly if offered incentives based on location
Catch-22 for retailers? While in principle shoppers say they want to receive ads or promotions targeted to their interests, just 16 percent will share social media profile information. Lacking these details could make it difficult for retailers to deliver tailored digital offers

Your data is valuable and comes with a price. While many data miners are looking to capitalize on our unique info, you can always decline. Yes, it is still probably already gathered up somewhere else; yes, you will probably miss out on some free or discounted something; yes, you will probably see annoying pop-up ads on that free mobile app/game; and yes, you might feel out of the loop. But it was still fun to be in some control over my own info leaks.

ps

Related:
Path pledges to be ad-free: Will consumers pay for their privacy?
What Would You Pay for Privacy?
Paying for privacy: Why it's time for us to become customers again
Consumers Worldwide Will Allow Access To Personal Data For Clear Benefits, Says Infosys Study
Engaging with digital consumers: Insights from Infosys survey [Infographic]
Parking Ticket Privacy
Invasion of Privacy - Mobile App Infographic Style
'Radio Killed the Privacy Star' Music Video?

Technorati Tags: privacy,data,big data,mobile,loyalty,consumer,human,information,personal,silva,security,retail,financial
You Got a Minute?

Like most of us, I try to read the entire internet on a daily basis, but for some reason, these slipped through. They both came out in 2011, and I am sure the numbers have changed in many cases. For instance, the graphic shows 70+ domains registered every minute, while for Sept 3 (thus far for today), on average 78 per minute have been registered. For Twitter, the chart indicates 320 new accounts per minute, but my lookup today (if my math is correct) shows 94 new Twitter accounts every minute, with 546,000 (vs. 98,000+) tweets per minute. Regardless, the somewhat dated info is still mind-boggling, and it is always fun to see historical data.

Things that happen on the Internet every 60 Seconds, circa 2011.

And the products we use:

ps

Related:
60 Seconds – Things That Happen On Internet Every Sixty Seconds [Infographic]

Technorati Tags: data,60 seconds,stats,internet,web,social media,application delivery,silva
JSON versus XML: Your Choice Matters More Than You Think

Should the enterprise standardize on JSON or XML as their lingua franca for Web 2.0 integration? Or should they use both, as best fits the application? The decision impacts more than just integration – it resounds across the entire infrastructure and impacts everything from security to performance to availability of those applications.

One of the things a developer may or may not have control over when building enterprise applications is the format of the data used to communicate (integrate) with other applications. Increasingly, services external to the enterprise are very Web 2.0 in that they provide HTTP-based APIs for integration that exchange data in one of a couple of standard formats: XML and JSON. While RSS and ATOM are also seen as options in APIs, these are generally used only when the data being presented is frequently updated and of a "listing" style nature. XML and JSON are used to deliver more complex structures that do not fit well into the paradigm described by RSS- and ATOM-formatted information. Increasingly, libraries or toolkits are used to build interactive Web 2.0 style applications – XAJAX, SAJAX, Dojo, Prototype, script.aculo.us – and these, too, generally default to XML or JSON, though other formats are often supported as well.

So as you're building out that Web 2.0 style application and thinking about the API you're going to offer to make it easier for partners/customers/other departments to handle integration with their Web 2.0 style applications – or even thinking about the way in which data will be exchanged with the client (browser) – you need to think carefully about the choice you're making. There are pros and cons to both JSON and XML, and the choice has implications outside the confines of application development in your organization. The debate on which is "best" or "optimal" is far from over, and it's likely to eclipse – for developers, anyway – the religious-style wars over the choice of browser.

Even mainstream technology coverage is taking an interest in the subject. A recent piece from C|NET on "NoSQL and the future of cloud databases" says "Mapping object data to JSON, a JavaScript data interchange format, is far less complex. The 'schemaless' nature of many of these products is an excellent fit with agile development methodologies." Indeed, schemaless data formats are certainly more flexible, but that flexibility has a price that may need to be paid by the rest of the infrastructure.
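To make the tradeoff concrete, here is a minimal sketch showing the same (hypothetical) customer record serialized both ways in Python. The JSON form maps directly onto native dictionaries, while the XML form requires explicit tree construction and parsing, and it quietly loses type information along the way.

```python
import json
import xml.etree.ElementTree as ET

# One illustrative record; the field names are hypothetical, not from any real API.
record = {"id": 42, "name": "Acme Corp", "tier": "gold"}

# JSON: one call in each direction, mapping straight to native dicts.
json_text = json.dumps(record)
assert json.loads(json_text)["name"] == "Acme Corp"

# XML: the same data requires building and walking an element tree,
# and the numeric id becomes a plain string attribute.
root = ET.Element("customer", id=str(record["id"]))
ET.SubElement(root, "name").text = record["name"]
ET.SubElement(root, "tier").text = record["tier"]
xml_text = ET.tostring(root, encoding="unicode")
assert ET.fromstring(xml_text).find("name").text == "Acme Corp"

print(json_text)  # {"id": 42, "name": "Acme Corp", "tier": "gold"}
print(xml_text)   # <customer id="42"><name>Acme Corp</name><tier>gold</tier></customer>
```

Neither form is "right"; the point is that whichever one your API emits, every downstream consumer, and every piece of infrastructure that inspects the payload, has to speak it too.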
The Database Tier is Not Elastic

It is the database tier and its unique characteristics that ultimately determine where an application will be deployed.

Cloud computing is mostly about "elasticity": the expansion and contraction of resources based on demand. It is the contraction of resources which is oft times forgotten, but without it, cloud computing and highly dynamic, virtualized infrastructures are little more than seamless capacity growth engines. For web and application architectural tiers, the contraction of resources is as much a requirement to realize the benefits of shared, dynamic capacity as the ability to rapidly expand. But in the database tier, the application data layer, contraction is more a contradiction than anything else.

WHAT COMES UP USUALLY COMES DOWN

Elasticity in applications is a good thing. It is important to the overall success rate of cloud computing and dynamic infrastructure initiatives to remember that "what comes up, must come down" – especially in relation to provisioned compute resources. Applications should expand their resource consumption to meet demand, but when demand wanes, so too should their resource consumption rates. By spreading compute resources around the various applications that need them in a dynamic way, based on demand, we achieve peak efficiency and make the most of our capital expenditures. Such architectural approaches allow us to allocate "temporary" compute resources when necessary from cloud computing environments external to the organization, and release them when not necessary.

This is all well and good, except when we're talking about the database. Databases employ a number of techniques by which they can improve their performance, and most of them involve complex caching and pooling strategies that make use of lots and lots of RAM. At the database tier, RAM may increase, but it rarely decreases. It's a different kind of workload than web and application servers, which can easily be scaled out using parallel processing strategies. Many, many copies of the same code can execute in isolated chunks around the data center because they do not need access to a centralized store of information about all the sessions that may be occurring at the same time.

In order to maintain consistency, databases use indexes and locks and other computational techniques to manage access to data, especially in the case of modification. This means that even though the code to perform such tasks can ostensibly be executed on multiple copies of a database, the special data required to ensure consistent operations is contained in a single, contiguous data structure. That data cannot be easily transferred or replicated in real time to other copies. There is a single data overlord that must maintain a holistic view of the data, and it therefore must (today) run on a single machine (virtual or iron). That means all access is through a single gateway, and scaling that gateway is generally only possible through the expansion of resources available to the database application. Scale up is the traditional strategy, and until we learn how to share memory blocks across the network in a way that assures consistency, we can either bow to the belief that eventual consistency is good enough or accept that there will be one ginormous system that continually expands along with data growth.

YOU CAN SCALE OUT READ but NOT WRITE

It is the unique characteristics of data that result in a quirky architecture that allows us to scale out read but only scale up write.
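A minimal sketch of that read/write split, assuming one writable primary plus read replicas; the connection details are hypothetical, and the sqlite driver is just a stand-in for whatever RDBMS client you actually use.

```python
import random
import sqlite3  # stand-in for any DB-API driver; swap in your RDBMS's client library

class ReadWriteRouter:
    """Send all writes to the single primary; spread reads across replicas."""

    def __init__(self, primary_dsn, replica_dsns):
        self.primary = sqlite3.connect(primary_dsn)
        self.replicas = [sqlite3.connect(dsn) for dsn in replica_dsns]

    def execute_write(self, sql, params=()):
        # INSERT/UPDATE/DELETE traffic all funnels through the one writable node.
        cursor = self.primary.execute(sql, params)
        self.primary.commit()
        return cursor

    def execute_read(self, sql, params=()):
        # Reads can go to any replica; replicas may lag the primary (eventual consistency).
        conn = random.choice(self.replicas) if self.replicas else self.primary
        return conn.execute(sql, params).fetchall()

# Toy usage with in-memory databases. Note there is no actual replication in this sketch:
# a real deployment relies on the RDBMS's own replication to keep replicas in sync.
router = ReadWriteRouter(":memory:", [":memory:"])
router.execute_write("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
router.execute_write("INSERT INTO users VALUES (?, ?)", (1, "alice"))
print(router.execute_read("SELECT * FROM users"))  # empty: this replica never saw the write
```

Notice that nothing here makes the write path elastic; every write still funnels through the single primary.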
This separation of reads from writes makes the database tier a lot more complex than perhaps it once was. In the past, a single ginormous server housed a database, and it was the only path to data. Today, however, the need for better performance and support for hyperscaling of applications has led to a functional partitioning scheme that separates reads from writes and assumes that eventual consistency is better than non-availability.

This does not mean it's impossible to put a database into an external cloud computing environment. It just means that it's going to run, 24x7, and scalability cannot necessarily be achieved by scaling out – the traditional means by which a cloud computing environment enables scale. It means that scaling up will require migration, if you haven't adjusted for future growth to begin with, and that there may be, depending on the cloud computing environment you choose, an upper bound to your data growth. If you've only got X amount of disk and memory available, at some point your database will hit that upper bound, and either it will begin to drag down performance or availability or it will simply be unable to continue growing.

Or you'll need to consider the use of distributed database systems, which can scale out by distributing data across multiple database nodes (local or remote) using either replication or duplication. When used over a LAN – low latency, high performance, high bandwidth – the replication and/or duplication required for the master database to manage and maintain its minion databases can be successful. One would assume, then, that the use of distributed database systems in a cloud computing environment would be the appropriate marriage of the two architectural approaches to scalability. However, most enterprise applications existing today – both developed in-house and packaged – do not take advantage of such technology, and there exist no standardized means by which a traditional DBMS can be morphed into a DDBMS. Additionally, the replication/duplication of database systems over a WAN – high latency, lower performance, low bandwidth – is problematic for maintaining consistency. Which often means a closed-system, LAN-connected-only approach to application architecture is the only feasible option. Which puts us right back where we were – with the database tier being upward-bound only, not elastic, and potentially outgrowing the ability of a provider to offer an appropriate level of compute resources to maintain performance and capacity, effectively limiting data growth. Which is not a good thing, because limiting data growth means limiting business growth.

DATA GROWTH is AN INDICATOR of BUSINESS SUCCESS

It is almost universally true that the growth of data is an indicator of business success. As business grows, so does the customer data. As business grows, so does the user-generated content. As business grows, so do the financial and employee records and e-mail. And, of course, the gigabytes of PowerPoint presentations and standard operating procedure documents that grow, morph, and are ultimately discarded – but maintained for posterity and future reference – grow along with the business. Data grows, it doesn't shrink. There is nothing that so accurately lives up to the "pack rat" mentality as a business. And much of it is stored in databases, which live in the data tier and are increasingly web (and mobile client) enabled. So when we talk about elastic applications we're really talking just about the applications, not necessarily the data tier.
Unless you have employed a sharded architectural approach to enabling long-term growth (a minimal sketch of shard routing follows the related links below), you have "THE database" and it's going to grow and grow and grow and never shrink. It isn't elastic; the parts of an application that are elastic are the applications that access THE database. It is this "nut" that needs to be cracked for cloud computing to truly become "the" standard for data center architectures. Until we either see DDBMS become the standard for database systems or figure out how to really share compute resources across the LAN such that RAM from multiple machines appears to be a contiguous, locally accessible chunk of memory, the database tier will be the limiting – and deciding – factor in determining how an application is architected and where it might end up residing.

Related blogs & articles:
Data Center Feng Shui: Normalizing Phased Deployment with Virtualized Network Appliances
The Multi-Generational Datacenter: From Toddlers to Teenagers
Infrastructure Scalability Pattern: Sharding Sessions
Infrastructure Scalability Pattern: Partition by Function or Type
Applying Scalability Patterns to Infrastructure Architecture
Sessions, Sessions Everywhere
I, Cloud
Infrastructure 2.0 + Cloud + IT as a Service = An Architectural Parfait
The Battle of Economy of Scale versus Control and Flexibility
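Returning to the sharded approach mentioned above, here is a minimal sketch of hash-based shard routing; the shard map and the choice of a customer ID as the shard key are purely illustrative.

```python
import hashlib

# Hypothetical shard map: each shard is its own independent, smaller database.
SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2", "db-shard-3"]

def shard_for(customer_id: str) -> str:
    """Deterministically map a customer to one shard using a stable hash."""
    digest = hashlib.md5(customer_id.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Every query for a given customer always lands on the same shard,
# so each shard grows at only a fraction of the total data growth rate.
print(shard_for("customer-1234"))  # e.g. db-shard-2
print(shard_for("customer-5678"))  # e.g. db-shard-0
```

The catch, of course, is that any query spanning customers now has to fan out across every shard, which is its own kind of complexity.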
Cloud Computing: Will data integration be its Achilles Heel?

Wesley: Now, there may be problems once our app is in the cloud.
Inigo: I'll say. How do I find the data? Once I do, how do I integrate it with the other apps? Once I integrate it, how do I replicate it?

If you remember this somewhat altered scene from The Princess Bride, you also remember that no one had any answers for Inigo. That's apropos of this discussion, because no one has any good answers for this version of Inigo either. And no, a holocaust cloak is not going to save the day this time.

If you've been considering deploying applications in a public cloud, you've certainly considered what must be the Big Hairy Question regarding cloud computing: how do I get at my data? There's very little discussion about this topic, primarily because at this point there's no easy answer. Data stored in the cloud is not easily accessible for integration with applications not residing in the cloud, which can definitely be a roadblock to adopting public cloud computing. Stacey Higginbotham at GigaOM had a great post on the topic of getting data into the cloud, and while the conclusion that bandwidth is necessary also applies to getting your data out of the cloud, the details are left in your capable hands.

We had this discussion when SaaS (Software as a Service) first started to pick up steam. If you're using a service like salesforce.com to store business-critical data, how do you integrate that back into other applications that may need it? Web services were the first answer, followed by integration appliances and solutions that included custom-built adapters for salesforce.com to more easily enable access to, and integration with, data stored "out there", in the cloud.

Amazon offers URL-based and web services access to data stored in its SimpleDB offering, but that doesn't help folks who are using Oracle, SQL Server, or MySQL offerings in the cloud. And SimpleDB is appropriately named; it isn't designed to be an enterprise-class service – caveat emptor is in full force if you rely upon it for critical business data. RDBMSs have their own methods of replication and synchronization, but mirroring and real-time replication methods require a lot of bandwidth and very low-latency connections – something not every organization can count on having. Of course you can always deploy custom triggers and services that automatically replicate back into the local data center, but that, too, is problematic, depending on bandwidth availability and the accessibility of applications and databases inside the data center. The reverse scenario is much more likely, with a daemon constantly polling the cloud computing data and pulling updates back into the data center. You can also just leave that data out there in the cloud, implement (or take advantage of, if they exist) service-based access to the data, and integrate it with business processes and applications inside the data center. You're relying on the availability of the cloud, the Internet, and all the infrastructure in between, but like the solution for integrating with salesforce.com and other SaaS offerings, this is likely the best of a set of "will have to do" options.

The issue of data and its integration has not yet raised its ugly head, mostly because very few folks are moving critical business applications into the cloud and, admittedly, cloud computing is still in its infancy.
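A minimal sketch of that polling approach: a daemon that periodically pulls changed records from a cloud-hosted API back into a local store. The endpoint, field names, and polling interval are all hypothetical; the real work is in picking a watermark the cloud service can actually filter on.

```python
import json
import sqlite3
import time
import urllib.request

CLOUD_API = "https://cloud.example.com/api/orders?since={since}"  # hypothetical endpoint
POLL_INTERVAL = 300  # seconds; sized to the bandwidth you have, not real-time replication

def poll_once(local_db: sqlite3.Connection, since: str) -> str:
    """Pull records changed since the last sync and upsert them into the local mirror."""
    with urllib.request.urlopen(CLOUD_API.format(since=since)) as resp:
        records = json.loads(resp.read())
    for rec in records:
        local_db.execute(
            "INSERT OR REPLACE INTO orders (id, payload, updated_at) VALUES (?, ?, ?)",
            (rec["id"], json.dumps(rec), rec["updated_at"]),
        )
    local_db.commit()
    # Track the high-water mark so the next poll only asks for newer changes.
    return max((r["updated_at"] for r in records), default=since)

if __name__ == "__main__":
    db = sqlite3.connect("local_mirror.db")
    db.execute("CREATE TABLE IF NOT EXISTS orders (id TEXT PRIMARY KEY, payload TEXT, updated_at TEXT)")
    watermark = "1970-01-01T00:00:00Z"
    while True:
        watermark = poll_once(db, watermark)
        time.sleep(POLL_INTERVAL)
```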
But even non-critical applications are going to use or create data, and that data will, invariably, become important or need to be accessed by folks in the organization, which means access to that data will – probably sooner rather than later – become a monkey on the back of IT. The availability of, and ease of access to, data stored in the public cloud for integration, data mining, business intelligence, and reporting – all common enterprise uses of data – will certainly affect adoption of cloud computing in general. The benefits of saving dollars on infrastructure (management, acquisition, maintenance) aren't nearly as compelling a reason to use the cloud when those savings would quickly be eaten up by the extra effort necessary to access and integrate data stored in the cloud.

Related articles by Zemanta:
SQL-as-a-Service with CloudSQL bridges cloud and premises
Amazon SimpleDB ready for public use
Blurring the functional line - Zoho CloudSQL merges on-site and on-cloud
As a Service: The many faces of the cloud
A comparison of major cloud-computing providers (Amazon, Mosso, GoGrid)
Public Data Goes on Amazon's Cloud