The BIG-IP Application Security Manager Part 10: Event Logging
This is the last article in a 10-part series on the BIG-IP Application Security Manager (ASM). The first nine articles in this series are:

- What is the BIG-IP ASM?
- Policy Building
- The Importance of File Types, Parameters, and URLs
- Attack Signatures
- XML Security
- IP Address Intelligence and Whitelisting
- Geolocation
- Data Guard
- Username and Session Awareness Tracking

In this, the final article in the BIG-IP ASM series, we will dive into the excitement and necessity of event logging. Throughout this ASM series, we've looked at log files from a distance, but we never really talked about how to configure logging. I know...event logging might not be the most fascinating part of the ASM, but it's really important stuff! Before joining F5, I worked as a cyber threat analyst for a government organization, and I saw lots of cyber attacks against various systems. After an attack took place, my team and I would come in and study the attack vector, target points, etc., and it seemingly never failed that the system logs showed at least some (but many times all) of the malicious activity. If someone had just been reviewing the logs...

Logging Profiles

Logging profiles specify how and where the ASM stores request data for an application. In versions prior to 11.3.0, a logging profile is associated with a security policy; beginning in 11.3.0, the logging profile is associated with a virtual server. I'm using version 11.3.0 in these examples, so this article will associate a logging profile with a virtual server. When choosing a logging profile, you have the option of creating your own or using one of the system-supplied profiles. In addition, you can log data locally, remotely, or both using the same logging profile. Keep in mind that the system-supplied profiles are configured to only log data locally. The logging profile specifies two things: where the log data is stored (locally, remotely, or both) and what data gets stored (all requests, illegal requests only, etc.).
Creating a Profile

To create a new logging profile, navigate to Security >> Event Logs >> Logging Profiles and click the "Create" button. You will see the following screen: I named this one "Test_Log_Profile" and enabled logging for Application Security. Notice that you can enable logging for Application Security, Protocol Security, and/or Denial of Service Protection. I enabled local storage and filtered for "Illegal Requests Only". Now that I have my logging profile created, I can associate it with the virtual server.

Configuring the Virtual Server

Navigate to Local Traffic >> Virtual Servers >> Virtual Server List and click on the virtual server with which you want to associate the logging profile. Notice the tabs across the top part of the page...click on Security >> Policies and you will see the following screen: Now you can move the logging profile from "Available" to "Selected" in order to enable the profile for the virtual server. Also, notice that "Application Security Policy" is enabled and the name of the security policy is listed in the drop-down menu. If you enable more than one profile, the ASM will apply the settings of the top profile first and then work down the list.

Viewing Log Files

Log data is stored in /var/log/asm on the BIG-IP. You can view the details of the log data using the command line or the GUI.

Command Line

To view the log data via the command line, use a command like "cat" or "tail". You can also use other standard commands like "grep" to filter results or "more" to view one page at a time.

GUI

To view the Application Security logs in the GUI, navigate to Security >> Event Logs >> Application >> Requests and you will see the following screen: You can click on any of the application requests, and the details will load in the bottom portion of the screen. You can view the Request Details, the actual HTTP Request, or the actual HTTP Response (if response logging is enabled in your logging profile).
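To make the command-line options above concrete, here is a small self-contained sketch. The sample log entries are invented for illustration (real ASM log formats vary by version); on a BIG-IP you would point the same commands at /var/log/asm itself.

```shell
# Create a small stand-in log file so the commands can be tried anywhere;
# the entries below are made up, not real ASM output.
printf 'ASM: request to /login blocked: Illegal request\nASM: request to /home passed\n' > asm.log

# View the whole log:
cat asm.log

# Follow the log as new entries arrive (Ctrl-C to stop):
# tail -f asm.log

# Filter for illegal requests only:
grep "Illegal" asm.log
```

On a busy system, piping the output through "more" gives you one page at a time.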
Many times response logging is not enabled due to the large amount of data it would consume.

Remote Storage

The ASM provides the option of storing log data on a remote server. When configuring a logging profile, you can view the Advanced Configuration to enable remote storage and select one of three types. The first is "Remote": the ASM stores all traffic on a remote logging server such as syslog. The second is "Reporting Server": the ASM stores all log data on a server using a preconfigured storage format. The third is "ArcSight": the ASM stores all log data on a remote server using predefined ArcSight settings (the log messages are in the Common Event Format).

Speaking of remote storage...a popular remote log management tool is Splunk. In fact, Splunk offers a specific F5 app that does a fantastic job of organizing and displaying log data in a way that is easy to understand and consume. If you need more information on the Splunk app for F5 log data, check out the article written by the one and only Jason Rahm...you'll be glad you did!

Well, that wraps things up for this article. It's been a fun ride through the internal workings of the BIG-IP ASM. I hope you have enjoyed this series as much as I have. Stay tuned for my next set of articles on the awesomeness that is DNS...see you soon!!

Update: Now that the article series is complete, I wanted to share the links to each article. If I add any more in the future, I'll update this list.

- What is the BIG-IP ASM?
- Policy Building
- The Importance of File Types, Parameters, and URLs
- Attack Signatures
- XML Security
- IP Address Intelligence and Whitelisting
- Geolocation
- Data Guard
- Username and Session Awareness Tracking
- Event Logging

Privacy for a Price
A few weeks ago, I went to my usual haircut place, and after the trim, at the register, I presented my loyalty card. You know, the heavy paper ones that either get stamped or hole-punched for each purchase; after a certain number of paid visits, you receive a free haircut. I presented the card, still in the early stages of completion, for validation, and the manager said I could convert the partially filled card to their new system. I just had to enter my email address (and some other info) in the little kiosk thingy. I declined, saying, 'Ah, no thanks, enough people have my email already, and I don't need yet another daily digest.' He continued, 'Well, we are doing away with the cards and moving all electronic, so...' 'That's ok,' I replied, 'I'll pay for that extra/free haircut to keep my name off a mailing list.'

This event, of course, got me thinking about human nature and how we will often give up some privacy for either convenience or something free. Imagine a stranger walking up to you and asking for your name, address, email, birthday, income level, favorite color and shopping habits. Most of us would tell them to 'fill in the blank'-off. Yet, when a Brand asks for the same info but includes something in return - free birthday dinner, discounted tickets, coupons, personalized service - we typically spill the beans.

Infosys recently conducted a survey which showed that consumers worldwide will certainly share personal information to get better service from their doctors, bank and retailers; yet, they are very sensitive about how they share. Today's digital consumers are complicated and sometimes suspicious about how institutions use their data, according to the global study of 5,000 digitally savvy consumers. They also created an infographic based on their findings.
Overall they found:

- 82 percent want data mining for fraud protection, and will even switch banks for more security
- 78 percent are more likely to buy from retailers with targeted ads, while only 16 percent will share their social profile
- 56 percent will share personal and family medical history with doctors

...and specific to retail:

- To know me is to sell to me: Three quarters of consumers worldwide believe retailers currently miss the mark in targeting them with ads on mobile apps, and 72 percent do not feel that online promotions or emails they receive resonate with their personal interests and needs
- To really know me is to sell me even more: A wide majority of consumers (78 percent) agree that they would be more likely to purchase from a retailer again if they provided offers targeted to their interests, wants or needs, and 71 percent feel similarly if offered incentives based on location
- Catch-22 for retailers? While in principle shoppers say they want to receive ads or promotions targeted to their interests, just 16 percent will share social media profile information. Lacking these details could make it difficult for retailers to deliver tailored digital offers

Your data is valuable and comes with a price. While many data miners are looking to capitalize on our unique info, you can always decline. Yes, it is still probably already gathered up somewhere else; yes, you will probably miss out on some free or discounted something; yes, you will probably see annoying pop-up ads on that free mobile app/game; and yes, you might feel out of the loop. But it was still fun to be in some control over my own info leaks. ps

Related:

- Path pledges to be ad-free: Will consumers pay for their privacy?
- What Would You Pay for Privacy?
- Paying for privacy: Why it's time for us to become customers again
- Consumers Worldwide Will Allow Access To Personal Data For Clear Benefits, Says Infosys Study
- Engaging with digital consumers: Insights from Infosys survey [Infographic]
- Parking Ticket Privacy
- Invasion of Privacy - Mobile App Infographic Style
- 'Radio Killed the Privacy Star' Music Video?

Is it possible to set a threshold limit with Data Guard in the ASM?
Hi All, I have been playing around with Data Guard and I am able to block requests that match the regular expressions, or allow them and mask them, etc. My question is: is it possible to only block once a threshold limit has been reached? For example, I want to allow one credit card number to go through; however, if 10 credit card numbers are identified in a single request, can I block that request? As always, thank you in advance.

The Breach of Things
Yet another retailer has confessed that their systems were breached, and an untold number of victims join the growing list of those who have had their data stolen. This one could be bigger than the infamous Target breach. I wonder if some day we'll be referring to periods of time by the breach that occurred. 'What? You don't remember the Target breach of '13! Much smaller than the Insert Company Here Breach of 2019!' Or almost like battles of a long war. 'The Breach of 2013 was a turning point in the fight against online crime,' or some other silly notion.

On top of that, a number of celebrities' private photos, stored in the cloud (of course), were stolen. I'm sorry, but if you are going to take private pictures of yourself with something other than a classic Polaroid, someone else will eventually see them. Almost everything seems breach'able these days. Last year, the first toilet was breached. The one place you'd think you would have some privacy has also been soiled. Add to that televisions, thermostats, refrigerators and automobiles. And a person's info with a dangerous hug. Companies are sprouting up all over to offer connected homes where owners can control their water, temperature, doors, windows, lights and practically any other item, as long as it has a sensor. Won't be long until we see sensational headlines including 'West Coast Fridges Hacked...Food Spoiling All Over!' or 'All Eastern Televisions Hacked to Broadcast Old Gilligan's Island Episodes!' As more things get connected, the risks of a breach obviously increase.

The more I thought about it, the more I felt it was time to resurrect this dandy from 2012, Radio Killed the Privacy Star, for those who may have missed it the first time. Armed with a mic and a midi, I belt out, karaoke style, my music video 'Radio Killed the Privacy Star.' Lyrics can be found at Radio Killed the Privacy Star. Enjoy. ps

Related:

- The Internet of Sports
- Is IoT Hype For Real?
- Internet of Things OWASP Top 10
- Uncle DDoS'd, Talking TVs and a Hug
- Welcome to the Phygital World
- The DNS of Things

Cloud Computing: Will data integration be its Achilles Heel?
Wesley: Now, there may be problems once our app is in the cloud.
Inigo: I'll say. How do I find the data? Once I do, how do I integrate it with the other apps? Once I integrate it, how do I replicate it?

If you remember this somewhat altered scene from The Princess Bride, you also remember that no one had any answers for Inigo. That's apropos of this discussion, because no one has any good answers for this version of Inigo either. And no, a holocaust cloak is not going to save the day this time.

If you've been considering deploying applications in a public cloud, you've certainly considered what must be the Big Hairy Question regarding cloud computing: how do I get at my data? There's very little discussion about this topic, primarily because at this point there's no easy answer. Data stored in the cloud is not easily accessible for integration with applications not residing in the cloud, which can definitely be a roadblock to adopting public cloud computing. Stacey Higginbotham at GigaOM had a great post on the topic of getting data into the cloud, and while the conclusion that bandwidth is necessary is also applicable to getting your data out of the cloud, the details are left in your capable hands.

We had this discussion when SaaS (Software as a Service) first started to pick up steam. If you're using a service like salesforce.com to store business-critical data, how do you integrate that back into other applications that may need it? Web services were the first answer, followed by integration appliances and solutions that included custom-built adapters for salesforce.com to more easily enable access and integration to data stored "out there", in the cloud. Amazon offers URL-based and web services access to data stored in its SimpleDB offering, but that doesn't help folks who are using Oracle, SQL Server, or MySQL offerings in the cloud.
And SimpleDB is appropriately named; it isn't designed to be an enterprise-class service - caveat emptor is in full force if you rely upon it for critical business data. RDBMSs have their own methods of replication and synchronization, but mirroring and real-time replication methods require a lot of bandwidth and very low latency connections - something not every organization can count on having. Of course you can always deploy custom triggers and services that automatically replicate back into the local data center, but that, too, is problematic depending on bandwidth availability and accessibility of applications and databases inside the data center. The reverse scenario is much more likely, with a daemon constantly polling the cloud computing data and pulling updates back into the data center.

You can also just leave that data out there in the cloud and implement, or take advantage of if they exist, service-based access to the data, integrating it with business processes and applications inside the data center. You're relying on the availability of the cloud, the Internet, and all the infrastructure in between, but like the solution for integrating with salesforce.com and other SaaS offerings, this is likely the best of a set of "will have to do" options.

The issue of data and its integration has not yet raised its ugly head, mostly because very few folks are moving critical business applications into the cloud and, admittedly, cloud computing is still in its infancy. But even non-critical applications are going to use or create data, and that data will, invariably, become important or need to be accessed by folks in the organization, which means access to that data will - probably sooner rather than later - become a monkey on the backs of IT.
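The "daemon polling the cloud and pulling updates back" pattern can be sketched in a few lines of shell. This is a toy illustration only: the "cloud" side is simulated by a local file of timestamped records (all names and data are invented), where a real implementation would make an authenticated API call over HTTPS.

```shell
# Simulated cloud data: each line is "<timestamp> <record>".
printf '100 alpha\n200 beta\n300 gamma\n' > cloud_records.txt

# Pull only records newer than the last sync point, then advance it.
last=$(cat last_sync 2>/dev/null || echo 0)
awk -v last="$last" '$1 > last' cloud_records.txt >> local_records.txt
tail -n 1 cloud_records.txt | awk '{print $1}' > last_sync
```

Run periodically (from cron, say), each pass copies down only the delta; a second run appends nothing new because last_sync has already advanced past the newest record.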
The availability of and ease of access to data stored in the public cloud for integration, data mining, business intelligence, and reporting - all common enterprise uses of data - will certainly affect adoption of cloud computing in general. The benefits of saving dollars on infrastructure (management, acquisition, maintenance) aren't nearly as compelling a reason to use the cloud when those savings would quickly be eaten up by the extra effort necessary to access and integrate data stored in the cloud.

Related articles by Zemanta:

- SQL-as-a-Service with CloudSQL bridges cloud and premises
- Amazon SimpleDB ready for public use
- Blurring the functional line - Zoho CloudSQL merges on-site and on-cloud
- As a Service: The many faces of the cloud
- A comparison of major cloud-computing providers (Amazon, Mosso, GoGrid)
- Public Data Goes on Amazon's Cloud

Ask the Expert – Are WAFs Dead?
Brian McHenry, Sr. Security Solution Architect, addresses the notion that Web Application Firewalls are dead and talks about what organizations need to focus on today when protecting their data and applications across a diverse environment. He discusses many of the current application threats, how to protect your data across hybrid environments, and the importance of a security policy that is portable across many environments. Move on from the idea of a traditional WAF and embrace hybrid WAF architectures. ps

Related:

- The Death of WAF as We Know It
- F5 Security Solutions

You Got a Minute?
Like most of us, I try to read the entire internet on a daily basis, but for some reason these slipped through. They both came out in 2011, and I am sure the numbers have changed in many cases. For instance, the graphic shows 70+ domains registered every minute, while for September 3 (thus far today) an average of 78 per minute have been registered. For Twitter, the chart indicates 320 new accounts per minute, but my lookup today, if my math is correct, shows 94 new Twitter accounts every minute, with 546,000 (vs. 98,000+) tweets per minute. Regardless, the somewhat dated info is still mind boggling, and it is always fun to see historical data. Things that happen on the Internet every 60 seconds, circa 2011. And the products we use: ps

Related:

- 60 Seconds – Things That Happen On Internet Every Sixty Seconds [Infographic]

JSON versus XML: Your Choice Matters More Than You Think
Should the enterprise standardize on JSON or XML as their lingua franca for Web 2.0 integration? Or should they use both, as best fits the application? The decision impacts more than just integration - it resounds across the entire infrastructure and impacts everything from security to performance to availability of those applications.

One of the things a developer may or may not have control over when building enterprise applications is the format of the data used to communicate (integrate) with other applications. Increasingly, services external to the enterprise are very Web 2.0 in that they provide HTTP-based APIs for integration that exchange data in one of a couple of standard formats: XML and JSON. While RSS and ATOM are also seen in APIs as options, these are generally used only when the data being presented is frequently updated and of a "listing" style nature. XML and JSON are used to deliver more complex structures that do not fit well into the paradigm described by RSS and ATOM formatted information. Increasingly, libraries or toolkits are used to build interactive Web 2.0 style applications - XAJAX, SAJAX, Dojo, Prototype, script.aculo.us - and these, too, generally default to XML or JSON, though other formats are often supported as well.

So as you're building out that Web 2.0 style application and thinking about the API you're going to offer to make it easier for partners/customers/other departments to handle integration with their Web 2.0 style applications - or even thinking about the way in which data will be exchanged with the client (browser) - you need to think carefully about the choice you're making. There are pros and cons to both JSON and XML, and the choice has implications outside the confines of application development in your organization. The debate on which is "best" or "optimal" is far from over, and it's likely to eclipse - for developers, anyway - the religious-style wars over the choice of browser.
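To make the trade-off concrete, here is the same small record expressed in both formats (the record and its field names are invented for illustration). The JSON form is terser and maps directly onto objects and arrays; the XML form is more verbose but can carry attributes, namespaces, and a schema.

```shell
# One record, two encodings (contents are illustrative only).
cat > record.json <<'EOF'
{"id": 42, "name": "widget", "tags": ["api", "demo"]}
EOF

cat > record.xml <<'EOF'
<record id="42"><name>widget</name><tags><tag>api</tag><tag>demo</tag></tags></record>
EOF

# Compare the byte counts of the two representations:
wc -c record.json record.xml
```

Even on this toy record the XML version is noticeably larger; for deeply nested structures the gap grows, which is part of why the choice echoes through bandwidth, parsing cost, and the security devices that must inspect the payload.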
Even mainstream technology coverage is taking an interest in the subject. A recent piece from C|NET on "NoSQL and the future of cloud databases" says: "Mapping object data to JSON, a JavaScript data interchange format, is far less complex. The 'schemaless' nature of many of these products is an excellent fit with agile development methodologies." Indeed, schemaless data formats are certainly more flexible, but that flexibility has a price that may need to be paid by the rest of the infrastructure.

Cloudware and information privacy: TANSTAAFL
Ars Technica is reporting that a recent Pew study on cloud computing and privacy, specifically concerning remote data storage and the kind of data-mining performed on it by providers like Google, indicates that while consumers are concerned about the privacy of their data in the cloud, they still subject themselves to what many consider to be an invasion of privacy and misuse of data.

68 percent of respondents who said they'd used cloud services declared that they would be "very" concerned, and another 19 percent at least "somewhat" concerned, if their personal data were analyzed to provide targeted advertising. This, of course, is precisely what many Web mail services, such as Google's own Gmail, do—which implies that at least some of those who profess to be "very" concerned about the practice are probably nevertheless subjecting themselves to it.

One wonders why those who profess to be very concerned about privacy and data-mining tactics used by cloudware providers would continue to use those services. One answer might lie in the confusing legalese of the EULA (end user license agreement) presented by corporations. It's necessary, of course, that the EULA be written using the language of the courts under which it will be enforced. But there are two problems with EULAs: first, they aren't really required to be read, and second, even if they were, they can't be easily understood by the vast majority of consumers. I'll be the first to admit I rarely read EULAs.
They're long, filled with legalese-speak, and they always come down to the same basic set of rules: it's our software, we don't make any guarantees, and oh, yeah, any rights not specifically listed (like the use of the data you use with our "stuff") are reserved for us. It's that last line that's the killer, by the way, because just about everything falls under that particular clause in the EULA. Caveat emptor truly applies in the world of cloudware and online services. Buyer beware! You may be agreeing to all sorts of things you didn't intend.

The argument against such privacy and security assurances for consumers is that they aren't paying for the service, so the provider needs some way to generate revenue to continue providing it. That revenue is often generated by advertising and partnerships, but it's also largely provided by selling off personal information either directly gleaned from users or mined from their data. Which is what Google does with Gmail.

Enterprises, at least, are not only aware of but thoroughly understand the ramifications of storing their data "in the cloud". SaaS (Software as a Service) providers have had to offer proof positive that the data stored in their systems is the property of the customer, that the data is not being used for data-mining or sharing purposes, and that security is in place to protect it from theft/viewing/etc...

But in between the consumer and the enterprise markets lies the SMB, the small-medium business. Not quite financially able to afford a full data center and IT staff of their own, they often take advantage of cloudware services as a stop-gap measure. But in doing so, they put their business and data at risk, because they aren't necessarily using cloudware designed with businesses in mind, at least not from a data security perspective, and that means they are often falling under the more liberal end-user license agreement. All bets are off on the sanctity of their data. TANSTAAFL.
There ain't no such thing as a free lunch, people, and that has never rung as true as it does in the world of cloudware and online services. If it's heralded as "free", that only means you aren't paying money for it; you are bartering for the service, exchanging your personal information and data for the privilege of using that online service. In many cases folks weigh the value they receive from the "free" service against divulging personal information and data, and make an informed choice to exchange that information for the service. When that's the case - the consumer or business is making an informed choice - it's all good. Everybody wins. Bartering is, after all, the oldest form of exchanging goods or services, and it's still used today. My grandmother paid her doctor three chickens for delivering my father, and that was in the mid-1900s, not that long ago at all. So exchanging personal information and access to your data for services is completely acceptable; just make sure you understand that's what you're doing - especially if you're a business.

Now Available: the C2200, the Smallest Chassis in the Viprion Series, the Industry's Only Chassis-Based ADC
F5 Networks Japan has announced the C2200, a compact two-slot chassis that is the newest model in the Viprion series and a new product that extends the benefits of the F5 Synthesis architecture model. Joining the existing mid-range C2400, the higher-end C4480, and the flagship C4800, the C2200 delivers the same functionality as its larger siblings in a smaller, space-saving package at a more affordable price.

Key points:

- At 2RU (rack units), it is the smallest chassis in the Viprion series
- Supports the latest mid-range blades, the B2150 and B2250
- Holds up to two blades, allowing up to 40 vCMP virtual instances
- Requires TMOS version 11.5.0 or later

For details, see the Viprion product page, which includes data sheets with full specifications and a platform comparison table.

The Viprion C2200 adds scalable processing power while preserving the ability to upgrade the system as the user's needs grow, delivering both the performance and the scale that business-critical application services require. Using F5's virtual Clustered Multiprocessing (vCMP®) technology, it efficiently consolidates application services and under-utilized application delivery controllers (ADCs) to provide a high-density multi-tenant solution.

Even with earlier Viprion models, which hold up to four or eight blades, there have been many deployments running on only one or two blades where no major infrastructure growth was expected. The C2200 brings the same scalability and virtualization story to customers with smaller capacity plans, letting them start with a smaller footprint and a smaller initial investment. Please consider the new Viprion C2200!

The product is shipping now. For more information, contact F5 Networks Japan (https://interact.f5.com/JP-Contact.html) or your distributor.