Get Social with DevCentral
That title sounds so 2009 but let's go with it anyway. #Flashback…no, #Throwback…no, how about #TinkerTuesday? Is there such a thing? (There is.) #DevCentral will be ramping up our social activities in 2018 and we wanted to share some of the media channels you can join to stay connected and engaged with the community.

Did you know that the Twitter bird has a name? It's Larry. And while DevCentral's blue ball logo doesn't have a name, you can find your @devcentral team members @psilvas, @jasonrahm, and @JohnWagnon on Twitter sharing their technology insights along with some personal daily happenings and thoughts. Stay connected for new articles, iRules, videos, the Agility Conference, and earn additional DevCentral points for answering the question of the day!

Don't feel like reading and prefer to watch stuff? Then head on over to our YouTube channel for hours of instructional videos from our 'Make it Work' series, cool tech tips, and the awesome Lightboard Lessons. Lightboard Lessons are one of our most popular pieces of content, and by subscribing to our channel you'll get the first email alert when a new video is published. You'll probably even get to watch the video before it posts to DevCentral. That's right, early access.

Prefer to hang out with the LinkedIn crowd? While the F5 Certified! Professionals LinkedIn group is very active, the F5 DevCentral LinkedIn Group has been a little dormant recently, so we're looking to gear that up again also. With a little over 1,000 members, it's a great way to converse with other members as we march toward the 12,000+ participants in Ken's group.

When DevCentral started back in 2003, it was one of the original 'social' community sites, launched when social media was still in its infancy. Members range from beginning to advanced devs, industry thought leaders, and F5 MVPs.
I'm also aware that there are BIG-IP discussions on Stack Overflow, repos on GitHub, the F5 Facebook page, MVP Kevin Davies' Telegram F5 Announce channel, and others. Where else should we engage with you and where should we be more active? Hit us up with the hash, #whereisdevcentral, and we'll meet you there.

ps

Introducing PoshTweet - The PowerShell Twitter Script Library
It's probably no surprise to those of you who follow my blog and tech tips here on DevCentral that I'm a fan of Windows PowerShell. I've written a set of Cmdlets that allow you to manage and control your BIG-IP application delivery controllers from within PowerShell, along with a whole set of articles around those Cmdlets. I've been a Twitter user for a few years now, and over the holidays I noticed that Jeffrey Snover from the PowerShell team has hopped aboard the Twitter bandwagon. That got me to thinking: since I live so much of my time in the PowerShell command prompt, wouldn't it be great to be able to tweet from there too? Of course it would!

HTTP Requests

So, last night I went ahead and whipped up a first draft of a set of PowerShell functions that allow access to the Twitter services. I implemented the functions based on Twitter's REST-based methods, so all that was really needed to get things going was to implement the HTTP GET and POST requests needed for the different API methods. Here's what I came up with.
function Execute-HTTPGetCommand() {
    param([string] $url = $null);

    if ( $url ) {
        [System.Net.WebClient]$webClient = New-Object System.Net.WebClient;
        $webClient.Credentials = Get-TwitterCredentials;
        [System.IO.Stream]$stream = $webClient.OpenRead($url);
        [System.IO.StreamReader]$sr = New-Object System.IO.StreamReader -argumentList $stream;
        [string]$results = $sr.ReadToEnd();
        $results;
    }
}

function Execute-HTTPPostCommand() {
    param([string] $url = $null, [string] $data = $null);

    if ( $url -and $data ) {
        [System.Net.WebRequest]$webRequest = [System.Net.WebRequest]::Create($url);
        $webRequest.Credentials = Get-TwitterCredentials;
        $webRequest.PreAuthenticate = $true;
        $webRequest.ContentType = "application/x-www-form-urlencoded";
        $webRequest.Method = "POST";
        $webRequest.Headers.Add("X-Twitter-Client", "PoshTweet");
        $webRequest.Headers.Add("X-Twitter-Version", "1.0");
        $webRequest.Headers.Add("X-Twitter-URL", "http://devcentral.f5.com/s/poshtweet");

        [byte[]]$bytes = [System.Text.Encoding]::UTF8.GetBytes($data);
        $webRequest.ContentLength = $bytes.Length;
        [System.IO.Stream]$reqStream = $webRequest.GetRequestStream();
        $reqStream.Write($bytes, 0, $bytes.Length);
        $reqStream.Flush();

        [System.Net.WebResponse]$resp = $webRequest.GetResponse();
        $rs = $resp.GetResponseStream();
        [System.IO.StreamReader]$sr = New-Object System.IO.StreamReader -argumentList $rs;
        [string]$results = $sr.ReadToEnd();
        $results;
    }
}

Credentials

Once those were completed, it was relatively simple to get the Status methods for public_timeline, friends_timeline, user_timeline, show, update, replies, and destroy going. But for several of those services, user credentials were required. I opted to store them in a script-scoped variable and provided a few functions to get/set the username/password for Twitter.
$script:g_creds = $null;

function Set-TwitterCredentials() {
    param([string]$user = $null, [string]$pass = $null);

    if ( $user -and $pass ) {
        $script:g_creds = New-Object System.Net.NetworkCredential -argumentList ($user, $pass);
    } else {
        $creds = Get-TwitterCredentials;
    }
}

function Get-TwitterCredentials() {
    if ( $null -eq $script:g_creds ) {
        trap {
            Write-Error "ERROR: You must enter your Twitter credentials for PoshTweet to work!";
            continue;
        }
        $c = Get-Credential;
        if ( $c ) {
            $user = $c.GetNetworkCredential().Username;
            $pass = $c.GetNetworkCredential().Password;
            $script:g_creds = New-Object System.Net.NetworkCredential -argumentList ($user, $pass);
        }
    }
    $script:g_creds;
}

The Status functions

Now that the credentials were out of the way, it was time to tackle the Status methods. These methods are a combination of HTTP GETs and POSTs that return an array of status entries. For those interested in the raw underlying XML that's returned, I've included a $raw parameter that, when set to $true, will skip the user-friendly display and dump the full XML response. This is handy if you want to customize the output beyond what I've done.
#----------------------------------------------------------------------------
# public_timeline
#----------------------------------------------------------------------------
function Get-TwitterPublicTimeline() {
    param([bool]$raw = $false);
    $results = Execute-HTTPGetCommand "http://twitter.com/statuses/public_timeline.xml";
    Process-TwitterStatus $results $raw;
}

#----------------------------------------------------------------------------
# friends_timeline
#----------------------------------------------------------------------------
function Get-TwitterFriendsTimeline() {
    param([bool]$raw = $false);
    $results = Execute-HTTPGetCommand "http://twitter.com/statuses/friends_timeline.xml";
    Process-TwitterStatus $results $raw;
}

#----------------------------------------------------------------------------
# user_timeline
#----------------------------------------------------------------------------
function Get-TwitterUserTimeline() {
    param([string]$username = $null, [bool]$raw = $false);
    if ( $username ) {
        $username = "/$username";
    }
    $results = Execute-HTTPGetCommand "http://twitter.com/statuses/user_timeline$username.xml";
    Process-TwitterStatus $results $raw;
}

#----------------------------------------------------------------------------
# show
#----------------------------------------------------------------------------
function Get-TwitterStatus() {
    param([string]$id, [bool]$raw = $false);
    if ( $id ) {
        # Build the URL inside one string; "url" + $id + ".xml" as a bare
        # argument would be passed as three separate parameters.
        $results = Execute-HTTPGetCommand "http://twitter.com/statuses/show/$id.xml";
        Process-TwitterStatus $results $raw;
    }
}

#----------------------------------------------------------------------------
# update
#----------------------------------------------------------------------------
function Set-TwitterStatus() {
    param([string]$status, [bool]$raw = $false);
    $encstatus = [System.Web.HttpUtility]::UrlEncode("$status");
    $results = Execute-HTTPPostCommand "http://twitter.com/statuses/update.xml" "status=$encstatus";
    Process-TwitterStatus $results $raw;
}
#----------------------------------------------------------------------------
# replies
#----------------------------------------------------------------------------
function Get-TwitterReplies() {
    param([bool]$raw = $false);
    $results = Execute-HTTPGetCommand "http://twitter.com/statuses/replies.xml";
    Process-TwitterStatus $results $raw;
}

#----------------------------------------------------------------------------
# destroy
#----------------------------------------------------------------------------
function Destroy-TwitterStatus() {
    param([string]$id = $null);
    if ( $id ) {
        # Pass the URL and POST body as two separate arguments; a comma
        # between them would bundle both into a single array argument.
        Execute-HTTPPostCommand "http://twitter.com/statuses/destroy/$id.xml" "id=$id";
    }
}

You may notice the Process-TwitterStatus function. Since there was a lot of duplicate code in each of these functions, I went ahead and implemented it in its own function below:

function Process-TwitterStatus() {
    param([string]$sxml = $null, [bool]$raw = $false);

    if ( $sxml ) {
        if ( $raw ) {
            $sxml;
        } else {
            [xml]$xml = $sxml;
            if ( $xml.statuses.status ) {
                $stats = $xml.statuses.status;
            } elseif ( $xml.status ) {
                $stats = $xml.status;
            }
            $stats | Foreach-Object -process {
                $info = "by " + $_.user.screen_name + ", " + $_.created_at;
                if ( $_.source ) {
                    $info = $info + " via " + $_.source;
                }
                if ( $_.in_reply_to_screen_name ) {
                    $info = $info + " in reply to " + $_.in_reply_to_screen_name;
                }
                "-------------------------";
                $_.text;
                $info;
            };
            "-------------------------";
        }
    }
}

A few hurdles

Nothing goes without a hitch, and I found myself pounding my head over why my POST commands were all getting HTTP 417 errors back from Twitter. A quick search brought up this post on Phil Haack's website, as well as this Google Group discussing a change in how Twitter's services react to the Expect: 100-continue HTTP header. A simple setting on the ServicePointManager at the top of the script was all that was needed to get things working again.
[System.Net.ServicePointManager]::Expect100Continue = $false;

PoshTweet in Action

So, now it's time to try it out. First you'll need to dot-source the script and then set your Twitter credentials. This can be done in your PowerShell $profile file if you wish. Then you can access all of the included functions. Below, I'll call Set-TwitterStatus to update my current status, and then Get-TwitterUserTimeline and Get-TwitterFriendsTimeline to get my current timeline as well as that of my friends.

PS> . .\PoshTweet.ps1
PS> Set-TwitterCredentials
PS> Set-TwitterStatus "Hacking away with PoshTweet"
PS> Get-TwitterUserTimeline
-------------------------
Hacking away with PoshTweet
by joepruitt, Tue Dec 30 12:33:04 +0000 2008 via web
-------------------------
PS> Get-TwitterFriendsTimeline
-------------------------
@astrout Yay, thanks!
by mediaphyter, Tue Dec 30 20:37:15 +0000 2008 via web in reply to astrout
-------------------------
RT @robconery: Headed to a Portland Nerd Dinner tonite - should be fun! http://bit.ly/EUFC
by shanselman, Tue Dec 30 20:37:07 +0000 2008 via TweetDeck
-------------------------
...

Things Left To Do

As I said, this was implemented in an hour or so last night, so it definitely needs some more work, but I believe I've got the Status methods pretty much covered. Next I'll move on to the other services of User, Direct Message, Friendship, Account, Favorite, Notification, Block, and Help when I've got time. I'd also like to add support for the "source" field. I'll need to set up a public-facing landing page for this library so the folks at Twitter will add it to their system. Once I get all the services implemented, I'll move forward in formalizing this as an application and submit it for consideration.

Collaboration

I've posted the source to this set of functions on the DevCentral wiki under PsTwitterApi. You'll need to create an account to get to it, but I promise it will be worth it! Feel free to contribute and add to it if you have the time.
Everyone is welcome and encouraged to tear my code apart, optimize it, and enhance it. Just as long as it gets better in the process. B-)

The Applications of Our Lives
The Internet of Things will soon become The Internet of Nouns.

There are a few 'The ______ of Our Lives' out there: Days. Time. Moments. Love. They define who we are, where we've been and where we are going. And today, many of those days, times, moments and loves interact with applications. Both the apps we tap and the back-end applications used to chronicle these events have become as much a part of our lives as the happenings themselves. The app, Life.

As reported on umpteen outlets yesterday, Twitter went down for about an hour. As news broke, there were also some fun headlines like 'Twitter goes down, chaos and productivity ensue,' 'Twitter is down. NFL free agency should be postponed,' 'Twitter is down, let the freak-out commence' and 'Twitter goes down, helps man take note it's his wife's birthday.' It is amazing how much society has come to rely on social media to communicate. Another article, 'Why Twitter Can't Keep Crashing,' goes right into the fact that it is a globally distributed, real-time information delivery system and how the world has come to depend on it, not just to share links and silly jokes but in ways that affect lives in real ways.

Whenever Facebook crashes for any amount of time, people also go crazy. Headlines for that usually read something like, 'Facebook down, birthdays/anniversaries/parties cease to exist!' Apparently, since people can't tell, post, like, share or otherwise bullhorn their important events, it doesn't actually occur. 'OMG! How am I gonna invite people to my bash in two weeks without social media?!? My life is over!' Um, paper, envelopes, stamps anyone?

We have connected wrist bracelets keeping track of our body movements, connected glasses recording every move, connected thermostats measuring home environments and pretty much any other 'thing' that you want to monitor, keep track of or measure.
From banking to buying, to educating to learning, to connecting to sharing and everything in between, our lives now rely on applications so much that when an application is unavailable, our lives get jolted. Or we pause our lives for the moment until we can access that application, as if we couldn't go on without it. My, how application availability has become critical to our daily lives.

I think The Internet of Things will soon become The Internet of Nouns, since every person, place or thing will be connected. I like that. I declare 'The Internet of Nouns' our next frontier! Sorry adverbs, love ya but you're not connected.

ps

Related:
Does Social Media Reflect Society?
The Icebox Cometh
The Top 10, Top 10 Predictions for 2014
The Internet of Things and DNS
Delivering the Internet of Things

Technorati Tags: apps, applications, social media, life, availability, twitter, facebook, society, humans, people, silva, f5, iot

Security's FUD Factor
Had a short but interesting Twitter exchange with @securityincite, @Gillis57 and @essobi (Mike Rothman, Gillis Jones and not sure (sorry!!), respectively) about using Fear, Uncertainty and Doubt when talking IT security services. @Gillis57 initially asked, 'Question: We discuss FUD constantly (and I agree that it's too prominent) But isn't security inherently built upon fear?' I sent an '09 Rothman article (@securityincite said it was 'old school' but it still has some great comments) about that very topic. Soon, @essobi chimed in with, 'Our foundation shouldn't be fear, it should be education. :D,' @Gillis57 responded, 'So, look. I agree wholeheartedly, but why do people need to be educated?' @essobi answered, 'imo? Bad programming/exploitable logic processes. we need to raise the bar or lower expectations.' @Gillis57 added, 'I really don't think we need to keep selling fear, but denying that we are a fear based industry isn't helping.' @securityincite wizdom'd with, 'Fear is a tactic like anything else. Depends in situation, context, catalyst. And use sparingly.' And I conceded that, 'splitting hairs but I try to talk about risk rather than fear - what's the risk if...which often generates fear.'

Most of the time when we talk about security there is a fear factor because we are talking about risk. Risk is the potential for something bad happening, and typically those things scare us or make us uncomfortable. Often when vendors talk about things like protection, benefits, etc., it's measured in terms of numbers, stats, performance…metrics. Security is also about peace of mind, a feeling that you have. Those military people who can get some good sleep even with bullets flying over their heads have peace of mind. Even in a very high-risk, dangerous, vulnerable and insecure environment, they feel secure. I saw an article about the difference between selling insurance and the lottery - Fear vs. Dreams. Maybe we should discuss IT security in terms of how it has made an IT guy's life better?
I think it would be cool if 'security' case studies included a sidebar or something with a quote that brags, 'Now that we have this solution installed, I'm able to attend my daughter's piano recitals,' or 'I'm able to get a good night's sleep knowing that our web site is ok/won't get paged at 3AM/won't have to work for 16hrs.' Adding to the quality of life over and above the usual ROI/TCO/performance/$$. How it may have enhanced life. How it gave peace of mind. How it reduced stress. How it allowed someone to be home for dinner. How it allowed someone to enjoy the weekend, do that science fair thing with the kid, take a longer vacation… It might be cool for the industry (and the general public) to read how someone's life improved when security is deployed, along with all the breaches and headaches. Ultimately, that's what we are all chasing as humans anyway - that harmony, balance, peace of mind, quality of life, family, love…the cores of our being rather than what we do for a job - even though our work does have a lot to do with quality of life.

I also think that education is part of our duty. Not in the 'Knights of the Round Table' sense, but if someone needs our security expertise and is willing to learn, sharing (and ultimately, awareness) is important to ensure a more informed public. That is simply being a good internet citizen. And yes, fear does have its place, especially when someone is not getting it or is ignoring that others are at risk. We frequently talk in terms of rational thinking ($$/performance) when security is quite often about an emotional feeling. That's why some often use FUD to sell security: Fear, an emotion; Uncertainty, more emotional than rational; Doubt, a gut feeling with little data. But instead of tapping those negative emotions, we should shoot for the feel-good emotions that provide safety and security. The Dream. -eh, just an idea. And many mahalos to @securityincite, @Gillis57 and @essobi for a blog idea.
ps

References
Abandon FUD, Scare Tactics and Marketing Hype
Are you Selling Fear or Dreams?
Death to FUD
Selling FUD creeping back into security sell
Time To Deploy The FUD Weapon?
How To Sell Security Solutions Without Using Fear, Uncertainty And Doubt
Researchers Warn Against Selling On Security Hype
How to Sell Security, Externality and FUD
How to Sell Security
The Four Horsemen of the Cyber-Apocalypse: Security Software FUD (awesome article)

Technorati Tags: F5, smartphone, insiders, byod, Pete Silva, security, business, education, technology, fud, threat, human behavior, kiosk, malware, fear, web, internet, twitter

The Best Day to Blog Experiment - Day 4
If you missed the past three days, welcome to The Best Day to Blog Experiment; you are now a participant. If you are a returning reader, thanks for your participation. For the first-time readers: I've come across many stories about when is the best day/time to get the most readership exposure from a blog post, and I'm doing my own brief, non-scientific experiment. The idea was to blog every day this week, track the results and report back. Mahalo for becoming a statistic, and I mean that in the most gracious way.

This is Day 4 of the experiment. So far Day 1 (Monday) got some good traction, Day 2 (Tuesday) grew with a 6.5% jump in visits over Monday, while Day 3 (Wednesday) is down 4% from Tuesday but still a decent showing - plus my week is up 37% over the previous. Thursday is the day before Friday and was NBC's 'Must See TV' for many years. As with Wednesday, the name comes from the Anglo-Saxons, to signify that this is Thunor's or Thor's day. Both gods are derived from Thunaraz, god of thunder. Supposedly, Thursday is the best day to post a blog entry. This article (different from the last link) also says that 'between 1pm and 3pm PST (after lunch) or between 5pm and 7pm PST (after work) are the best times…and the worst time to post is between 3 and 5 PM PST on the weekends.' Those articles have a bunch of charts showing traffic patterns to indicate that this is the day.

There is some doubt about this, however. Yesterday I mentioned that it might not be about the actual day at all, but about knowing when your audience is visiting and making sure content is available before they arrive. Also, if you are only worried about traffic stats and how many subscribers you have, rather than timely, engaging content, then you would worry about dropping words on a certain day. If you are creating insightful material, then the readers will find you no matter what day you post.
Danny Brown points out that with social media tools like Digg, StumbleUpon and Reddit, and sharing sites like Facebook and Twitter, a blog post can live much longer than the initial push. There's also a distinction between a personal and a business blog. With a personal blog, much of the focus is sharing ideas or writing about some recent personal experience. I realize that's an oversimplification and there's much more to it than that, but the day you post might not really matter. With a business blog, often you are covering a new feature of a product, how some new-fangled thing impacts a business, reporting on a press release and basically extending the company's message. In this case, timely blogs are important since your audience might be looking for just that - how to solve something today, or how to understand the ramifications of some new regulation or other areas of interest. It's important for a company to get a jump on these stories and show thought leadership. Also, depending on your industry, most of your colleagues will be on the Mon-Fri work schedule, and you want to catch them when they are digging for answers. Of course, this is not set in stone, but it is the prevailing notion of those who cover 'blogging.'

Personally, I write what would be considered a business blog for F5 Networks with a focus on security, cloud computing and a bit about social media, but I cover just about whatever I feel is appropriate, including pop culture. As a writer and a human, my experiences are gathered over time and influenced by both my upbringing and professional endeavors. I try to bring a bit of who I am, rather than what I do, to my posts, and typically write when inspiration hits. Going back to Danny Brown for a moment, he notes that it's the writer who makes the blog, and we do it because we like it. Communicate with your readers, share with the community and write engaging content, and you'll have visitors and readers no matter what day of the week it gets posted.
If you've followed this mini-series, you'll know that 'Songs about the Day' is a recurring theme during this blog experiment. All week, I've used The Y! Radish's blog about 'songs with days in the title' and for the 4th time in as many days, I'm 'lifting' his list for songs about Thursday.

Top 10 Songs About Thursday
1. Thursday - Asobi Seksu
2. Thursday - Morphine
3. Thursday - Country Joe & The Fish
4. Thursday The 12th - Charlie Hunter
5. Thursday's Child - Eartha Kitt
6. Thursday - Jim Croce
7. Thursday's Child - David Bowie
8. (Thursday) Here's Why I Did Not Go To Work Today - Harry Nilsson
9. Sweet Thursday - Pizzicato Five
10. Jersey Thursday - Donovan

I know it's a stretch, but my favorite Thursday song is God of Thunder - KISS.

ps

twitter: @psilvas

Now ReTweet After Me! Ah, Never Mind.
There is some interesting research over at Sysomos Inc. which indicates that 71% of Twitter messages get no reaction at all, like a reply or retweet, 85% get one @reply, and 92% of the actual retweets happen within the first hour. Over the last two months, they examined 1.2 billion tweets and found that 29% beget a reaction and only 6% were retweets. Heck, even my tweet about the story only got 1 click according to http://j.mp. While many will take this and argue that Twitter is useless, Tom Webster at BrandSavant has a different take in this blog. He notes that measuring click-stream data alone will never give accurate results; you need to measure both online and offline exploration to gauge audience participation. We already know that most people don't really engage on Twitter, and Tom makes the comparison to a newspaper editorial page: you can't measure the circulation of the New York Times just by how many people write letters. His follow-up blog also looked at it another way - instead of 71% not responding, how about 'Nearly 3 in 10 Tweets Provoke a Reaction'? That actually sounds better and, depending on the number of followers, could be a huge spread. The other question is not necessarily how many responded to your company's tweet, but whether you watch and listen to what's being said about YOU - which is probably one of the biggest benefits of micro-blogging. You can engage your audience by responding to their needs rather than blasting what you think they need. Quickly responding to a dissatisfied customer (who may not follow you at all) can transform them into a huge advocate. We've seen that here. Someone might be having difficulties with a configuration or simply expressing frustration, and we either provide some guidance or a link to the solution and voila! Their next tweet is about how awesome we are. That's how we humans operate.
It's not so much that we get what we want when things go bad; it's that someone actually listened and had empathy for our situation. We gravitate to those who care, are willing to help, or just lend an ear to our grief. This NYT article talks about how small businesses can take advantage of Twitter. Many small businesses don't have a lot to spend on advertising, and their inventory may change often. They can use Twitter to update their customers about new flavors, colors or a weekend sale, for free. The key is not to be boring. With any advertising, you need to stand out amongst all the other billboards fighting for our attention. Add a touch of attitude without arrogance and folks will notice. Interesting and entertaining. Other ways to take advantage of the medium: use it like a live FAQ as Whole Foods does, or as a portable focus group like Kiss My Bundt. Don't just sell; pique interest or arouse curiosity and include a link. Throw some trivia out there. Create the intimacy as if you're the neighborhood corner store. The age-old notion that people buy from people still holds.

ps

Defeating Attacks Easier Than Detecting Them
Defeating modern attacks - even distributed ones - isn't the problem. The problem is detecting them in the first place.

Last week researchers claimed they've discovered a way to exploit a basic security flaw in software heavily used by Web 2.0 applications to support, if not single sign-on, then the next best thing: a single source of online identity. The prevalence of OAuth and OpenID across the Web 2.0 application realm could potentially be impacted (and not in a good way) if the flaw were to be exploited. Apparently a similar flaw was used in the past to successfully exploit Microsoft's Xbox 360, so the technique is possible and has been proven 'in the wild.'

The attacks are thought to be so difficult because they require very precise measurements. They crack passwords by measuring the time it takes for a computer to respond to a login request. On some login systems, the computer will check password characters one at a time, and kick back a "login failed" message as soon as it spots a bad character in the password. This means a computer returns a completely bad login attempt a tiny bit faster than a login where the first character in the password is correct. […] But Internet developers have long assumed that there are too many other factors -- called network jitter -- that slow down or speed up response times and make it almost impossible to get the kind of precise results, where nanoseconds make a difference, required for a successful timing attack. Those assumptions are wrong, according to Lawson, founder of the security consultancy Root Labs. He and Nelson tested attacks over the Internet, local-area networks and in cloud computing environments and found they were able to crack passwords in all the environments by using algorithms to weed out the network jitter.
-- Researchers: Password crack could affect millions, ComputerWorld, July 2010

Actually, after reading the first few paragraphs I'm surprised that this flaw wasn't exploited a lot sooner. The ability to measure fairly accurately the components that make up web application performance is not unknown, after all, so the claim that an algorithm can correctly "weed out" network latency is not at all surprising. But what if the timing were randomized by, say, an intermediary injecting additional delays into the response? You can't accurately account for something that's randomly added (or not added, as the case may be), and as long as the random generation is seeded with something that cannot be derived from the context of the session, there are few algorithms that could figure out what the seed might be. That's important because random number generation often isn't truly random, and it can often be predicted by knowing what was used to seed the generator. So we could defeat such an attack by simply injecting random amounts of delay into the response. Or, because the attack depends on an observable difference in timing, simply normalizing response times for the login process would also defeat it. This is the solution pointed out in another article on the discovery, "OAuth and OpenID Vulnerable to Timing Attack", in which developers of the impacted libraries indicate that just six lines of code will solve the problem by normalizing response times. This, of course, illustrates a separate problem: reliance on external sources to address security risks that millions may be vulnerable to right now, because while the resolution is simple, it may take days, weeks, or more before it is available. This particular attack would indeed be disastrous were it to be exploited, given the reliance on these libraries by so many popular web sites.
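The vulnerable pattern and its fix can be sketched in a few lines. This is an illustrative Python sketch of the general technique, not code from the affected libraries: the early-exit comparison returns faster the earlier the first wrong byte appears, which is exactly the signal a timing attack measures, while the accumulator version always touches every byte so comparison time no longer depends on how much of the guess is correct.

```python
def insecure_equals(secret: bytes, guess: bytes) -> bool:
    """Early-exit comparison: bails out at the first mismatched byte,
    so a guess with a longer correct prefix takes slightly longer."""
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False  # leaks the position of the first wrong byte
    return True

def constant_time_equals(secret: bytes, guess: bytes) -> bool:
    """Accumulate differences with XOR and never exit early -- roughly
    the 'six lines of code' style of fix described above."""
    if len(secret) != len(guess):
        return False
    result = 0
    for s, g in zip(secret, guess):
        result |= s ^ g  # nonzero if any byte differs; loop always completes
    return result == 0
```

Python's standard library exposes the same idea as `hmac.compare_digest`, which is the preferred call in real code.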
And though the solutions are fairly easy to implement, that isn't the real problem. The real problem is how difficult such attacks are becoming to detect, especially in the face of the risk incurred by remaining vulnerable while solutions are developed and distributed.

DEFEATING is EASY. DETECTING is HARD.

The trick here, and what makes many modern attacks so dangerous, is that it's really, really hard to detect them in the first place. Any attack that can be distributed across multiple clients - clients smart enough to synchronize and communicate with one another - becomes difficult to detect, especially in a load-balanced (elastic) environment in which those requests are likely spread across multiple application instances. The variability in where attacks are coming from makes it very difficult to see an attack occurring in real time, because no single stream exhibits the behavior most security-focused infrastructure watches for. What's needed to detect such an attack is to be more concerned with what is being targeted than with by whom or from where. While you want to keep track of that data, the trigger for such brute-force attacks is the target, not the client activity. Attackers are getting smart; they know that repeated attempts at X or Y will be detected and that more than likely their client will be blacklisted for a period of time (if not permanently), so they've come up with alternative methods that 'hide' and try to appear like normal requests and responses instead. In fact, it could be postulated that it is not repeated attempts to log in from a single location that are the problem today, but rather repeated attempts to log in from multiple locations across time. So what you have to look for is not necessarily (or only) repeated requests, but also repeated attempts to access specific resources, like a login page.
But a login page is going to see a lot of use, so it’s not just the login page you need to be concerned with, but the credentials as well. In any brute-force account-level attack there are going to be multiple requests trying to access a resource using the same credentials. That’s the trigger. It requires more context than your traditional connection- or request-based security triggers because you’re not just looking at an IP address or a resource; you’re looking deeper and trying to infer from a combination of data and behavior what’s going on.

THIS SITUATION will BECOME the STATUS QUO

As we move forward into a new networking frontier, as applications and users are decoupled from IP addresses and distribution of both clients and applications across cloud computing environments becomes more common, the detection of attacks is going to get even more difficult without collaboration. The new network, the integrated and collaborative network, the dynamic network, is going to become a requirement. Network, application network, and security infrastructure needs a new way of combating modern threats and their attack patterns. That new way is going to require context and the sharing of that context across all the strategic points of control at which such attacks might be mitigated.

This becomes important because, as pointed out earlier, many web applications rely upon third-party solutions for functionality. Open source or not, it still takes time for developers to implement a solution and subsequently for organizations to incorporate and redeploy the “patched” application. That leaves users of such applications vulnerable to exploitation or identity theft in the interim. Security infrastructure must be able to detect such attacks in order to protect users and corporate resources (infrastructure, applications, and data) from exploitation while they wait for such solutions.
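Keying detection on the target rather than the source can be sketched in a few lines. In this hypothetical Python fragment (the window and threshold values are illustrative, not recommendations), failed logins are tracked per credential, so a distributed attack from many IPs against one account still trips the trigger:

```python
import time
from collections import defaultdict, deque
from typing import Optional

WINDOW_SECONDS = 3600   # look-back window (assumption: one hour)
MAX_FAILURES = 10       # threshold per *credential*, not per client IP

# Failed attempts keyed by the targeted username: each entry is (timestamp, source_ip).
_failures = defaultdict(deque)

def record_failure(username: str, source_ip: str, now: Optional[float] = None) -> bool:
    """Record a failed login; return True if the credential looks under attack."""
    now = time.time() if now is None else now
    attempts = _failures[username]
    attempts.append((now, source_ip))
    # Expire attempts that fell outside the look-back window.
    while attempts and attempts[0][0] < now - WINDOW_SECONDS:
        attempts.popleft()
    distinct_sources = {ip for _, ip in attempts}
    # Many failures against one account from many sources is the telltale
    # pattern of a distributed, low-and-slow brute force.
    return len(attempts) >= MAX_FAILURES and len(distinct_sources) > 1
```

Note that a per-IP rate limiter would never fire here, since each source stays under any reasonable per-client threshold.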
When is more important than where when it comes to addressing newly discovered vulnerabilities, but when is highly dependent upon the ability of infrastructure to detect an attack in the first place. We’re going to need infrastructure that is context-aware.

Devops: Controlling Application Release Cycles to Avoid the WordPress Effect
Minimizing the impact of code changes on multi-tenant applications requires a little devops “magic” and a broader architectural strategy.

Ignoring the unavoidable “cloud outage” hysteria that accompanies any Web 2.0 application outage today, there’s been some very interesting analysis of how WordPress – and other multi-tenant Web 2.0 applications – can avoid a similar mistake. One such suggestion is the use of a “feathered release schedule”, which is really just a controlled roll-out of a new codebase as a means of minimizing the impact of an error. We’d call this “fault isolation” in data center architecture 101. It turns out that such an architectural strategy is fairly easy to achieve if you have the right components and the right perspective. But before we dive into how to implement such an architecture, we need to understand what caused the outage.

I Can Has UR .htaccess File
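The “feathered” idea is simply that only a fraction of tenants see the new codebase at first. A minimal Python sketch of one way to do that, assuming a hash-based split (the function and constant names are hypothetical, not from WordPress):

```python
import hashlib

CANARY_FRACTION = 0.05  # assumption: expose the new codebase to ~5% of tenants first

def uses_new_codebase(tenant_id: str, fraction: float = CANARY_FRACTION) -> bool:
    # Hash the tenant ID into [0, 1) so each tenant is consistently pinned
    # to either the old or the new codebase for the life of the roll-out.
    digest = hashlib.sha256(tenant_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return bucket < fraction
```

Raising `fraction` in stages widens the roll-out; an error discovered early is isolated to the small canary population instead of every tenant at once.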
Notice that isn’t a question, it’s a statement of fact.

Twitter is having a bad month. After it was blamed, albeit incorrectly, for a breach leading to the disclosure of both personal and corporate information via Google’s GMail and Apps, its apparent willingness to allow anyone and everyone access to a .htaccess file ostensibly protecting search.twitter.com made the rounds via, ironically, Twitter. This vulnerability at first glance appears fairly innocuous, until you realize just how much information can be placed in an .htaccess file that could have been exposed by this technical configuration faux pas. Included in the .htaccess file are a number of URI rewrites, which give an interesting view of the underlying file system hierarchy Twitter is using, as well as a (rather) lengthy list of IP addresses denied access. All in all, not that exciting, because many of the juicy bits that could be configured via .htaccess for any given website are not done so in this easily accessible .htaccess file.

Some things you can do with .htaccess, in case you aren’t familiar:

- Create a default error document
- Enable SSI via .htaccess
- Deny users by IP
- Change your default directory page
- Redirects
- Prevent hotlinking of your images
- Prevent directory listing

.htaccess is a very versatile little file, capable of handling all sorts of security and application delivery tasks. Now what’s interesting is that the .htaccess file is in the root directory and should not be accessible. Apache configuration files are fairly straightforward, and there is a plethora of examples of how to prevent .htaccess – and its wealth of information – from being viewed by clients. Obfuscation, of course, is one possibility, as Apache’s httpd.conf allows you to specify the name of the access file with a simple directive:

    AccessFileName .htaccess

It is a simple enough thing to change the name of the file, thus making it more difficult for automated scans to discover vulnerable access files and retrieve them.
A little addition to httpd.conf regarding the accessibility of such files will also prevent curious folks from poking at .htaccess and retrieving it with ease. After all, there is no reason for an access file to be viewed by a client; it’s a server-side security configuration mechanism, meant only for the web server, and should not be exposed given the potential for leaking a lot of information that could lead to a more serious breach in security.

    <Files ~ "^\.ht">
        Order allow,deny
        Deny from all
        Satisfy All
    </Files>

Another option, if you have an intermediary enabled with network-side scripting, is to prevent access to any .htaccess file across your entire infrastructure. Changes to httpd.conf must be made on every server, so if you have a lot of servers to manage and protect it’s quite possible you’d miss one due to the sheer volume of servers to slog through. Using a network-side scripting solution eliminates that possibility because it’s one change that can immediately affect all servers. Here’s an example using an iRule, but you should also be able to use mod_rewrite to accomplish the same thing if you’re using an Apache-based proxy:

    when HTTP_REQUEST {
        # Check the requested URI
        switch -glob [string tolower [HTTP::path]] {
            "/.ht*" { reject }
            default { pool bigwebpool }
        }
    }

However you choose to protect that .htaccess file, just do it. This isn’t rocket science, it’s a straight-up simple configuration error that could potentially lead to more serious breaches in security – especially if your .htaccess file contains more sensitive (and informative) information.
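For the mod_rewrite route mentioned above, the equivalent rule on an Apache-based proxy might look something like this (a sketch, untested against any particular deployment):

```apache
RewriteEngine On
# Forbid any request whose path references a .ht* file, case-insensitively.
RewriteCond %{REQUEST_URI} /\.ht [NC]
RewriteRule .* - [F,L]
```

The `[F]` flag returns 403 Forbidden outright, so the request never reaches a backend server, matching the behavior of the iRule’s `reject`.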
- An Unhackable Server is Still Vulnerable
- Twittergate Reveals E-Mail is Bigger Security Risk than Twitter
- Automatically Removing Cookies
- Clickjacking Protection Using X-FRAME-OPTIONS Available for Firefox
- Stop brute force listing of HTTP OPTIONS with network-side scripting
- Jedi Mind Tricks: HTTP Request Smuggling
- I am in your HTTP headers, attacking your application
- Understanding network-side scripting

Are You Scrubbing the Twitter Stream on Your Web Site?
Never, never trust content from a user, even if that user is another application.

Web 2.0 is as much about integration as it is about interactivity. Thus it’s no surprise that an increasing number of organizations are including a feed of their recent Twitter activity on their sites. But like any user-generated content, and it is user generated after all, there’s a potential risk to the organization and its visitors from integrating such content without validation. A recent political effort in the UK included launching a web site that integrated a live Twitter stream based on a particular hashtag. That’s a fairly common practice, nothing to get excited about. What happened, however, is something we should get excited about and pay close attention to, because as Twitter streams continue to flow into more and more web sites it is likely to happen again.

Essentially the Twitter stream was corrupted. Folks figured out that if they tweeted JavaScript instead of plain old messages, the web site would interpret the script as legitimate and execute the code. You can imagine where that led – Rickrolling and redirecting visitors to political opponents’ sites were the least obnoxious of the results.

It [a web site] was also set up to collect Twitter messages that contained the hashtag #cashgordon and republish it in a live stream on the home page. However a configuration error was discovered as any messages containing the #cashgordon hashtag were being published, as well as whatever else they contained. Trend Micro senior security advisor Rik Ferguson commented that if users tweeted JavaScript instead of standard messages, this JavaScript would be interpreted as a legitimate part of the Cash Gordon site by the visitor's browser. This would redirect the user to any site of their choosing, and this saw the site abused to the point of being taken offline.
The abuse was noted and led to Twitter users sending users to various sites, including pornography sites, the Labour Party website and a video of 1980s pop star Rick Astley.

– Conservative effort at social media experiment leaves open source Cash Gordon site directing to adult and Labour Party websites, SC Magazine UK
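The defense is exactly what the title says: scrub the stream. Treat every tweet as untrusted input and HTML-encode it before it reaches the page. A minimal sketch in Python (the function name and markup are illustrative, not from the Cash Gordon site):

```python
import html

def render_tweet(raw_tweet: str) -> str:
    # HTML-encode the tweet so any embedded markup, such as a <script>
    # payload, is displayed as text instead of being interpreted and
    # executed by the visitor's browser.
    return "<li>%s</li>" % html.escape(raw_tweet, quote=True)

# A tweet carrying a script payload is neutralized on output:
malicious = '<script>window.location="http://example.invalid"</script>'
safe = render_tweet(malicious)
```

The same encode-on-output rule applies whatever the templating layer; the mistake on the Cash Gordon site was republishing tweet content verbatim into the page.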