Introducing PoshTweet - The PowerShell Twitter Script Library
It's probably no surprise to those of you who follow my blog and tech tips here on DevCentral that I'm a fan of Windows PowerShell. I've written a set of Cmdlets that allow you to manage and control your BIG-IP application delivery controllers from within PowerShell, along with a whole set of articles around those Cmdlets. I've been a Twitter user for a few years now, and over the holidays I noticed that Jeffrey Snover from the PowerShell team has hopped aboard the Twitter bandwagon. That got me to thinking: since I live so much of my time at the PowerShell command prompt, wouldn't it be great to be able to tweet from there too? Of course it would!

HTTP Requests

So, last night I went ahead and whipped up a first draft of a set of PowerShell functions that allow access to the Twitter services. I implemented the functions based on Twitter's REST-based methods, so all that was really needed to get things going was to implement the HTTP GET and POST requests behind the different API methods. Here's what I came up with.

    function Execute-HTTPGetCommand()
    {
        param([string] $url = $null);

        if ( $url )
        {
            [System.Net.WebClient]$webClient = New-Object System.Net.WebClient;
            $webClient.Credentials = Get-TwitterCredentials;
            [System.IO.Stream]$stream = $webClient.OpenRead($url);
            [System.IO.StreamReader]$sr = New-Object System.IO.StreamReader -argumentList $stream;
            [string]$results = $sr.ReadToEnd();
            $results;
        }
    }

    function Execute-HTTPPostCommand()
    {
        param([string] $url = $null, [string] $data = $null);

        if ( $url -and $data )
        {
            [System.Net.WebRequest]$webRequest = [System.Net.WebRequest]::Create($url);
            $webRequest.Credentials = Get-TwitterCredentials;
            $webRequest.PreAuthenticate = $true;
            $webRequest.ContentType = "application/x-www-form-urlencoded";
            $webRequest.Method = "POST";
            $webRequest.Headers.Add("X-Twitter-Client", "PoshTweet");
            $webRequest.Headers.Add("X-Twitter-Version", "1.0");
            $webRequest.Headers.Add("X-Twitter-URL", "http://devcentral.f5.com/s/poshtweet");

            # Encode the form data and write it to the request stream
            [byte[]]$bytes = [System.Text.Encoding]::UTF8.GetBytes($data);
            $webRequest.ContentLength = $bytes.Length;
            [System.IO.Stream]$reqStream = $webRequest.GetRequestStream();
            $reqStream.Write($bytes, 0, $bytes.Length);
            $reqStream.Flush();

            [System.Net.WebResponse]$resp = $webRequest.GetResponse();
            $rs = $resp.GetResponseStream();
            [System.IO.StreamReader]$sr = New-Object System.IO.StreamReader -argumentList $rs;
            [string]$results = $sr.ReadToEnd();
            $results;
        }
    }

Credentials

Once those were completed, it was relatively simple to get the Status methods for public_timeline, friends_timeline, user_timeline, show, update, replies, and destroy going. But several of those services require user credentials. I opted to store them in a script-scoped variable and provided a few functions to get/set the username/password for Twitter.
    $script:g_creds = $null;

    function Set-TwitterCredentials()
    {
        param([string]$user = $null, [string]$pass = $null);

        if ( $user -and $pass )
        {
            $script:g_creds = New-Object System.Net.NetworkCredential -argumentList ($user, $pass);
        }
        else
        {
            # No values supplied - prompt for them interactively
            $creds = Get-TwitterCredentials;
        }
    }

    function Get-TwitterCredentials()
    {
        if ( $null -eq $script:g_creds )
        {
            trap
            {
                Write-Error "ERROR: You must enter your Twitter credentials for PoshTweet to work!";
                continue;
            }
            $c = Get-Credential;
            if ( $c )
            {
                $user = $c.GetNetworkCredential().Username;
                $pass = $c.GetNetworkCredential().Password;
                $script:g_creds = New-Object System.Net.NetworkCredential -argumentList ($user, $pass);
            }
        }
        $script:g_creds;
    }

The Status functions

Now that the credentials were out of the way, it was time to tackle the Status methods. These methods are a combination of HTTP GETs and POSTs that return an array of status entries. For those interested in the raw underlying XML that's returned, I've included a $raw parameter that, when set to $true, skips the user-friendly display and dumps the full XML response. This is handy if you want to customize the output beyond what I've done.

    #----------------------------------------------------------------------------
    # public_timeline
    #----------------------------------------------------------------------------
    function Get-TwitterPublicTimeline()
    {
        param([bool]$raw = $false);
        $results = Execute-HTTPGetCommand "http://twitter.com/statuses/public_timeline.xml";
        Process-TwitterStatus $results $raw;
    }

    #----------------------------------------------------------------------------
    # friends_timeline
    #----------------------------------------------------------------------------
    function Get-TwitterFriendsTimeline()
    {
        param([bool]$raw = $false);
        $results = Execute-HTTPGetCommand "http://twitter.com/statuses/friends_timeline.xml";
        Process-TwitterStatus $results $raw;
    }

    #----------------------------------------------------------------------------
    # user_timeline
    #----------------------------------------------------------------------------
    function Get-TwitterUserTimeline()
    {
        param([string]$username = $null, [bool]$raw = $false);
        if ( $username )
        {
            $username = "/$username";
        }
        $results = Execute-HTTPGetCommand "http://twitter.com/statuses/user_timeline$username.xml";
        Process-TwitterStatus $results $raw;
    }

    #----------------------------------------------------------------------------
    # show
    #----------------------------------------------------------------------------
    function Get-TwitterStatus()
    {
        param([string]$id, [bool]$raw = $false);
        if ( $id )
        {
            # Build the URL as a single string; "url" + $id + ".xml" would be
            # parsed as three separate arguments to the function
            $results = Execute-HTTPGetCommand "http://twitter.com/statuses/show/$id.xml";
            Process-TwitterStatus $results $raw;
        }
    }

    #----------------------------------------------------------------------------
    # update
    #----------------------------------------------------------------------------
    function Set-TwitterStatus()
    {
        param([string]$status, [bool]$raw = $false);
        # Requires the System.Web assembly to be loaded
        $encstatus = [System.Web.HttpUtility]::UrlEncode("$status");
        $results = Execute-HTTPPostCommand "http://twitter.com/statuses/update.xml" "status=$encstatus";
        Process-TwitterStatus $results $raw;
    }

    #----------------------------------------------------------------------------
    # replies
    #----------------------------------------------------------------------------
    function Get-TwitterReplies()
    {
        param([bool]$raw = $false);
        $results = Execute-HTTPGetCommand "http://twitter.com/statuses/replies.xml";
        Process-TwitterStatus $results $raw;
    }

    #----------------------------------------------------------------------------
    # destroy
    #----------------------------------------------------------------------------
    function Destroy-TwitterStatus()
    {
        param([string]$id = $null);
        if ( $id )
        {
            # Note: no comma between the two arguments - a comma would pass a
            # single two-element array as the first parameter
            Execute-HTTPPostCommand "http://twitter.com/statuses/destroy/$id.xml" "id=$id";
        }
    }

You may notice the Process-TwitterStatus function. Since there was a lot of duplicate code in each of these functions, I went ahead and pulled it out into its own function below:

    function Process-TwitterStatus()
    {
        param([string]$sxml = $null, [bool]$raw = $false);

        if ( $sxml )
        {
            if ( $raw )
            {
                # Raw mode: emit the XML response untouched
                $sxml;
            }
            else
            {
                [xml]$xml = $sxml;
                if ( $xml.statuses.status )
                {
                    $stats = $xml.statuses.status;
                }
                elseif ( $xml.status )
                {
                    $stats = $xml.status;
                }
                $stats | Foreach-Object -process {
                    $info = "by " + $_.user.screen_name + ", " + $_.created_at;
                    if ( $_.source )
                    {
                        $info = $info + " via " + $_.source;
                    }
                    if ( $_.in_reply_to_screen_name )
                    {
                        $info = $info + " in reply to " + $_.in_reply_to_screen_name;
                    }
                    "-------------------------";
                    $_.text;
                    $info;
                };
                "-------------------------";
            }
        }
    }

A few hurdles

Nothing goes without a hitch, and I found myself pounding my head over why my POST commands were all getting HTTP 417 errors back from Twitter. A quick search brought up a post on Phil Haack's website, as well as a Google Group discussing a change in how Twitter's services react to the Expect: 100-continue HTTP header. A simple ServicePointManager setting at the top of the script was all that was needed to get things working again.

    [System.Net.ServicePointManager]::Expect100Continue = $false;

PoshTweet in Action

So, now it's time to try it out. First you'll need to dot-source the script and then set your Twitter credentials. This can be done in your PowerShell $profile file if you wish. Then you can access all of the included functions. Below, I call Set-TwitterStatus to update my current status, and then Get-TwitterUserTimeline and Get-TwitterFriendsTimeline to get my current timeline as well as that of my friends.

    PS> . .\PoshTweet.ps1
    PS> Set-TwitterCredentials
    PS> Set-TwitterStatus "Hacking away with PoshTweet"
    PS> Get-TwitterUserTimeline
    -------------------------
    Hacking away with PoshTweet
    by joepruitt, Tue Dec 30, 12:33:04 +0000 2008 via web
    -------------------------
    PS> Get-TwitterFriendsTimeline
    -------------------------
    @astrout Yay, thanks!
    by mediaphyter, Tue Dec 30 20:37:15 +0000 2008 via web in reply to astrout
    -------------------------
    RT @robconery: Headed to a Portland Nerd Dinner tonite - should be fun! http://bit.ly/EUFC
    by shanselman, Tue Dec 30 20:37:07 +0000 2008 via TweetDeck
    -------------------------
    ...

Things Left Todo

As I said, this was implemented in an hour or so last night, so it definitely needs some more work, but I believe I've got the Status methods pretty much covered. Next I'll move on to the other services - User, Direct Message, Friendship, Account, Favorite, Notification, Block, and Help - when I've got time. I'd also like to add support for the "source" field. I'll need to set up a public-facing landing page for this library so the folks at Twitter will add it to their system. Once I get all the services implemented, I'll move forward with formalizing this as an application and submit it for consideration.

Collaboration

I've posted the source to this set of functions on the DevCentral wiki under PsTwitterApi. You'll need to create an account to get to it, but I promise it will be worth it! Feel free to contribute and add to it if you have the time. Everyone is welcome and encouraged to tear my code apart, optimize it, enhance it.
Just as long as it gets better in the process. B-)

API Request Throttling: A Better Option
This past week there's been some interesting commentary regarding Twitter's change to its API request throttling feature. Request throttling, often used as a method to ensure QoS (Quality of Service) for a variety of network and application uses, is used by Twitter as an attempt to keep the system from being overwhelmed to the point that it is forced to display the now (in)famous Twitter fail whale image.

One of the things you can do with a BIG-IP Local Traffic Manager (LTM) and iRules is request throttling. Why would you want to let a mediating device like an application delivery controller handle request throttling? Because request throttling implemented by the server still requires the server to respond to the request. The act of responding wastes some of the very resources you're trying to save by request throttling in the first place. It's like taking two steps forward and one back. By allowing the application delivery controller to manage throttling, you relieve that burden on the servers and free up resources so the servers can do what they're designed to do: serve content. Because an intermediary that is also a full proxy (like BIG-IP LTM) terminates the TCP connection on the client side, it does not need to bother the server at all when a client has exceeded its allotted request usage.

Now you might be thinking that such a solution would be fine for an entire site, but Twitter (and others) use request throttling on a per-API-call basis, not across the entire site - wouldn't a general solution stop people from even connecting to twitter.com? It depends on the implementation. In the case of BIG-IP and iRules, request throttling can be done on a per-virtual-server (usually corresponding to a single "web site") basis, or it can get as granular as specific URIs. In the case of a site with an API like Twitter's, the URIs generally correspond to the REST-based API calls. That means not only can you throttle requests in general, but you can get even more specific and throttle requests based on specific API calls. If one of the API calls is particularly resource-intensive, you could limit it further than those that are less resource-intensive. So while querying may be limited to 40 requests per hour, perhaps updating is limited to 30. Or vice versa. The ability to inspect, detect, and direct messages lets you get as specific as you want - or need - according to the needs of your application and your specific architecture.

It really gets interesting when you consider that you could further make decisions based on parameters, such as a specific user and the application function. Because an intelligent application delivery controller can inspect messages on both request and reply, you can use information returned from a specific request to control the way future requests are handled, whether that's permanently or for a specified time interval. This kind of functionality is also excellent for service providers moving services to tiers, i.e. "premium (paid) services". By indicating the level of service that should be provided to a given user, usually by setting a cookie, BIG-IP can dynamically apply the appropriate request throttling to that user's service. The reason this is exciting is that it can be done transparently - without modifying the application itself. That means changes in business models can be implemented faster and with less interruption.

As an example, here's a simple iRule that throttles HTTP requests to 3 per second per client.
    when HTTP_REQUEST {
        # Current time, in whole seconds
        set cur_time [clock seconds]

        # For every request on this connection after the first, check the counter
        if { [HTTP::request_num] > 1 } {
            if { $cur_time == $start_time } {
                if { $reqs_sec > 3 } {
                    # Over the limit for this second - tell the client to back off
                    HTTP::respond 503 Retry-After 2
                }
                incr reqs_sec
                return
            }
        }

        # First request, or a new second - reset the window
        set start_time $cur_time
        set reqs_sec 0
    }

Simple, effective, transparent to the servers. Thanks to our guys in the field for writing this one and sharing!

It doesn't make sense to implement request throttling inside an application when the reason you're implementing it is that the servers are overwhelmed. Let an intermediary, an application delivery controller, do it for you.
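If you want to see how the per-API-call throttling described above might look in practice, here's a rough sketch. Consider it a starting point rather than a finished recipe: it assumes the session table commands introduced in BIG-IP v10.0, and the paths, limits, and one-hour window are purely illustrative. A tiered-services version could just as easily select the limit from a cookie value instead of the path.

    when HTTP_REQUEST {
        # Pick a per-hour limit based on which API call is being requested
        # (paths and numbers here are illustrative only)
        switch -glob [string tolower [HTTP::path]] {
            "/statuses/update*" { set limit 30 }
            "/statuses/*"       { set limit 40 }
            default             { set limit 100 }
        }

        # Count requests per client IP per API call in the session table,
        # so the count persists across TCP connections
        set key "throttle:[IP::client_addr]:[HTTP::path]"
        set count [table incr $key]
        table timeout $key 3600

        if { $count > $limit } {
            # Over quota - answer from the ADC without ever touching a server
            HTTP::respond 503 Retry-After 3600
            return
        }
    }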
Notice that isn’t a question, it’s a statement of fact Twitter is having a bad month. After it was blamed, albeit incorrectly, for a breach leading to the disclosure of both personal and corporate information via Google’s GMail and Apps, its apparent willingness to allow anyone and everyone access to a .htaccess file ostensibly protecting search.twitter.com made the rounds via, ironically, Twitter. This vulnerability at first glance appears fairly innocuous, until you realize just how much information can be placed in an .htaccess file that could have been exposed by this technical configuration faux pas. Included in the .htaccess file is a number of URI rewrites, which give an interesting view of the underlying file system hierarchy Twitter is using, as well as a (rather) lengthy list of IP addresses denied access. All in all, not that exciting, because many of the juicy bits that could be configured via .htaccess for any given website are not done so in this easily accessible .htaccess file. Some things you can do with .htaccess, in case you aren’t familiar: Create default error document Enable SSI via htaccess Deny users by IP Change your default directory page Redirects Prevent hotlinking of your images Prevent directory listing .htaccess is a very versatile little file, capable of handling all sorts of security and application delivery tasks. Now what’s interesting is that the .htaccess file is in the root directory and should not be accessible. Apache configuration files are fairly straight forward, and there are plethora examples of how to prevent .htaccess – and its wealth of information – from being viewed by clients. Obfuscation, of course, is one possibility, as Apache’s httpd.conf allows you to specify the name of the access file with a simple directive: AccessFileName .htaccess It is a simple enough thing to change the name of the file, thus making it more difficult for automated scans to discover vulnerable access files and retrieve them. A little addition to the httpd.conf regarding the accessibility of such files, too, will prevent curious folks from poking at .htaccess and retrieving them with ease. After all, there is no reason for an access file to be viewed by a client; it’s a server-side security configuration mechanism, meant only for the web server, and should not be exposed given the potential for leaking a lot of information that could lead to a more serious breach in security. ~ "^\.ht"> Order allow,deny Deny from all Satisfy All Another option, if you have an intermediary enabled with network-side scripting, is to prevent access to any .htaccess file across your entire infrastructure. Changes to httpd.conf must be done on every server, so if you have a lot of servers to manage and protect it’s quite possible you’d miss one due to the sheer volume of servers to slog through. Using a network-side scripting solution eliminates that possibility because it’s one change that can immediately affect all servers. Here’s an example using an iRule, but you should also be able to use mod_rewrite to accomplish the same thing if you’re using an Apache-based proxy: when HTTP_REQUEST { # Check the requested URI switch -glob [string tolower [HTTP::path]] { "/.ht*" { reject } default { pool bigwebpool } } } However you choose to protect that .htaccess file, just do it. 
This isn’t rocket science, it’s a straight-up simple configuration error that could potentially lead to more serious breaches in security – especially if your .htaccess file contains more sensitive (and informative) information.

Related reading:
- An Unhackable Server is Still Vulnerable
- Twittergate Reveals E-Mail is Bigger Security Risk than Twitter
- Automatically Removing Cookies
- Clickjacking Protection Using X-FRAME-OPTIONS Available for Firefox
- Stop brute force listing of HTTP OPTIONS with network-side scripting
- Jedi Mind Tricks: HTTP Request Smuggling
- I am in your HTTP headers, attacking your application
- Understanding network-side scripting

The Applications of Our Lives
The Internet of Things will soon become The Internet of Nouns

There are a few 'The ______ of Our Lives' out there: Days. Time. Moments. Love. They define who we are, where we've been and where we are going. And today, many of those days, times, moments and loves interact with applications. Both the apps we tap and the back-end applications used to chronicle these events have become as much a part of our lives as the happenings themselves. The app, Life.

As reported on umpteen outlets yesterday, Twitter went down for about an hour. As news broke, there were also some fun headlines like 'Twitter goes down, chaos and productivity ensue,' 'Twitter is down. NFL free agency should be postponed,' 'Twitter is down, let the freak-out commence' and 'Twitter goes down, helps man take note it's his wife's birthday.' It is amazing how much society has come to rely on social media to communicate. Another article, 'Why Twitter Can't Keep Crashing,' goes right into the fact that it is a globally distributed, real-time information delivery system, and how the world has come to depend on it - not just to share links and silly jokes, but in ways that affect lives for real.

Whenever Facebook crashes for any amount of time, people also go crazy. Headlines for that usually read something like 'Facebook down, birthdays/anniversaries/parties cease to exist!' Apparently, since people can't tell, post, like, share or otherwise bullhorn their important events, those events don't actually occur. 'OMG! How am I gonna invite people to my bash in two weeks without social media?!? My life is over!' Um, paper, envelopes, stamps anyone?

We have connected wrist bracelets keeping track of our body movements, connected glasses recording every move, connected thermostats measuring home environments and pretty much any other 'thing' that you want to monitor, keep track of or measure. From banking to buying, educating to learning, connecting to sharing and everything in between, our lives now rely on applications - so much so that when an application is unavailable, our lives get jolted. Or we pause our lives for the moment until we can access that application, as if we couldn't go on without it. My, how application availability has become critical to our daily lives.

I think The Internet of Things will soon become The Internet of Nouns, since every person, place or thing will be connected. I like that. I'm calling 'The Internet of Nouns' our next frontier! Sorry adverbs, love ya but you're not connected.

ps

Related:
- Does Social Media Reflect Society?
- The Icebox Cometh
- The Top 10, Top 10 Predictions for 2014
- The Internet of Things and DNS
- Delivering the Internet of Things

Get Social with DevCentral
That title sounds so 2009, but let's go with it anyway. #Flashback…no, #Throwback…no, how about #TinkerTuesday? Is there such a thing? (There is.) #DevCentral will be ramping up our social activities in 2018, and we wanted to share some of the media channels you can join to stay connected and engaged with the community.

Did you know that the Twitter bird has a name? It's Larry. And while DevCentral's blue ball logo doesn't have a name, you can find your @devcentral team members @psilvas, @jasonrahm, and @JohnWagnon on Twitter sharing their technology insights along with some personal daily happenings and thoughts. Stay connected for new articles, iRules, videos, and the Agility Conference, and earn additional DevCentral points for answering the question of the day!

Don't feel like reading anything and prefer to watch stuff? Then head on over to our YouTube channel for hours of instructional videos from our 'Make it Work' series, cool tech tips, and the awesome Lightboard Lessons. Lightboard Lessons are one of our most popular pieces of content, and by subscribing to our channel you'll get the first alerts via email that a new video has published. You'll probably even get to watch the video before it posts to DevCentral. That's right, early access.

Prefer to hang out with the LinkedIn crowd? While the F5 Certified! Professionals LinkedIn group is very active, the F5 DevCentral LinkedIn Group has been a little dormant recently, so we're looking to gear that up again also. With a little over 1,000 members, it's a great way to converse with other members as we march toward the 12,000+ participants in Ken's group.

When DevCentral started back in 2003, it was one of the original 'social' community sites, back when social media was still in its infancy. Members range from beginning to advanced devs, industry thought leaders, and F5 MVPs. I'm also aware that there are BIG-IP discussions on Stack Overflow, repos on GitHub, the F5 Facebook page, MVP Kevin Davies' Telegram F5 Announce and others. Where else should we engage with you, and where should we be more active? Hit us up with the hash #whereisdevcentral and we'll meet you there.

ps

My application is not the next Twitter so why should I care about high availability?
It often seems that load balancing and high availability are associated only with high-traffic sites like Twitter and Google. But load balancing and high availability aren't just for Web 2.0 phenomenons or web monsters; they can be an invaluable tool in your strategy to maintain service level agreements and customer satisfaction no matter how large or small your customer base - and data center - might be.

Load balancing is integral to scalability, to being able to increase the capacity of your web and application servers. But it is also just as inexorably linked to high availability through its ability to provide fail-over. Fail-over ensures that if, for any reason, one server in a pool/farm is unavailable, requests are redirected to a secondary or stand-by server. This ensures the site or application is available at all times. More typically, all servers in a pool are utilized at all times to improve performance and to maintain availability in the event that one or more servers become unavailable. This is true whether you have two servers or two hundred servers in your pool; whether you're Twitter or Bob's Widget Shop.

High availability can be used in multiple scenarios to provide continued availability regardless of the size and reach of your site or application.

MAINTENANCE WINDOWS

Everybody has them, and they often result in downtime that, while understandable, may frustrate customers or users, especially if it's unscheduled. Patching, upgrades, migrations, hardware changes - these can all lead to necessary downtime. By implementing a high availability strategy through load balancing, you can ensure that applications remain available. This is accomplished by performing whatever tasks are necessary on one server, allowing the second (or more) to continue to serve requests. Because the load balancer (or application delivery controller) mediates between clients and your servers, customers see no interruption of service while you are working on any one of the servers in the pool.

JUST IN CASE

Unscheduled downtime is a nice way of saying "things happen". And when those things happen that cause a server to fail - hardware, infections, licensing issues, bugs - it's nice to know that your application or site, and thus your overall availability, will not likely be affected. Unanticipated downtime can destroy your overall availability rating and cause users to go into a tizzy. A high availability deployment will prevent "things" from taking down your entire site or application, giving you time to focus on the problem at hand and solve it without fielding calls from any number of interested and angry constituents.

WIGGLE ROOM

Sometimes you develop an application and you're pretty sure you can serve the needs of your customers just fine. And then something happens and you're the next best thing on the web since Google. Maybe you got slashdotted or farked, or maybe you're the last retailer left in the country that's selling The Hottest Christmas Toy this year. Whatever the reason, a sudden spike in the volume of users can leave your servers smoking.
Implementing a high availability infrastructure can ensure that even if you don't always need that second (or one-hundred-and-twenty-second) server, in the event you do need it, it's there and immediately usable. There's nothing special you need to do; it just picks up the extra load by virtue of being part of the pool. And if you need even more wiggle room, you can add another, and another, and another server transparently. There's no need to interrupt service - just tell your load balancer or application delivery controller that the server is available, and it immediately becomes part of the pool, serving up your application to hungry users.

High availability isn't just for huge sites and applications; it's a good strategy for anyone who delivers applications via the web. If your business might suffer from downtime, then you need to consider implementing a high-availability strategy sooner rather than later.
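To make the fail-over idea concrete, here's a minimal iRule sketch that steers traffic to a stand-by pool when the primary pool has no healthy members left. The pool names are illustrative, and in many deployments the pool's own priority-group settings will do this for you without any scripting at all:

    when HTTP_REQUEST {
        # Use the primary pool while it has at least one healthy member;
        # otherwise fail over to the stand-by pool
        if { [active_members primary_pool] > 0 } {
            pool primary_pool
        } else {
            pool standby_pool
        }
    }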
Security’s FUD Factor

Had a short but interesting Twitter exchange with @securityincite, @Gillis57 and @essobi (Mike Rothman, Gillis Jones and, not sure (sorry!!), respectively) about using Fear, Uncertainty and Doubt when talking IT security services. @Gillis57 initially asked, 'Question: We discuss FUD constantly (and I agree that it's too prominent) But isn't security inherently built upon fear?' I sent an '09 Rothman article (@securityincite said it was 'old school' but it still has some great comments) about that very topic. Soon, @essobi chimed in with, 'Our foundation shouldn't be fear, it should be education. :D,' @Gillis57 responded, 'So, look. I agree wholeheartedly, but why do people need to be educated?' @essobi answered, 'imo? Bad programming/exploitable logic processes. we need to raise the bar or lower expectations.' @Gillis57 added, 'I really don't think we need to keep selling fear, but denying that we are a fear based industry isn't helping.' @securityincite wizdom'd with, 'Fear is a tactic like anything else. Depends in situation, context, catalyst. And use sparingly.' And I conceded that, 'splitting hairs but I try to talk about risk rather than fear - what's the risk if...which often generates fear.'

Most of the time when we talk about security there is a fear factor, because we are talking about risk. Risk is the potential for something bad happening, and typically those things scare or make us uncomfortable. Often when vendors talk about things like protection, benefits, etc., it's measured in terms of numbers, stats, performance…metrics. Security is also about peace of mind, a feeling that you have. Those military people who can get some good sleep even with bullets flying over their heads have peace of mind. Even in a very high-risk, dangerous, vulnerable and insecure environment, they feel secure.

I saw an article about the difference between selling insurance and the lottery - Fear vs. Dreams. Maybe we should discuss IT security in terms of how it has made an IT guy's life better? I think it would be cool if 'security' case studies included a sidebar or something with a quote that brags, 'Now that we have this solution installed, I'm able to attend my daughter's piano recitals.' 'I'm able to get a good night's sleep knowing that our web site is ok/won't get paged at 3AM/won't have to work for 16hrs.' Adding to the quality of life over and above the usual ROI/TCO/performance/$$. How it may have enhanced life. How it gave peace of mind. How it reduced stress. How it allowed someone to be home for dinner. How it allowed someone to enjoy the weekend, do that science fair thing with the kid, take a longer vacation… It might be cool for the industry (and the general public) to read how someone's life improved when security is deployed, along with all the breaches and headaches. Ultimately, that's what we are all chasing as humans anyway - that harmony, balance, peace of mind, quality of life, family, love…the cores of our being, rather than what we do for a job - even though our work does have a lot to do with quality of life.

I also think that education is part of our duty. Not 'Knights of the Roundtable' duty, but if someone needs our security expertise and is willing to learn, sharing (and ultimately, awareness) is important to ensure a more informed public. That is simply being a good internet citizen. And yes, fear does have its place, especially when someone is not getting it or is ignoring that others are at risk.
We frequently talk in terms of rational thinking ($$/performance) when security is quite often about an emotional feeling. That's why some often use FUD to sell security. Fear: emotional. Uncertainty: more emotional than rational. Doubt: a gut feeling with little data. But instead of tapping those negative emotions, we should shoot for the feel-good emotions that provide safety and security. The Dream. -eh, just an idea. And many mahalos to @securityincite, @Gillis57 and @essobi for a blog idea.

ps

References:
- Abandon FUD, Scare Tactics and Marketing Hype
- Are you Selling Fear or Dreams?
- Death to FUD
- Selling FUD creeping back into security sell
- Time To Deploy The FUD Weapon?
- How To Sell Security Solutions Without Using Fear, Uncertainty And Doubt
- Researchers Warn Against Selling On Security Hype
- How to Sell Security, Externality and FUD
- How to Sell Security
- The Four Horsemen of the Cyber-Apocalypse: Security Software FUD (awesome article)

The Best Day to Blog Experiment - Day 4
If you missed the past three days, welcome to The Best Day to Blog Experiment; you are now a participant. If you are a returning reader, thanks for your participation. For the first-time readers: I've come across many stories about when is the best day/time to get the most readership exposure from a blog post, and I'm doing my own brief, non-scientific experiment. The idea was to blog every day this week, track the results and report back. Mahalo for becoming a statistic, and I mean that in the most gracious way.

This is Day 4 of the experiment, and so far Day 1 (Monday) got some good traction, Day 2 (Tuesday) grew with a 6.5% jump in visits over Monday, while Day 3 (Wednesday) is down 4% from Tuesday but still a decent showing - plus my week is up 37% over the previous one.

Thursday is the day before Friday, and was NBC's 'Must See TV' night for many years. As with Wednesday, the name comes from the Anglo-Saxons, signifying that this is Thunor's or Thor's day. Both gods are derived from Thunaraz, god of thunder. Supposedly, Thursday is the best day to post a blog entry. This article (different from the last link) also says that 'between 1pm and 3pm PST (after lunch) or between 5pm and 7pm PST (after work) are the best times…and the worst time to post is between 3 and 5 PM PST on the weekends.' Those articles have a bunch of charts showing traffic patterns to indicate that this is the day.

There is some wonder about this, however. Yesterday I mentioned that it might not be about the actual day at all, but about knowing when your audience is visiting and making sure content is available before they arrive. Also, if you are only worried about traffic stats and how many subscribers you have, rather than timely, engaging content, then you would worry about dropping words on a certain day. If you are creating insightful material, then the readers will find you no matter what day you post. Danny Brown points out that with social media tools like Digg, StumbleUpon and Reddit, and sharing sites like Facebook and Twitter, a blog post can live much longer than the initial push.

There's also a distinction between a personal and a business blog. With a personal blog, much of the focus is sharing ideas or writing about some recent personal experience. I realize that's an oversimplification and there's much more to it than that, but the day you post might not really matter. With a business blog, often you are covering a new feature of a product, how some new-fangled thing impacts a business, or reporting on a press release - basically extending the company's message. In this case, timely blogs are important, since your audience might be looking for just that: how to solve something today, or how to understand the ramifications of some new regulation or other area of interest. It's important for a company to get a jump on these stories and show thought leadership. Also, depending on your industry, most of your colleagues will be on the Mon-Fri work schedule, and you want to catch them when they are digging for answers. Of course, this is not set in stone, but it is the prevailing notion of those who cover 'blogging.'

Personally, I only write what would be considered a business blog for F5 Networks, with a focus on Security, Cloud Computing and a bit about Social Media, but I cover just about whatever I feel is appropriate, including pop culture. As a writer and a human, my experiences are gathered over time and influenced by both my upbringing and professional endeavors.
I try to bring a bit of who I am rather than what I do to my posts and typically write when inspiration hits. Going back to Danny Brown for a moment, he notes that it's the writer who makes the blog and we do it because we like it. Communicate with your readers, share with the community and write engaging content, and you'll have visitors and readers no matter what day of the week it gets posted.

If you've followed this mini-series, you'll know that 'Songs about the Day' is a recurring theme during this blog experiment. All week, I've used The Y! Radish's blog about 'songs with days in the title,' and for the 4th time in as many days, I'm 'lifting' his list for songs about Thursday.

Top 10 Songs About Thursday
1. Thursday - Asobi Seksu
2. Thursday - Morphine
3. Thursday - Country Joe & The Fish
4. Thursday The 12th - Charlie Hunter
5. Thursday's Child - Eartha Kitt
6. Thursday - Jim Croce
7. Thursday's Child - David Bowie
8. (Thursday) Here's Why I Did Not Go To Work Today - Harry Nilsson
9. Sweet Thursday - Pizzicato Five
10. Jersey Thursday - Donovan

I know it's a stretch, but my favorite Thursday song is God of Thunder - KISS.

ps

twitter: @psilvas

Are You Scrubbing the Twitter Stream on Your Web Site?
Never, never trust content from a user, even if that user is another application.

Web 2.0 is as much about integration as it is interactivity. Thus it's no surprise that an increasing number of organizations are including a feed of their recent Twitter activity on their site. But like any user-generated content - and it is user-generated, after all - there's a potential risk to the organization and its visitors from integrating such content without validation.

A recent political effort in the UK included launching a web site that integrated a live Twitter stream based on a particular hashtag. That's a fairly common practice, nothing to get excited about. What happened, however, is something we should get excited about and pay close attention to, because as Twitter streams continue to flow into more and more web sites, it is likely to happen again. Essentially the Twitter stream was corrupted. Folks figured out that if they tweeted JavaScript instead of plain old messages, the web site would interpret the script as legitimate and execute the code. You can imagine where that led - Rickrolling and redirecting visitors to political opponents' sites were the least obnoxious of the results.

"It [a web site] was also set up to collect Twitter messages that contained the hashtag #cashgordon and republish it in a live stream on the home page. However a configuration error was discovered as any messages containing the #cashgordon hashtag were being published, as well as whatever else they contained. Trend Micro senior security advisor Rik Ferguson commented that if users tweeted JavaScript instead of standard messages, this JavaScript would be interpreted as a legitimate part of the Cash Gordon site by the visitor's browser. This would redirect the user to any site of their choosing, and this saw the site abused to the point of being taken offline. The abuse was noted and led to Twitter users sending users to various sites, including pornography sites, the Labour Party website and a video of 1980s pop star Rick Astley."
- Conservative effort at social media experiment leaves open source Cash Gordon site directing to adult and Labour Party websites, SC Magazine UK
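The durable fix for this class of problem belongs in the application: treat every tweet as hostile input and HTML-encode it before it is written into the page, using whatever encoding function your framework provides. That said, if you deliver the site through a full proxy, a network-side stopgap is possible. The sketch below is a blunt, bypassable example - not a substitute for proper output encoding - and it assumes a stream profile is assigned to the virtual server; it neutralizes literal script tags in outbound responses:

    when HTTP_RESPONSE {
        # Only rewrite textual responses
        if { [HTTP::header value Content-Type] contains "text" } {
            # Replace the opening of any script tag with its encoded form;
            # the expression format is @find@replace@
            STREAM::expression {@<script@&lt;script@}
            STREAM::enable
        } else {
            STREAM::disable
        }
    }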
The Soft Risks of Social Networking

Just about every large organization, and a whole lot of startups, are trying to leverage the potential of social media in their marketing efforts. We all read great articles containing tips and tricks on how to use social media for business purposes, and how to gauge whether or not we are successful. But these discussions often ignore the risks, especially the soft risks, of engaging the market and so-called citizen journalists at the Internet's watercoolers.

Soft risks are always part of the equation of the return on investment for a product or piece of software. Soft risks are usually nebulous, incalculable costs that are not necessarily directly related to the function of the solution we are purchasing. These are often things like the potential for the vendor to survive a tougher economy, the investment in learning a new skill or programming language required in order to leverage the new technology solution, and the unknowable costs of integrating with the rest of the infrastructure. Like investing in a solution, investing in social media has risks, but unlike solutions that are purchased to do a specific thing, social media's risks are almost all soft. They are immeasurable and, oftentimes, not obvious.

A recent article, 'You Better Think Before You Twit,' highlighted one of the potential soft risks of social media: the always uncomfortable foot in mouth. Interestingly, this article, while pointing out the potential negative aspects of being always connected to others at the Internet watercooler, kept the focus personal. But the risks involved in engaging social media in such an informal way can adversely affect the company you represent, and it's important to recognize that risk - and give guidance - before your employees are out tweeting or powncing or plurking or uploading pictures of the company's Christmas party to Flickr.

It's not just the potential slip of the tongue that reveals upcoming product plans or launches, or that gives away potentially sensitive corporate information. Most employees understand the potential harm to the organization such actions can cause and are careful to ensure they don't cross the lines they know exist. But they aren't so careful about expressing themselves on other subjects, because it is, as it were, like hanging around the watercooler. We're just doing it electronically instead of physically. This can be great for remote-office and tele-workers, making them feel like a part of the organization, but when the conversation turns to topics of a more personal and sensitive nature, it can backfire on the organization.

When Google CEO Eric Schmidt decided to publicly endorse a political candidate, he may have meant to do so personally, but because he used his position at Google while doing so, he made one of the first faux pas of social networking on the job: getting political. Discussions around the web indicated a mix of reactions, some good and some bad. Similarly, Apple's donation of $100,000 to the "No on Prop 8" campaign raised similar objections and support at Internet watercoolers around the country. In both cases there were reactions that included "I am not buying/using their products anymore because of this." Right or wrong, the reactions were real. Both companies lost customers - or potential customers - over their decisions to dabble publicly in politics. Sure, that number might be minimal, but it might also be more far-reaching than either considered. Conversely, their support might have gained them customers.
That's why it's called a soft risk: the effects can't be easily quantified, if at all. A long time ago we taught folks that politics and religion had no place in business; that discussing these taboo topics within the confines of the business world was a no-no and dangerous. It was a risk. The same is true for organizations who, unlike Google and Apple, certainly can't afford the negative hits on their reputation across the Internet based on any given employee's public discussions of things best left at home. The line between professional and personal life is indeed blurring, especially for those who are considered corporate spokespersons, as their opinions on subjects outside the realm of technology can be taken as reflecting corporate culture and views on those subjects. It's easy to forget when you're hanging out on Twitter that you aren't just you; you're representing your organization. At the beginning of the hype cycle for the election, @prnewswire lamented a bit on this fact, but wisely decided that not commenting on such things was the only logical thing to do, lest the person behind the avatar risk damage to the corporate entity it represents. And no matter which side you take on divisive topics, someone is going to be angry with that opinion and may choose to take their business elsewhere because of it.

And you kids out there, remember: Google is forever (or at least it looks like it will be), and what you say on the ever-archiving web - and how you say it - will certainly be discovered in 5 or 10 years when your (next) potential employer searches you out to aid in their decision whether or not to hire you. Before you get all bent out of shape about the potential restriction, remember that when you choose to make yourself a public figure of any kind, to any size audience, you are giving up a lot of your privacy and personal flexibility. Becoming an Internet personality sounds great until you realize it can be (and I would argue in many cases should be) a soft muzzle on your personal opinions on touchy subjects.

The rule of thumb when you are engaging folks 'out there' is simple. We call it "social media" for a reason, after all. If you're commenting on blogs, or tweeting, or powncing, or just generally engaging in conversation electronically, it behooves you to remember the "media" in social media, and treat everyone like a potential member of the press rather than as "that cool guy/gal I met on Twitter". If you wouldn't talk to the press about politics or religion or other potentially divisive topics, then you probably shouldn't be tweeting about them, either.