ISV Solutions

Introducing PoshTweet - The PowerShell Twitter Script Library
It's probably no surprise to those of you who follow my blog and tech tips here on DevCentral that I'm a fan of Windows PowerShell. I've written a set of Cmdlets that allow you to manage and control your BIG-IP application delivery controllers from within PowerShell, along with a whole set of articles around those Cmdlets. I've been a Twitter user for a few years now, and over the holidays I noticed that Jeffrey Snover from the PowerShell team has hopped aboard the Twitter bandwagon. That got me thinking: since I live so much of my time at the PowerShell command prompt, wouldn't it be great to be able to tweet from there too? Of course it would!

HTTP Requests

So, last night I went ahead and whipped up a first draft of a set of PowerShell functions that allow access to the Twitter services. I implemented the functions based on Twitter's REST-based methods, so all that was really needed to get things going was the HTTP GET and POST plumbing used by the different API methods. Here's what I came up with.

function Execute-HTTPGetCommand() {
  param([string] $url = $null);
  if ( $url ) {
    [System.Net.WebClient]$webClient = New-Object System.Net.WebClient
    $webClient.Credentials = Get-TwitterCredentials
    [System.IO.Stream]$stream = $webClient.OpenRead($url);
    [System.IO.StreamReader]$sr = New-Object System.IO.StreamReader -argumentList $stream;
    [string]$results = $sr.ReadToEnd();
    $results;
  }
}

function Execute-HTTPPostCommand() {
  param([string] $url = $null, [string] $data = $null);
  if ( $url -and $data ) {
    [System.Net.WebRequest]$webRequest = [System.Net.WebRequest]::Create($url);
    $webRequest.Credentials = Get-TwitterCredentials
    $webRequest.PreAuthenticate = $true;
    $webRequest.ContentType = "application/x-www-form-urlencoded";
    $webRequest.Method = "POST";
    $webRequest.Headers.Add("X-Twitter-Client", "PoshTweet");
    $webRequest.Headers.Add("X-Twitter-Version", "1.0");
    $webRequest.Headers.Add("X-Twitter-URL", "http://devcentral.f5.com/s/poshtweet");
    [byte[]]$bytes = [System.Text.Encoding]::UTF8.GetBytes($data);
    $webRequest.ContentLength = $bytes.Length;
    [System.IO.Stream]$reqStream = $webRequest.GetRequestStream();
    $reqStream.Write($bytes, 0, $bytes.Length);
    $reqStream.Flush();
    [System.Net.WebResponse]$resp = $webRequest.GetResponse();
    $rs = $resp.GetResponseStream();
    [System.IO.StreamReader]$sr = New-Object System.IO.StreamReader -argumentList $rs;
    [string]$results = $sr.ReadToEnd();
    $results;
  }
}

Credentials

Once those were completed, it was relatively simple to get the Status methods for public_timeline, friends_timeline, user_timeline, show, update, replies, and destroy going. But for several of those services, user credentials were required. I opted to store them in a script-scoped variable and provided a few functions to get/set the username/password for Twitter.
# Script-scoped credential cache, populated on demand.
$script:g_creds = $null;

# System.Web is needed for the UrlEncode call in Set-TwitterStatus below.
[void][System.Reflection.Assembly]::LoadWithPartialName("System.Web");

function Set-TwitterCredentials() {
  param([string]$user = $null, [string]$pass = $null);
  if ( $user -and $pass ) {
    $script:g_creds = New-Object System.Net.NetworkCredential -argumentList ($user, $pass);
  } else {
    $creds = Get-TwitterCredentials;
  }
}

function Get-TwitterCredentials() {
  if ( $null -eq $g_creds ) {
    trap {
      Write-Error "ERROR: You must enter your Twitter credentials for PoshTweet to work!";
      continue;
    }
    $c = Get-Credential
    if ( $c ) {
      $user = $c.GetNetworkCredential().Username;
      $pass = $c.GetNetworkCredential().Password;
      $script:g_creds = New-Object System.Net.NetworkCredential -argumentList ($user, $pass);
    }
  }
  $script:g_creds;
}

The Status functions

Now that the credentials were out of the way, it was time to tackle the Status methods. These methods are a combination of HTTP GETs and POSTs that return an array of status entries. For those interested in the raw underlying XML that's returned, I've included a $raw parameter which, when set to $true, skips the user-friendly display and dumps the full XML response. This would be handy if you want to customize the output beyond what I've done (there's a short example of that at the end of this post).

#----------------------------------------------------------------------------
# public_timeline
#----------------------------------------------------------------------------
function Get-TwitterPublicTimeline() {
  param([bool]$raw = $false);
  $results = Execute-HTTPGetCommand "http://twitter.com/statuses/public_timeline.xml";
  Process-TwitterStatus $results $raw;
}

#----------------------------------------------------------------------------
# friends_timeline
#----------------------------------------------------------------------------
function Get-TwitterFriendsTimeline() {
  param([bool]$raw = $false);
  $results = Execute-HTTPGetCommand "http://twitter.com/statuses/friends_timeline.xml";
  Process-TwitterStatus $results $raw
}

#----------------------------------------------------------------------------
# user_timeline
#----------------------------------------------------------------------------
function Get-TwitterUserTimeline() {
  param([string]$username = $null, [bool]$raw = $false);
  if ( $username ) { $username = "/$username"; }
  $results = Execute-HTTPGetCommand "http://twitter.com/statuses/user_timeline$username.xml";
  Process-TwitterStatus $results $raw
}

#----------------------------------------------------------------------------
# show
#----------------------------------------------------------------------------
function Get-TwitterStatus() {
  param([string]$id, [bool]$raw = $false);
  if ( $id ) {
    # Build the full URL first so a single string is passed to the helper.
    $results = Execute-HTTPGetCommand "http://twitter.com/statuses/show/$id.xml";
    Process-TwitterStatus $results $raw;
  }
}

#----------------------------------------------------------------------------
# update
#----------------------------------------------------------------------------
function Set-TwitterStatus() {
  param([string]$status, [bool]$raw = $false);
  $encstatus = [System.Web.HttpUtility]::UrlEncode("$status");
  $results = Execute-HTTPPostCommand "http://twitter.com/statuses/update.xml" "status=$encstatus";
  Process-TwitterStatus $results $raw;
}

#----------------------------------------------------------------------------
# replies
#----------------------------------------------------------------------------
function Get-TwitterReplies() {
  param([bool]$raw = $false);
  $results = Execute-HTTPGetCommand "http://twitter.com/statuses/replies.xml";
  Process-TwitterStatus $results $raw;
}

#----------------------------------------------------------------------------
# destroy
#----------------------------------------------------------------------------
function Destroy-TwitterStatus() {
  param([string]$id = $null);
  if ( $id ) {
    Execute-HTTPPostCommand "http://twitter.com/statuses/destroy/$id.xml" "id=$id";
  }
}

You may notice the Process-TwitterStatus function. Since there was a lot of duplicate code in each of these functions, I went ahead and implemented it in its own function below:

function Process-TwitterStatus() {
  param([string]$sxml = $null, [bool]$raw = $false);
  if ( $sxml ) {
    if ( $raw ) {
      $sxml;
    } else {
      [xml]$xml = $sxml;
      if ( $xml.statuses.status ) {
        $stats = $xml.statuses.status;
      } elseif ( $xml.status ) {
        $stats = $xml.status;
      }
      $stats | Foreach-Object -process {
        $info = "by " + $_.user.screen_name + ", " + $_.created_at;
        if ( $_.source ) {
          $info = $info + " via " + $_.source;
        }
        if ( $_.in_reply_to_screen_name ) {
          $info = $info + " in reply to " + $_.in_reply_to_screen_name;
        }
        "-------------------------";
        $_.text;
        $info;
      };
      "-------------------------";
    }
  }
}

A few hurdles

Nothing goes without a hitch, and I found myself pounding my head over why my POST commands were all getting HTTP 417 errors back from Twitter. A quick search brought up a post on Phil Haack's website as well as a Google Group thread discussing a change in how Twitter's services react to the Expect: 100-continue HTTP header. A simple setting on the ServicePointManager at the top of the script was all that was needed to get things working again.

[System.Net.ServicePointManager]::Expect100Continue = $false;

PoshTweet in Action

So, now it's time to try it out. First you'll need to dot-source the script and then set your Twitter credentials. This can be done in your PowerShell $profile if you wish. Then you can access all of the included functions. Below, I'll call Set-TwitterStatus to update my current status, and then Get-TwitterUserTimeline and Get-TwitterFriendsTimeline to get my current timeline as well as that of my friends.

PS> . .\PoshTweet.ps1
PS> Set-TwitterCredentials
PS> Set-TwitterStatus "Hacking away with PoshTweet"
PS> Get-TwitterUserTimeline
-------------------------
Hacking away with PoshTweet
by joepruitt, Tue Dec 30, 12:33:04 +0000 2008 via web
-------------------------
PS> Get-TwitterFriendsTimeline
-------------------------
@astrout Yay, thanks!
by mediaphyter, Tue Dec 30 20:37:15 +0000 2008 via web in reply to astrout
-------------------------
RT @robconery: Headed to a Portland Nerd Dinner tonite - should be fun! http://bit.ly/EUFC
by shanselman, Tue Dec 30 20:37:07 +0000 2008 via TweetDeck
-------------------------
...

Things Left To Do

As I said, this was implemented in an hour or so last night, so it definitely needs some more work, but I believe I've got the Status methods pretty much covered. Next I'll move on to the other services of User, Direct Message, Friendship, Account, Favorite, Notification, Block, and Help when I've got time. I'd also like to add support for the "source" field. I'll need to set up a public-facing landing page for this library so the folks at Twitter will add it to their system. Once I get all the services implemented, I'll move forward with formalizing this as an application and submit it for consideration.

Collaboration

I've posted the source to this set of functions on the DevCentral wiki under PsTwitterApi. You'll need to create an account to get to it, but I promise it will be worth it! Feel free to contribute and add to it if you have the time. Everyone is welcome and encouraged to tear my code apart, optimize it, enhance it. Just as long as it gets better in the process. B-)
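One last note on the $raw switch mentioned earlier: if you'd rather build your own formatting instead of relying on Process-TwitterStatus, you can cast the raw output to XML and work with it directly. A minimal sketch, assuming the functions above have been dot-sourced and credentials set (property names follow the XML structure used in Process-TwitterStatus):

# Assumes PoshTweet.ps1 has been dot-sourced and Set-TwitterCredentials has been run.
# Grab the raw XML from the public timeline and do our own formatting.
[xml]$xml = Get-TwitterPublicTimeline -raw $true

# Each <status> element carries the tweet text plus its <user> sub-element.
$xml.statuses.status |
    Select-Object -First 5 |
    ForEach-Object { "{0}: {1}" -f $_.user.screen_name, $_.text }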
Building an elastic environment requires elastic infrastructure
One of the reasons behind some folks pushing for infrastructure as virtual appliances is the on-demand nature of a virtualized environment. When network and application delivery infrastructure hits capacity in terms of throughput - regardless of the layer of the application stack at which it happens - it's frustrating to think you might need to upgrade the hardware rather than just add more compute power via a virtual image. The truth is that this makes sense. The infrastructure supporting a virtualized environment should be elastic. It should be able to dynamically expand without requiring a new network architecture, a higher performing platform, or new configuration. You should be able to just add more compute resources and walk away. The good news is that this is possible today. It just requires that you consider carefully your choices in network and application network infrastructure when you build out your virtualized infrastructure. ELASTIC APPLICATION DELIVERY INFRASTRUCTURE Last year F5 introduced VIPRION, an elastic, dynamic application networking delivery platform capable of expanding capacity without requiring any changes to the infrastructure. VIPRION is a chassis-based bladed application delivery controller and its bladed system behaves much in the same way that a virtualized equivalent would behave. Say you start with one blade in the system, and soon after you discover you need more throughput and more processing power. Rather than bring online a new virtual image of such an appliance to increase capacity, you add a blade to the system and voila! VIPRION immediately recognizes the blade and simply adds it to its pools of processing power and capacity. There's no need to reconfigure anything, VIPRION essentially treats each blade like a virtual image and distributes requests and traffic across the network and application delivery capacity available on the blade automatically. Just like a virtual appliance model would, but without concern for the reliability and security of the platform. Traditional application delivery controllers can also be scaled out horizontally to provide similar functionality and behavior. By deploying additional application delivery controllers in what is often called an active-active model, you can rapidly deploy and synchronize configuration of the master system to add more throughput and capacity. Meshed deployments comprising more than a pair of application delivery controllers can also provide additional network compute resources beyond what is offered by a single system. The latter option (the traditional scaling model) requires more work to deploy than the former (VIPRION) simply because it requires additional hardware and all the overhead required of such a solution. The elastic option with bladed, chassis-based hardware is really the best option in terms of elasticity and the ability to grow on-demand as your infrastructure needs increase over time. ELASTIC STORAGE INFRASTRUCTURE Often overlooked in the network diagrams detailing virtualized infrastructures is the storage layer. The increase in storage needs in a virtualized environment can be overwhelming, as there is a need to standardize the storage access layer such that virtual images of applications can be deployed in a common, unified way regardless of which server they might need to be executing on at any given time. This means a shared, unified storage layer on which to store images that are necessarily large. This unified storage layer must also be expandable. 
As the number of applications and associated images grows, storage needs increase. What's needed is a system in which additional storage can be added in a non-disruptive manner. If you have to modify the automation and orchestration systems driving your virtualized environment when additional storage is added, you've lost some of the benefits of a virtualized storage infrastructure. F5's ARX series of storage virtualization solutions provides that layer of unified storage infrastructure. By normalizing the namespaces through which files (images) are accessed, the systems driving a virtualized environment can be assured that images are available via the same access method regardless of where the file or image is physically located. Virtualized storage infrastructure systems are dynamic; additional storage can be added to the infrastructure and "plugged in" to the global namespace to increase the storage available in a non-disruptive manner. An intelligent virtualized storage infrastructure can further make more efficient use of the available storage by tiering it. Images and files accessed more frequently can be stored on fast, tier one storage so they load and execute more quickly, while less frequently accessed files and images can be moved to less expensive and perhaps less performant storage systems.

By deploying elastic application delivery network infrastructure instead of virtual appliances you maintain stability, reliability, security, and performance across your virtualized environment. Elastic application delivery network infrastructure is already dynamic, and offers a variety of options for integration into automation and orchestration systems via standards-based control planes, many of which are nearly turn-key solutions. The reasons why some folks might desire a virtual appliance model for their application delivery network infrastructure are valid. But the reality is that the elasticity and on-demand capacity offered by a virtual appliance is already available in proven, reliable hardware solutions today that do not require sacrificing performance, security, or flexibility.

Related articles by Zemanta
- How to instrument your Java EE applications for a virtualized environment
- Storage Virtualization Fundamentals
- Automating scalability and high availability services
- Building a Cloudbursting Capable Infrastructure
- EMC unveils Atmos cloud offering
- Are you (and your infrastructure) ready for virtualization?

IBM Rational AppScan
In my last post, I introduced my role as Solution Engineer for our IBM partnership and the many exciting solutions we have coming out of that partnership. Today I'm going to briefly cover one of our latest releases, the IBM Rational AppScan parser.

AppScan

IBM's Rational AppScan implements the latest scanning technology to test your web applications for vulnerabilities. I've run this scanner many times, and the complexity and depth of its scans are mind-boggling. There are something like 30,000 tests that it can run in comprehensive mode, looking for all types of attacks against a website. When launching a new application or reviewing your security on an existing site, an investment like Rational AppScan may save your entire organization enormous amounts of pain and expense. So how does AppScan work? You simply point it at your website and go. During a recent test, I scanned a sample ecommerce site (designed to have flaws) and found over 129 problems, 37 of them critical exploits such as SQL injection and cross-site scripting. The beautiful thing with AppScan is that you see exactly where the exploit took place, how to repeat it, and how to mitigate it. It's an amazing tool and you should definitely check out the trial. Once you have your scan, the next step is to fix the issues. In the example above, the 37 vulnerabilities might take days or weeks to solve. And that doesn't even address the four dozen other medium- and low-priority issues. So how do you help speed this along? This is where BIG-IP ASM enters the picture. As of version 11.1, our IBM AppScan integration allows you to export your reports from AppScan, import them into ASM, and immediately remediate the critical problems. In my test, I was able to remediate 21 out of the 37 critical vulnerabilities, leaving just a small handful to be worked on by the developers.

Distributing SAP Load using BIG-IP Advanced Monitoring
Several recent forum posts on DevCentral have commented on the fact that SAP landscapes often have asynchronous batch jobs that cause higher CPU loads on certain servers. This causes problems for application delivery controllers because load balancing methods are typically based on connection counts. Picture the scenario where one connection causes a big CPU or memory spike and then goes away: now you have the same number of new connections coming into the server while one is slammed. The solution to this problem is relatively straightforward, and I recently documented it for everyone in our "Deploying F5 Networks with SAP NetWeaver" deployment guide, located here: SAP NetWeaver and Enterprise SOA: Enterprise Portal (BIG-IP v10.1, WOM, Edge, WA).

The solution is based around using SNMP in conjunction with application-based monitors. The BIG-IP SNMP monitor provides the ability to perform dynamic load balancing based on CPU, memory, or disk utilization, while the advanced monitors test the J2EE stack, the authentication system, and the database. With this combination, SAP administrators should be able to sleep better at night knowing that their customers and users are getting to a live system that is best prepared to service the request.

So, how does layered monitoring work? If you are not aware, it's possible to have two monitors for a particular pool or node. In the UI, the pool simply lists both monitors - in this example there are two, SAP-CPU and ICMP. In the real world, ICMP would be replaced with the advanced application monitor.

So, what does the SNMP monitor configuration look like? Here we have an SNMP setup with a CPU Threshold of 80%, a memory Threshold of 0%, and a Disk Threshold of 10%. Obviously this is from my testing to ensure the monitor is working properly. What this defines is that if the disk is more than 10% full, or the memory is being utilized at 0%, or the CPU is being utilized at over 80%, then de-weight the amount of new connections that get sent to this node (server). The coefficients allow further granular control over the traffic weighting determination. This is not a config you would probably run in production, but it's great for testing!

By logging into the BIG-IP advanced shell and enabling logging, I can see exactly what weight is being assigned. This is accomplished through the command:

bigpipe db Snmp.SNMPDCA.Log true

and then by tailing the snmpdca.log located in /var/tmp:

tail -f /var/tmp/snmpdca.log

There you have it. Now all we have to do is change the load balancing method for the pool to Dynamic Ratio, apply the advanced application monitor, and we have a fully dynamic decision-making system. You can play with the Thresholds and Coefficients until you have a desired mix. The SNMP monitor will not mark a host down, but it will set the weight (between 1 and 100) in such a way that very few new connections will get to a node that has exceeded all thresholds.

A quick note on the advanced health monitor: I can't stress how important it is to have layered monitoring in this and other dynamic load balancing scenarios. Especially in an SAP NetWeaver J2EE stack installation (or even a dual stack implementation), many things can go wrong. Just because CPU, memory, and disk are normal doesn't mean that your J2EE stack hasn't crashed, or that your authentication system has gone down. By layering monitors, you cover all BASIS. :-)
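Coming back to the weighting for a moment: to make the threshold-and-coefficient idea more concrete, here is a purely illustrative PowerShell sketch. To be clear, this is not the algorithm the BIG-IP SNMP DCA monitor actually implements, and the threshold and coefficient values are placeholders; it only shows the general shape of the behavior - the further a metric climbs past its threshold, the more the node is de-weighted, with the coefficient controlling how aggressive that de-weighting is.

# Purely illustrative -- NOT the algorithm the BIG-IP SNMP DCA monitor actually uses.
# The idea: the further a metric is past its threshold, the lower the node's weight,
# and the coefficient controls how aggressively the weight drops.
function Get-IllustrativeNodeWeight {
    param(
        [double]$CpuUtil,  [double]$CpuThreshold  = 80, [double]$CpuCoefficient  = 4.0,
        [double]$DiskUtil, [double]$DiskThreshold = 90, [double]$DiskCoefficient = 2.0
    )
    $weight = 100
    if ($CpuUtil  -gt $CpuThreshold)  { $weight -= ($CpuUtil  - $CpuThreshold)  * $CpuCoefficient }
    if ($DiskUtil -gt $DiskThreshold) { $weight -= ($DiskUtil - $DiskThreshold) * $DiskCoefficient }
    # Keep the result in the 1-100 range mentioned above.
    [Math]::Max(1, [Math]::Min(100, [int]$weight))
}

Get-IllustrativeNodeWeight -CpuUtil 95 -DiskUtil 40   # returns 40 -- the hot node gets far fewer new connections
Get-IllustrativeNodeWeight -CpuUtil 30 -DiskUtil 40   # returns 100 -- a node under its thresholds keeps full weight

Again, treat this strictly as a thought experiment; the real monitor derives its weights internally from the SNMP data it polls.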
I hope this post has been helpful, and as always, please email me if you have any questions. Remember that detailed installation instructions, including step-by-step configuration, are in the deployment guide linked at the top, or through f5.com ---> Resources ---> Deployment Guides ---> SAP NetWeaver and Enterprise SOA: Enterprise Portal (BIG-IP v10.1, WOM, Edge, WA).

What Do Database Connectivity Standards and the Pirate's Code Have in Common?
A: They’re both more what you’d call “guidelines” than actual rules. An almost irrefutable fact of application design today is the need for a database, or at a minimum a data store – i.e. a place to store the data generated and manipulated by the application. A second reality is that despite the existence of database access “standards”, no two database solutions support exactly the same syntax and protocols. Connectivity standards like JDBC and ODBC exist, yes, but like SQL they are variable, resulting in just slightly different enough implementations to effectively cause vendor lock-in at the database layer. You simply can’t take an application developed to use an Oracle database and point it at a Microsoft or IBM database and expect it to work. Life’s like that in the development world. Database connectivity “standards” are a lot like the pirate’s Code, described well by Captain Barbossa in Pirates of the Carribbean as “more what you’d call ‘guidelines’ than actual rules.” It shouldn’t be a surprise, then, to see the rise of solutions that address this problem, especially in light of an increasing awareness of (in)compatibility at the database layer and its impact on interoperability, particularly as it relates to cloud computing . Forrester Analyst Noel Yuhanna recently penned a report on what is being called Database Compatibility Layers (DCL). The focus of DCL at the moment is on migration across database platforms because, as pointed out by Noel, they’re expensive, time consuming and very costly. Database migrations have always been complex, time-consuming, and costly due to proprietary data structures and data types, SQL extensions, and procedural languages. It can take up to several months to migrate a database, depending on database size, complexity, and usage of these proprietary features. A new technology has recently emerged for solving this problem: the database compatibility layer, a database access layer that supports another database management system’s (DBMS’s) proprietary extensions natively, allowing existing applications to access the new database transparently. -- Simpler Database Migrations Have Arrived (Forrester Research Report) Anecdotally, having been on the implementation end of such a migration I can’t disagree with the assessment. Whether the right answer is to sit down and force some common standards on database connectivity or build a compatibility layer is a debate for another day. Suffice to say that right now the former is unlikely given the penetration and pervasiveness of existing database connectivity, so the latter is probably the most efficient and cost-effective solution. After all, any changes in the core connectivity would require the same level of application modification as a migration; not an inexpensive proposition at all. According to Forrester a Database Compatibility Layer (DCL) is a “database layer that supports another DBMS’s proprietary SQL extensions, data types, and data structures natively. Existing applications can transparently access the newly migrated database with zero or minimal changes.” By extension, this should also mean that an application could easily access one database and a completely different one using the same code base (assuming zero changes, of course). For the sake of discussion let’s assume that a DCL exists that exhibits just that characteristic – complete interoperability at the connectivity layer. Not just for migration, which is of course the desired use, but for day to day use. 
What would that mean for cloud computing providers – both internal and external? ENABLING IT as a SERVICE Based on our assumption that a DCL exists and is implemented by multiple database solution vendors, a veritable cornucopia of options becomes a lot more available for moving enterprise architectures toward IT as a Service than might be at first obvious. Consider that applications have variable needs in terms of performance, redundancy, disaster recovery, and scalability. Some applications require higher performance, others just need a nightly or even weekly backup and some, well, some are just not that important that you can’t use other IT operations backups to restore if something goes wrong. In some cases the applications might have varying needs based on the business unit deploying them. The same application used by finance, for example, might have different requirements than the same one used by developers. How could that be? Because the developers may only be using that application for integration or testing while finance is using it for realz. It happens. What’s more interesting, however, is how a DCL could enable a more flexible service-oriented style buffet of database choices, especially if the organization used different database solutions to support different transactional, availability, and performance goals. If a universal DCL (or near universal at least) existed, business stakeholders – together with their IT counterparts – could pick and choose the database “service” they wished to employ based on not only the technical characteristics and operational support but also the costs and business requirements. It would also allow them to “migrate” over time as applications became more critical, without requiring a massive investment in upgrading or modifying the application to support a different back-end database. Obviously I’m picking just a few examples that may or may not be applicable to every organization. The bigger thing here, I think, is the flexibility in architecture and design that is afforded by such a model that balances costs with operational characteristics. Monitoring of database resource availability, too, could be greatly simplified from such a layer, providing solutions that are natively supported by upstream devices responsible for availability at the application layer, which ultimately depends on the database but is often an ignored component because of the complexity currently inherent in supporting such a varied set of connectivity standards. It should also be obvious that this model would work for a PaaS-style provider who is not tied to any given database technology. A PaaS-style vendor today must either invest effort in developing and maintaining a services layer for database connectivity or restrict customers to a single database service. The latter is fine if you’re creating a single-stack environment such as Microsoft Azure but not so fine if you’re trying to build a more flexible set of offerings to attract a wider customer base. Again, same note as above. Providers would have a much more flexible set of options if they could rely upon what is effectively a single database interface regardless of the specific database implementation. More importantly for providers, perhaps, is the migration capability noted by the Forrester report in the first place, as one of the inhibitors of moving existing applications to a cloud computing provider is support for the same database across both enterprise and cloud computing environments. 
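Before turning to services layers, it helps to be concrete about what today's connectivity "standards" do and don't abstract. The short ADO.NET sketch below (in PowerShell; the provider name, connection string, and query are assumptions for illustration) shows the pattern: the provider factory hides which client library gets loaded, but the SQL text you hand it is still written in one vendor's dialect - which is exactly the gap a DCL claims to fill.

# Provider-neutral connection plumbing via ADO.NET's DbProviderFactories.
# The provider name, connection string, and query below are illustrative assumptions.
$providerName = "System.Data.SqlClient"   # could be "Oracle.DataAccess.Client", etc., if installed
$factory = [System.Data.Common.DbProviderFactories]::GetFactory($providerName)

$conn = $factory.CreateConnection()
$conn.ConnectionString = "Server=dbhost;Database=appdb;Integrated Security=True"   # hypothetical
$conn.Open()

$cmd = $factory.CreateCommand()
$cmd.Connection = $conn
# The plumbing above is provider-neutral; this statement is not. "TOP 10" is SQL Server
# dialect -- Oracle would want ROWNUM or FETCH FIRST -- so the SQL still ties the
# application to one vendor even though the connection code does not.
$cmd.CommandText = "SELECT TOP 10 * FROM orders ORDER BY order_date DESC"
$reader = $cmd.ExecuteReader()
while ($reader.Read()) { $reader["order_date"] }
$conn.Close()

That dialect difference is precisely why a migration - or a deployment that spans database products - still means touching application code, no matter how standard the driver interface is.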
While services layers are certainly a means to the same end, such layers are not universally supported. There's no "standard" for them, not even a set of best practice guidelines, and the resulting application code suffers exactly the same issues as with the use of proprietary database connectivity: lock in. You can't pick one up and move it to the cloud, or another database, without changing some code. Granted, a services layer is more efficient in this sense as it serves as an architectural strategic point of control at which connectivity is aggregated and thus database implementation and specifics are abstracted from the application. That means the database can be changed without impacting end-user applications; only the services layer need be modified. But even that approach is problematic for packaged applications that rely upon database connectivity directly and do not support such service layers. A DCL, ostensibly, would support packaged and custom applications if it were implemented properly in all commercial database offerings.

CONNECTIVITY CARTEL

And therein lies the problem – if it were implemented properly in all commercial database offerings. There is a risk here of a connectivity cartel arising, where database vendors form alliances with other database vendors to support a DCL while "locking out" vendors whom they have decided do not belong. Because the DCL depends on supporting "proprietary SQL extensions, data types, and data structures natively" there may be a need for database vendors to collaborate as a means to properly support those proprietary features. If collaboration is required, it is possible to deny that collaboration as a means to control who plays in the market. It's also possible for a vendor to slightly change some proprietary feature in order to "break" the others' support. And of course the sheer volume of work necessary for a database vendor to support all other database vendors could overwhelm smaller database vendors, leaving them with no real way to support everyone else. The idea of a DCL is an interesting one, and it has its appeal as a means to forward compatibility for migration – both temporary and permanent. Will it gain in popularity? For the latter, perhaps, but for the former? Less likely. The inherent difficulties and scope of supporting such a wide variety of databases natively will certainly inhibit any such efforts. Solutions such as a REST-ful interface, a la PHP REST SQL, or a JSON-HTTP based solution like DBSlayer may be more appropriate in the long run if they were to be standardized. And by standardized I mean standardized with industry-wide and agreed upon specifications. Not more of the "more what you'd call 'guidelines' than actual rules" that we already have.

- Database Migrations are Finally Becoming Simpler
- Enterprise Information Integration | Data Without Borders
- Review: EII Suites | Don't Fear the Data
- The Database Tier is Not Elastic
- Infrastructure Scalability Pattern: Sharding Sessions
- F5 Friday: THE Database Gets Some Love
- The Impossibility of CAP and Cloud
- Sessions, Sessions Everywhere
- Cloud-Tiered Architectural Models are Bad Except When They Aren't

When VDI Comes Calling
It is an interesting point/counterpoint to read up about Virtual Desktop Infrastructure (VDI) deployment or lack thereof. The industry definitely seems to be split on whether VDI is the wave of the future or not, what its level of deployment is, and whether VDI is more secure than traditional desktops. There seems to be very little consensus on any of these points, and yet VDI deployments keep rolling out. Meanwhile, many IT folks are worried about all of these “issues” and more. Lots more. Like where the heck to get started even evaluating VDI needs for a given organization. There’s a lot written about who actually needs VDI, and that is a good place to start. Contrary to what some would have you believe, not every employee should be dumped into VDI. There are some employees that will garner a lot more benefit from VDI than others, all depending upon work patterns, access needs, and software tools used. There are some excellent discussions of use cases out there, I won’t link to a specific one just because you’ll need to find one that suits your needs clearly, but searching on VDI use cases will get you started. Then the hard part begins. It is relatively easy to identify groups in your organization that share most of the same software and could either benefit, or at least not be harmed by virtualizing their desktop. Note that in this particular blog post I am ignoring application virtualization in favor of the more complete desktop virtualization. Just thought I’d mention that for clarity. The trick, once you’ve identified users that are generally the same, is to figure out what applications they actually use, what their usage patterns are (if they’re maxing out the CPU of a dedicated machine, that particular user might not be a great choice for VDI unless all the other users that share a server with them are low-usage), and how access from other locations than their desktop could help them to work better/smarter/faster. A plan to plan. I don’t usually blog about toolsets that I’ve never even installed, working at Network Computing Magazine made me leery of “reviews” by people who’ve never touched a product. But sometimes (like with Oracle DataGuard about a year ago), an idea so strikes to the heart of what enterprise IT needs to resolve a given problem than I think it’s worth talking about. Sometimes – like with DataGuard – lots of readers reap the benefits, sometimes – like with Cirtas – I look like a fool. That’s the risks of talking about toys you don’t touch though. That is indeed an introduction to products I haven’t touched . Centrix Software’s Workspace IQ and Lakeside Software’s Systrack Virtual Machine Planner are tools that can help you evaluate usage patterns, software actually run, and usage volumes. Software actually run is a good indicator of what the users actually need, because as everyone in IT knows, often software is installed and when it is no longer needed it is not removed. Usage patterns help you group VMs together on servers. The user that is active at night can share a VM with daytime users without any risk of oversubscription of the CPU or memory. Usage volumes also help you figure out who/how many users you can put on a server. For one group it may be very few heavy users, for another group it may be very many light users. And that’s knowledge you need to get started. 
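As a back-of-the-napkin example of what you do with those numbers once a tool has collected them - every figure below is invented purely for illustration - the per-host arithmetic itself is simple:

# Hypothetical sizing sketch -- all numbers are made up for illustration only.
# Plug in the per-user averages your assessment tooling actually reports.
$hostMemoryGB = 96        # usable RAM per virtualization host
$hostCpuCores = 16
$usersPerCore = 6         # how many of this group's users one core comfortably supports
$memPerUserGB = 2.5       # average working set per virtual desktop in this group

$byMemory     = [Math]::Floor($hostMemoryGB / $memPerUserGB)
$byCpu        = $hostCpuCores * $usersPerCore
$usersPerHost = [Math]::Min($byMemory, $byCpu)

"Memory supports {0} users per host, CPU supports {1}; plan around {2}." -f $byMemory, $byCpu, $usersPerHost

Different user groups will land on very different numbers, which is exactly why the profiling step comes first.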
It helps you scope the entire project, including licenses and servers; it helps you identify the groups that will have to be trained - and yes, coddled - before, during, and shortly after the rollout; and it helps you talk with vendors about their products' capabilities. One nice bit about Systrack VMP is that it suggests the correct VDI vendor for your environment. If it does that well, it certainly is a nice feature. These aren't new tools by any means, but as more and more enterprises look into VDI, talking about solutions like this will give hapless architects and analysts who were just thrown into VDI projects a place to start looking at how to tackle a very large project. It wouldn't hurt to read the blog over at Moose Logic either, specifically the Top Ten VDI Mistakes entry. And when you're planning for VDI, plan for the network too. There's a lot more traffic on the LAN in a VDI deployment than there was before you started, and for certain (or maybe all) users you're going to want high availability. We can help with that when the time comes; just hit DevCentral or our website and search for VDI. Hopefully this is a help to those of you who are put onto a VDI project and expected to deliver quickly. It's not all there is out there by any stretch, but it can get you started.

Related Articles and Blogs:
- Skills Drive Success. It's Time To Consider Human Capital Again.
- F5 Friday: In the NOC at Interop
- Underutilized
- F5 Friday: Are You Certifiable?
- SDN, OpenFlow, and Infrastructure 2.0

F5 Friday: A War of Ecosystems
Nokia’s brutally honest assessment of its situation identifies what is not always obvious in the data center - it’s about an ecosystem. In what was certainly a wake-up call for many, Nokia’s CEO Stephen Elop tells his organization its “platform is burning.” In a leaked memo reprinted by Engadget and picked up by many others, Elop explained the analogy as well as why he believes Nokia is in trouble. Through careful analysis of its competitors and their successes, he finds the answer in the ecosystem its competitors have built -comprising developers, applications and more. The battle of devices has now become a war of ecosystems, where ecosystems include not only the hardware and software of the device, but developers, applications, ecommerce, advertising, search, social applications, location-based services, unified communications and many other things. Our competitors aren’t taking our market share with devices; they are taking our market share with an entire ecosystem. This means we’re going to have to decide how we either build, catalyse or join an ecosystem. If you’re wondering what this could possibility have to do with networking and application delivery, well, the analysis Elop provides regarding the successes of a mobile device vendor can be directly applied to the data center. The nature of data centers and networks is changing. It’s becoming more dynamic, more integrated, more dependent upon collaboration and connections between devices (components) that have traditionally stood alone on their own. But as data center models evolve and morph and demands placed upon them increase the need for contextual awareness and collaboration and the ability to be both reactive and proactive in applying policies across a wide spectrum of data center concerns, success becomes as dependent on a components ability to support and be supported by an ecosystem. Not just the success of vendors, which was Elop’s focus, but success of data center architecture implementations. To counter the rising cost and complexity introduced by new computing and networking models requires automation, orchestration, and collaboration across data center components. cloud computing and virtualization has turned the focus from technology focused components to process-oriented platforms. From individual point solutions to integrated, collaborative systems that encourage development and innovation as a means to address the challenges arising from extreme dynamism. F5 Networks Wins VMware Global Technology Innovator Award Yesterday we took home top honors for enhancing the value of VMware virtualization solutions for companies worldwide. At VMware Partner Exchange 2011, VMware’s annual worldwide partner event, F5 was recognized with VMware’s Technology Innovator Partner of the Year Award. Why is that important? Because it recognizes the significant value placed on building a platform and developing an ecosystem in which that platform can be leveraged to integrate and collaborate on solutions with partners and customers alike. And it is about an ecosystem; it is about collaborative solutions that address key data center challenges that may otherwise hinder the adoption of emerging technologies like cloud computing and virtualization. 
A robust and flexible application delivery platform provides not only the means by which data and traffic can be dynamically delivered and secured, but also the means through which a more comprehensive strategy to address operational challenges associated with increasingly dynamic data center architectures can be implemented. The collaboration between VMware and F5’s BIG-IP platforms is enabled through integration, through infrastructure 2.0 enabled systems that create an environment in which flexible architectures and dynamism can be managed efficiently. In 2010 alone, F5 and VMware collaborated on a number of solutions leveraging the versatile capabilities of F5’s BIG-IP product portfolio, including: Accelerated long distance live migration with VMware vMotion The joint solution helps solve latency, bandwidth, and packet-loss issues, which historically have prevented customers from performing live migrations between data centers over long distances. An integrated enterprise cloudbursting solution with VMware vCloudDirector The joint solution simplifies and automates use of cloud resources to enhance application delivery performance and availability while minimizing capital investment. Optimized user experience and secure access capabilities with VMware View The solution enhances VMware View user experience with secure access, single sign-on, high performance, and scalability. “Since joining VMware’s Technology Alliance Partner program in 2008, F5 has driven a number of integration and interoperability efforts aimed at enhancing the value of customers’ virtualization and cloud deployments,” said Jim Ritchings, VP of Business Development at F5. “We’re extremely proud of the industry-leading work accomplished with VMware in 2010, and we look forward to continued collaboration to deliver new innovations around server and desktop virtualization, cloud solutions, and more.” It is just such collaboration that builds a robust ecosystem that is necessary to successfully move forward with dynamic data center models built upon virtualization and cloud computing principles. Without this type of collaboration, and the platforms that enable it, the efficiencies of private cloud computing and economy of scale of public cloud computing simply wouldn’t be possible. F5 has always been focused on delivering applications, and that has meant not just partnering extensively with application providers like Oracle and Microsoft and IBM, it has also meant partnering and collaborating with infrastructure providers like HP and Dell and VMware to create solutions that address the very real challenges associated with data center and traffic management. Elop is exactly right when he points to ecosystems being the key to the future. In the case of network and application networking solutions that ecosystem is both about vendor relationships and partnerships as much as it is solutions that enable IT to better align with business and operational goals; to reduce the complexity introduced by increasingly dynamic operations. VMware’s recognition of the value of that ecosystem, of the joint solutions designed and developed through partnerships, is great validation of the important role of the ecosystem in the successful implementation of emerging data center models. 
- F5 Friday: Join Robin "IT" Hood and Take Back Control of Your Applications
- F5 Friday: The Dynamic VDI Security Game
- WILS: The Importance of DTLS to Successful VDI
- F5 Friday: Elastic Applications are Enabled by Dynamic Infrastructure
- F5 Friday: Efficient Long Distance Transfer of VMs with F5 BIG-IP WOM and NetApp Flexcache
- F5 Friday: Playing in the Infrastructure Orchestra(tion)
- Why Virtualization is a Requirement for Private Cloud Computing
- F5 VMware View Solutions
- F5 VMware vSphere Solutions
- Application Delivery for Virtualized Infrastructure
- DevCentral - VMware / F5 Solutions Topic Group

Devops Proverb: Process Practice Makes Perfect
#devops Tools for automating – and optimizing – processes are a must-have for enabling continuous delivery of application deployments Some idioms are cross-cultural and cross-temporal. They transcend cultures and time, remaining relevant no matter where or when they are spoken. These idioms are often referred to as proverbs, which carries with it a sense of enduring wisdom. One such idiom, “practice makes perfect”, can be found in just about every culture in some form. In Chinese, for example, the idiom is apparently properly read as “familiarity through doing creates high proficiency”, i.e. practice makes perfect. This is a central tenet of devops, particularly where optimization of operational processes is concerned. The more often you execute a process, the more likely you are to get better at it and discover what activities (steps) within that process may need tweaking or changes or improvements. Ergo, optimization. This tenet grows out of the agile methodology adopted by devops: application release cycles should be nearly continuous, with both developers and operations iterating over the same process – develop, test, deploy – with a high level of frequency. Eventually (one hopes) we achieve process perfection – or at least what we might call process perfection: repeatable, consistent deployment success. It is implied that in order to achieve this many processes will be automated, once we have discovered and defined them in such a way as to enable them to be automated. But how does one automate a process such as an application release cycle? Business Process Management (BPM) works well for automating business workflows; such systems include adapters and plug-ins that allow communication between systems as well as people. But these systems are not designed for operations; there are no web servers or databases or Load balancer adapters for even the most widely adopted BPM systems. One such solution can be found in Electric Cloud with its recently announced ElectricDeploy. Process Automation for Operations ElectricDeploy is built upon a more well known product from Electric Cloud (well, more well-known in developer circles, at least) known as ElectricCommander, a build-test-deploy application deployment system. Its interface presents applications in terms of tiers – but extends beyond the traditional three-tiers associated with development to include infrastructure services such as – you guessed it – load balancers (yes, including BIG-IP) and virtual infrastructure. The view enables operators to create the tiers appropriate to applications and then orchestrate deployment processes through fairly predictable phases – test, QA, pre-production and production. What’s hawesome about the tools is the ability to control the process – to rollback, to restore, and even debug. The debugging capabilities enable operators to stop at specified tasks in order to examine output from systems, check log files, etc..to ensure the process is executing properly. While it’s not able to perform “step into” debugging (stepping into the configuration of the load balancer, for example, and manually executing line by line changes) it can perform what developers know as “step over” debugging, which means you can step through a process at the highest layer and pause at break points, but you can’t yet dive into the actual task. 
Still, the ability to pause an executing process and examine output, as well as roll back or restore specific process versions (yes, it versions the processes as well, just as you'd expect), would certainly be a boon to operations in the quest to adopt tools and methodologies from development that can aid them in improving the speed and consistency of deployments. The tool also enables operations to define what constitutes failure during a deployment. For example, you may want to stop and roll back the deployment when a server fails to launch if your deployment only comprises 2 or 3 servers, but when it comprises 1000s it may be acceptable that a few fail to launch. Success and failure of individual tasks as well as the overall process are defined by the organization and allow for flexibility. This is more than just automation, it's managed automation; it's agile in action; it's focusing on the processes, not the plumbing.

MANUAL still RULES

Electric Cloud recently (June 2012) conducted a survey on the "state of application deployments today" and found some not unexpected but still frustrating results, including that 75% of application deployments are still performed manually or with little to no automation. Automation may not be the goal of devops, but it is a tool enabling operations to achieve its goals, and thus it should be more broadly considered standard operating procedure to automate as much of the deployment process as possible. This is particularly true when operations fully adopts not only the premise of devops but the conclusion resulting from its agile roots. Tighter, faster, more frequent release cycles necessarily put an additional burden on operations to execute the same processes over and over again. Trying to accomplish this manually may set operations up for failure and leave operations focused more on simply going through the motions and getting the application into production successfully than on streamlining and optimizing the processes they are executing. Electric Cloud's ElectricDeploy is one of the ways in which process optimization can be achieved, and it justifies its purchase by promising operations better control over application deployment processes across development and infrastructure.

- Devops is a Verb
- 1024 Words: The Devops Butterfly Effect
- Devops is Not All About Automation
- Application Security is a Stack
- Capacity in the Cloud: Concurrency versus Connections
- Ecosystems are Always in Flux
- The Pythagorean Theorem of Operational Risk

F5 Friday: Two Heads are Better Than One
Detecting attacks is good, being able to do something about it is better. F5 and Oracle take their collaborative relationship even further into the data center, integrating web application and database firewall solutions to improve protection against web and database-focused attacks. It is often the case that organizations heavily invested in security solutions designed to protect critical application infrastructure, such as the database, are unwilling to replace those solutions in favor of yet another solution. This is not necessarily a matter of functionality or trust, but a decision based on reliance on existing auditing and management solutions that are a part of the existing deployment. More information is good, but not if it simply becomes an entry in a log somewhere that is disconnected and not integrated into existing operational security processes. Organizations already heavily invested in Oracle technologies are likely to consider deploying the Oracle Database Firewall to protect their critical business information residing in their Oracle database. As enterprise customers deploy more web-based database applications, IT continues to face the challenge of securing both application and database environments from threats such as SQL injection and cross-site scripting attacks. By using F5 and Oracle solutions together, customers can now benefit from enhanced protection for web-based database applications without unnecessarily increasing the auditing burden imposed by additional logging. “70% of the top 100 most popular Web sites either hosted malicious content or contained a masked redirect to lure unsuspecting victims from legitimate sites to malicious sites.” (Websense, 2009) -- WhiteHat/F5, “Strategically Blocking Cross-Site Scripting and SQL Injection Attacks” This collaborative solution pairs F5 BIG-IP ® Application Security Manager ™ (ASM ™ ) and Oracle Database Firewall to provide comprehensive database security from the application layer down to the database. Oracle Database Firewall monitors traffic between applications and the database to detect and prevent SQL injection, privilege or role escalation attacks, and others. Because its target is the database, it uses an innovative SQL grammar analysis approach that is highly accurate and scalable. Unlike web application firewalls, it analyzes the intent of the SQL statements sent to the database. It is not dependent on recognizing the syntax of known security threats, and can therefore block previously unseen attacks, including those targeted against an organization. ASM, on the other hand, focuses on the detection and prevention of attacks at the application layer – including SQL injection – and through integration with Oracle Database Firewall ASM can notify the database firewall of the incoming threat. Such notification includes the context of the request – including user identity, session, IP address and time – that is subsequently logged and acted upon according to Oracle Database Firewall policies, enabling a more comprehensive report of attacks. Because this integration allows operators and administrators to correlate attacks with users, it can better enable the identification of attacks originating from inside the organization – such as from compromised desktops or servers – which can then be leveraged as a means to eradicate potential internal attack vectors such as bots and other trojans proliferating of late throughout the enterprise. 
That’s important, because a study conducted last year by Microsoft found that over 2.2 million PCs in the U.S. were part of botnets, and that the U.S. is the “number one country consumed with botnet PCs.” With so many potential avenues of attack both internal and external to the organization, there simply can’t be something as too much protection. This F5 component of the solution is included with BIG-IP Application Security Manager at no additional fee. Customers can contact their Oracle representative for pricing on Oracle Database Firewall. For more information on Oracle Database Firewall, please visit www.oracle.com/technetwork/database/database-firewall/index.html. Related Resources: Protect Web Applications and Data with F5 and Oracle – Solution Overview F5 Adds Solutions for Oracle Database – Presentation F5 Solutions for Oracle Database Deployments F5 DevCentral Oracle/F5 Group Forum F5 Friday: BIG-IP WOM With Oracle Products F5 Friday: THE Database Gets Some Love F5 Access Policy Manager & Oracle Access Manager Integration Part 1 Oracle Data Guard sync over the WAN with F5 BIG-IP F5 Friday: Application Access Control - Code, Agent, or Proxy? All F5 Friday Posts on DevCentral <225Views0likes0CommentsF5 Friday: The Evolution of Reference Architectures to Repeatable Architectures
A reference architecture is a solution with the “some assembly required” instructions missing. As a developer and later an enterprise architect, I evaluated and leveraged untold number of “reference architectures.” Reference architectures, in and of themselves, are a valuable resource for organizations as they provide a foundational framework around which a concrete architecture can be derived and ultimately deployed. As data center architecture becomes more complex, employing emerging technologies like cloud computing and virtualization, this process becomes fraught with difficulty. The sheer number of moving parts and building blocks upon which such a framework must be laid is growing, and it is rarely the case that a single vendor has all the components necessary to implement such an architecture. Integration and collaboration across infrastructure solutions alone, a necessary component of a dynamic data center capable of providing the economy of scale desired, becomes a challenge on top of the expected topological design and configuration of individual components required to successfully deploy an enterprise infrastructure architecture from the blueprint of a reference architecture. It is becoming increasingly important to provide not only reference architectures, but repeatable architectures. Architectural guidelines that not only provide the abstraction of a reference architecture but offer the kind of detailed topological and integration guidance necessary for enterprise architects to move from concept to concrete implementation. Andre Kindness of Forrester Research said it well in a recent post titled, “Don’t Underestimate The Value Of Information, Documentation, And Expertise!”: Support documentation and availability to knowledge is especially critical in networking design, deployment, maintenance, and upgrades. Some pundits have relegated networking to a commodity play, but networking is more than plumbing. It’s the fabric that supports a dynamic business connecting users to services that are relevant to the moment, are aggregated at the point of use, and originate from multiple locations. The complexity has evolved from designing in a few links to tens of hundreds of relationships (security, acceleration, prioritization, etc.) along the flow of apps and data through a network. Virtualization, convergence, consolidation, and the evolving data center networks are prime examples of today’s network complexity. REPEATABLE ARCHITECTURE For many years one of F5’s differentiators has been the development and subsequent offering of “Application Ready Solutions”. The focus early on was on providing optimal deployment configuration of F5 solutions for specific applications including IBM, Oracle, Microsoft and more recently, VMware. These deployment guides are step-by-step, detailed documentation developed through collaborative testing with the application provider that offer the expertise of both organizations in deploying F5 solutions for optimal performance and efficiency. As the data center grows more complex, so do the challenges associated with architecting a firm foundation. It requires more than application-specific guidance, it now requires architectural guidance. While reference architectures are certainly still germane and useful, there also needs to be an evolution toward repeatable architectures such that the replication of proposed solutions derived from the collaborative efforts of vendors is achievable. 
It’s not enough to throw up an architecture comprised of multiple solutions from multiple vendors without providing the insight and guidance necessary to actually replicate that architecture in the data center. That’s why it’s exciting to see our collaborative efforts with vendors of key data center solutions like IBM and VMware result in what are “repeatable architectures.” These are not simply white papers and Power Point decks that came out of joint meetings; these are architectural blueprints that can be repeated in the data center. These are the missing instructions for the “some assembly required” architecture. These jointly designed and developed architectures have already been implemented and tested – and then tested again and again. The repeatable architecture that emerges from such efforts are based on the combined knowledge and expertise of the engineers involved from both organizations, providing insight normally not discovered – and certainly not validated – by an isolated implementation. This same collaboration, this cooperative and joint design and implementation of architectures, is required within the enterprise as well. It’s not enough for architects to design and subsequently “toss over the wall” an enterprise reference architecture. It’s not enough for application specialists in the enterprise to toss a deployment over the wall to the network and security operations teams. Collaboration across compute, network and storage infrastructure requires collaboration across the teams responsible for their management, implementation and optimal configuration. THE FUTURE is REPEATABLE This F5-IBM solution is the tangible representation of an emerging model of collaborative, documented and repeatable architectures. It’s an extension of an existing model F5 has used for years to provide the expertise and insight of the engineers and architects inside the organization that know the products best, and understand how to integrate, optimize and deploy successfully such joint efforts. Repeatable architectures are as important an evolution in the support of jointly developed solutions as APIs and dynamic control planes are to the successful implementation of data center automation. More information on the F5-IBM repeatable enterprise cloud architecture: Why You Need a Cloud to Call Your Own – F5 and IBM White Paper Building an Enterprise Cloud with F5 and IBM – F5 Tech Brief SlideShare Presentation F5 and IBM: Cloud Computing Architecture – Demo Related blogs & articles: F5 Application Ready Solutions F5 and IBM Help Enterprise Customers Confidently Deploy Private Clouds F5 Friday: A War of Ecosystems Data Center Feng Shui: Process Equally Important as Preparation Don’t Underestimate The Value Of Information, Documentation, And Expertise! Service Provider Series: Managing the IPv6 Migration220Views0likes0Comments