apis
Introducing the F5 CLI – Easy button for the F5 Automation Toolchain
The F5 command-line interface (F5 CLI) is the latest addition to the F5 automation family and makes it easier to deploy application services using the F5 Automation Toolchain and F5 Cloud Services. The F5 CLI was built with ease of use in mind and takes its inspiration from other cloud-based command-line interfaces like those delivered by AWS, Azure, and GCP. The beauty of the F5 CLI is that you can easily deploy your applications using the AS3/DO/TS or Cloud Services declarative models without any software development or coding knowledge, and without third-party tools or programming languages, in order to make use of the APIs.

Benefits of the F5 CLI:
• Quickly access and consume F5's APIs and services with a familiar remote CLI UX
• Configurable settings
• Include common actions in Continuous Deployment (CD) pipelines
• Prototype test calls that may be used in more complex custom integrations using the underlying SDK
• Supports discovery activities/querying of command-line results (for example, "list accounts" to find the desired account which will be used as an input to final automation)
• Supports quick one-off automation activities (for example, leveraging a bash loop to create/delete large lists of objects)

Ease of use

We are going to show two examples of using the F5 Command Line Interface (F5 CLI):
• Using the F5 CLI as a simple command-line interface, and
• Using the F5 CLI for rapid F5 integration into a simple automation pipeline

Using the F5 CLI straight from the command line

The F5 CLI is delivered as a Docker container or as a Python application and is very simple to run. One command is required to get the F5 CLI going as a Docker image; for example:

docker run -it -v "$HOME/.f5_cli:/root/.f5_cli" -v "$(pwd):/f5-cli" f5devcentral/f5-cli:latest /bin/bash

And from there you can simply type "f5" and you will be presented with the command-line options:

#f5 --help
Usage: f5 [OPTIONS] COMMAND [ARGS]...

  Welcome to the F5 command line interface.

Options:
  --version  Show the version and exit.
  --help     Show this message and exit.

Commands:
  bigip   Manage BIG-IP
  login   Login to BIG-IP, F5 Cloud Services, etc.
  cs      Manage F5 Cloud Services
  config  Configure CLI authentication and configuration

--help is your friend here, as it will show you all of the various options that you have. Let's look at a simple example of logging in and sending an F5 AS3 declaration to a BIG-IP.

Login:

#f5 login --authentication-provider bigip --host 2.2.2.2 --user auser --password apassword
{
    "message": "Logged in successfully"
}

Next, send a declaration to your BIG-IP, BIG-IQ, or F5 Cloud Services. For example, the following command creates a VIP on a BIG-IP with 2 pool members and associates a WAF policy:

bash-5.0# f5 bigip extension as3 create --declaration basicwafpolicy.json

Reference:
- Review the AS3 declaration above here

Using the F5 CLI as part of a continuous deployment pipeline

From there, it is also simple to think about ways that you could use the F5 CLI as part of a continuous deployment pipeline. This example uses Jenkins. Jenkins is an open-source automation server that helps automate the parts of software development related to building, testing, and deploying, facilitating continuous integration and continuous delivery. It is relatively simple to use the F5 CLI as part of a Jenkins pipeline.
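One of the listed benefits is driving quick one-off automation with a shell loop. As a minimal sketch under stated assumptions (the F5 CLI is installed locally, the additional host addresses and credentials are placeholders, and basicwafpolicy.json is the declaration used above), the same login-and-deploy flow could be repeated across several BIG-IPs like this:

#!/usr/bin/env bash
# Sketch: push the same AS3 declaration to a list of BIG-IPs with the F5 CLI.
# Hosts, credentials, and the declaration file are illustrative placeholders.
set -euo pipefail

DECLARATION="basicwafpolicy.json"
BIGIP_USER="auser"
BIGIP_PASSWORD="apassword"

for HOST in 2.2.2.2 2.2.2.3 2.2.2.4; do
    # Authenticate to this BIG-IP; the CLI keeps its state under ~/.f5_cli
    f5 login --authentication-provider bigip --host "$HOST" --user "$BIGIP_USER" --password "$BIGIP_PASSWORD"
    # Create (or update) the application service from the declaration
    f5 bigip extension as3 create --declaration "$DECLARATION"
done

The same pattern works in reverse for tearing objects down, which is exactly the sort of repetitive task the CLI is meant to absorb.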
In my example, I run a Docker container with a container-based CloudBees Jenkins distribution, which in turn runs Docker inside the container in order to make use of the F5 CLI.

Configuring the F5 CLI to run on a Jenkins Docker container

In order to get Jenkins running in my lab I use the CloudBees Jenkins distribution. In my case, for this demonstration, I am running Docker on my laptop; a production deployment would be configured slightly differently.

- First, install Docker. For more information on how to install Docker, see https://docs.docker.com/get-docker/
- Pull a CloudBees Jenkins distribution (https://hub.docker.com/r/cloudbees/cloudbees-jenkins-distribution/):

docker pull cloudbees/cloudbees-jenkins-distribution

- Run the following command in a terminal in order to start the initial Jenkins setup:

docker run -u root -p 8080:8080 -p 50000:50000 -v ~/tmp/jenkins-data:/var/cloudbees-distribution -v /var/run/docker.sock:/var/run/docker.sock cloudbees/cloudbees-jenkins-distribution

- Access the Jenkins interface at http://0.0.0.0:8080 to perform the initial setup. You will need the initial admin password to proceed, which in my case is written to the terminal window from which I am running Jenkins:

Jenkins initial setup is required. An admin user has been created and a password generated.
Please use the following password to proceed to installation:

d425a181f15e4a6eb95ca59e683d3411

This may also be found at: /var/cloudbees-jenkins-distribution/secrets/initialAdminPassword

- Follow the in-browser instructions to create users and then install the recommended plugins in Jenkins.
- After you have Jenkins configured, you must then install Docker inside of the container. Open another terminal window and run:

docker ps
... make a note of the Jenkins container name ...
docker exec -it JENKINSCONTAINERNAME /bin/bash
... inside the container shell ...
apt install docker.io

Setting up your Jenkins pipeline

- Inside of Jenkins, select "new item" and then "pipeline." Name your pipeline and select the "OK" button.
- After that, you will be presented with a list of tabs, and you should be able to copy the pipeline code below into the pipeline tab.

This enables a fairly simple Jenkins pipeline that shows how easy it is to build an automation process from scratch. The following is an example Jenkins pipeline that automates the F5 CLI:
pipeline {
    /* These should really be in a vault/credentials store, but for testing
     * they'll be added as parameters that can be overridden in the job
     * configuration.
     */
    parameters {
        string(name: 'BIGIP_ADDRESS', defaultValue: '2.2.2.2', description: 'The IP address for BIG-IP management.')
        string(name: 'BIGIP_USER', defaultValue: 'auser', description: 'The user account for authentication.')
        password(name: 'BIGIP_PASSWORD', defaultValue: 'apassword', description: 'The password for authentication.')
        booleanParam(name: 'DISABLE_TLS_VERIFICATION', defaultValue: true, description: 'Disable TLS and HTTPS certificate checking.')
    }
    agent {
        docker { image 'f5devcentral/f5-cli:latest' }
    }
    stages {
        stage('Prepare F5 CLI configuration') {
            steps {
                script {
                    if (params.DISABLE_TLS_VERIFICATION) {
                        echo 'Configuring F5 CLI to ignore TLS/HTTPS certificate errors'
                        sh 'f5 config set-defaults --disable-ssl-warnings true'
                    }
                }
                script {
                    echo "Authenticating to BIG-IP instance at ${params.BIGIP_ADDRESS}"
                    sh "f5 login --authentication-provider bigip --host ${params.BIGIP_ADDRESS} --user ${params.BIGIP_USER} --password ${params.BIGIP_PASSWORD}"
                }
            }
        }
        stage('Pull declaration from GitHub (or not)') {
            steps {
                script {
                    echo 'Pull WAF policy from GitHub'
                    sh 'wget https://raw.githubusercontent.com/dudesweet/simpleAS3Declare/master/basicwafpolicy.json -O basicwafpolicy.json'
                }
            }
        }
        stage('Post declaration to BIG-IP (or not)') {
            steps {
                script {
                    echo 'Post declaration to BIG-IP (or not)'
                    sh 'f5 bigip extension as3 create --declaration basicwafpolicy.json'
                }
            }
        }
    }
}

Reference:
- Review the Jenkinsfile here

I have also included a 5-minute presentation and video demonstration that covers using the F5 CLI. The video reviews the F5 CLI and then goes on to run the above Jenkins pipeline: https://youtu.be/1KGPCeZex6A

Conclusion

The F5 CLI is very easy to use and can get you up and running fast with the F5 API. You don't have to be familiar with any particular programming languages or automation tools; you can use a command line to drive F5 declarative APIs for simple and seamless F5 service verification, automation, and insertion. Additionally, we showed how easy it is to integrate the F5 CLI into Jenkins, an open-source software development and automation server, so you can begin your automation journey and easily include F5 as part of your automation pipelines.

Perhaps the final thing I should say is that it doesn't have to stop here. The next step could be to use the F5 Python SDK, or to explore how F5 APIs can be integrated into the F5 ecosystem with tools like Ansible or Terraform and many more. For more information, your starting point should always be https://clouddocs.f5.com/.

Links and References

F5 SDK
- GitHub repo: https://github.com/f5devcentral/f5-sdk-python
- Documentation: https://clouddocs.f5.com/sdk/f5-sdk-python/

F5 CLI
- GitHub repo: https://github.com/f5devcentral/f5-cli
- Docker container: https://hub.docker.com/r/f5devcentral/f5-cli
- Documentation: https://clouddocs.f5.com/sdk/f5-cli/

If you want to talk about F5 cloud solutions, we have a Slack channel: Slack / F5cloudsolutions: https://f5cloudsolutions.slack.com

CloudBees Jenkins distribution: https://hub.docker.com/r/cloudbees/cloudbees-jenkins-distribution/
Docker: https://docs.docker.com/get-docker/

The Stealthy Ascendancy of JSON
While everyone was focused on cloud, JSON has slowly but surely been taking over the application development world.

It looks like the debate between XML and JSON may be coming to a close, with JSON poised to take the title of preferred format for web applications. If you don't consider these statistics to be impressive, consider that ProgrammableWeb indicated that its "own statistics on ProgrammableWeb show a significant increase in the number of JSON APIs over 2009/2010. During 2009 there were only 191 JSON APIs registered. So far in 2010 [August] there are already 223!" Today there are 1262 JSON APIs registered, which means a growth rate of 565% in the past eight months, nearly catching up to XML, which currently lists 2162 APIs. At this rate, JSON will likely overtake XML as the preferred format by the end of 2011.

This is significant to both infrastructure vendors and cloud computing providers alike, because it indicates a preference for a programmatic model that must be accounted for when developing services, particularly those in the PaaS (Platform as a Service) domain. PaaS has yet to grab developers' mindshare, and it may be that support for JSON will be one of the ways in which that mindshare is attracted.

Consider the results of the "State of Web Development 2010" survey from Web Directions, in which developers were asked about their cloud computing usage; only 22% responded in the affirmative to utilizing cloud computing. But of those 22% that do leverage cloud computing, the providers they use are telling: PaaS represents a mere 7.35% of developers' use of cloud computing, with storage (Amazon S3) and IaaS (Infrastructure as a Service) garnering 26.89% of responses. Google App Engine is the dominant PaaS platform at the moment, most likely owing to the fact that it is primarily focused on JavaScript, UI, and other utility-style services as opposed to Azure's middleware and definitely more enterprise-class focused services.

SaaS, too, is failing to recognize the demand from developers and the growing ascendancy of JSON. Consider this exchange on the Salesforce.com forums regarding JSON:

Come on salesforce lets get this done. We need to integrate, we need this [JSON].

If JSON continues its steady rise into ascendancy, PaaS and SaaS providers alike should be ready to support JSON-style integration, as its growth pattern indicates it is not going away but is instead picking up steam. Providers able to support JSON for PaaS and SaaS will have a competitive advantage over those that do not, especially as they vie for the hearts and minds of developers, which are, after all, their core constituency.

THE IMPACT

What the steady rise of JSON should trigger for providers and vendors alike is a need to support JSON as the means by which services are integrated, invoked, and data exchanged. Application delivery, service-provider, and Infrastructure 2.0 focused solutions need to provide APIs that are JSON compatible and which are capable of handling the format to provide core infrastructure services such as firewalling and data scrubbing duties. The increasing use of JSON-based APIs to integrate with external, third-party services continues to grow, and the demand for enterprise-class services to support JSON will continue to rise.
There are drawbacks, and this steady movement toward JSON has in some cases a profound impact on the infrastructure and architectural choices made by IT organizations, especially in terms of providing for consistency of services across what is likely a very mixed-format environment. Identity and access management and security services may not be prepared to handle JSON APIs nor provide the same services as they have for XML, which through long-established usage and efforts comes with its own set of standards.

Including social networking "streams" in applications and websites is now as common as including images, but changes to APIs may make basic security chores difficult. Consider that Twitter – very quietly – has moved to supporting JSON only for its Streaming API. Organizations that were, as well they should, scrubbing such streams to prevent both embarrassing as well as malicious code from being integrated unknowingly into their sites may have suddenly found that infrastructure providing such services no longer worked:

API providers and developers are making their choice quite clear when it comes to choosing between XML and JSON. A nearly unanimous choice seems to be JSON. Several API providers, including Twitter, have either stopped supporting the XML format or are even introducing newer versions of their API with only JSON support. In our ProgrammableWeb API directory, JSON seems to be the winner. A couple of items are of interest this week in the XML versus JSON debate. We had earlier reported that come early December, Twitter plans to stop support for XML in its Streaming API.
-- JSON Continues its Winning Streak Over XML, ProgrammableWeb (Dec 2010)

Similarly, caching and acceleration services may be confused by a change from XML to JSON; from a format that was well understood and for which solutions were enabled with parsing capabilities to one that is not.

IT'S THE DATA, NOT THE API

The fight between JSON and XML is one we continue to see in a general sense. See, it isn't necessarily the API that matters, in the end, but the data format (the semantics) used to exchange that data. XML is considered unstructured, though in practice it's far more structured than JSON in the sense that there are meta-data standards for XML that constrain security, identity, and even application formats. JSON, however, although having been included natively in ECMA v5 (JSON data interchange format gets ECMA standards blessing), has very few standards aside from those imposed by frameworks and toolkits such as jQuery.

This will make it challenging for infrastructure vendors to support services targeting application data – data scrubbing, web application firewall, IDS, IPS, caching, advanced routing – and to continue to effectively deliver such applications without recognizing JSON as an option. The API has become little more than a set of URIs, and nearly all infrastructure directly related to application delivery is more than capable of handling them. It is the data, however, that presents a challenge, and which makes the developers' choice of formats so important in the big picture. It isn't just the application and integration that is impacted; it's the entire infrastructure and architecture that must adapt to support the data format. The World Doesn't Care About APIs – but it does care about the data, about the model. Right now, it appears that model is more than likely going to be presented in a JSON-encoded format.
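The scrubbing chore described above is easier to picture with a concrete sketch. The example below uses jq on the command line purely as an illustration of the kind of whitelisting an infrastructure service would need to perform on a JSON stream; the stream URL and field names are hypothetical placeholders, not any vendor's or Twitter's actual schema:

# Read a stream of JSON objects (one per line), drop any entry whose text
# looks like embedded markup/script, and pass through only expected fields.
curl -s https://stream.example.com/statuses.json \
  | jq -c 'select((.text // "") | test("<\\s*script"; "i") | not)
           | {id, user: .user.screen_name, text}'

The point is not the tool; it is that whichever device sits in the data path has to be able to parse and filter JSON with the same fluency it already has for XML.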
JSON data interchange format gets ECMA standards blessing
JSON Continues its Winning Streak Over XML
JSON versus XML: Your Choice Matters More Than You Think
I am in your HTTP headers, attacking your application
The Web 2.0 API: From collaborating to compromised
Would you risk $31,000 for milliseconds of application response time?
Stop brute force listing of HTTP OPTIONS with network-side scripting
The New Distribution of The 3-Tiered Architecture Changes Everything
Are You Scrubbing the Twitter Stream on Your Web Site?

What Ops Needs to Know about APIs and Compression
I've been reading up on APIs cause, coolness. And in particular I really enjoyed reading Best Practices for Designing a Pragmatic RESTful API because it had a lot of really good information and advice. And then I got to the part about compressing your APIs.

Before we go too far, let me first say I'm not saying you shouldn't compress your API or app responses. You probably should. What I am saying is that where you compress data and when are important considerations. That's because generally speaking no one has put their web server (which is ultimately what tends to serve up responses, whether they're APIs or objects, XML or JSON) at the edge of the Internet. You know, where it's completely vulnerable. It's usually several devices back in the networking gauntlet that has to be run before data gets from the edge of your network to the server. This is because there are myriad bad actors out there salivating at the prospect of a return to an early-aughts data center architecture in which firewalls, DDoS protection, and other app security services were not physically and logically located upstream from the apps they protect, as they are today. Cause if you don't have to navigate the network, it's way easier to launch an attack on an app. Today, we employ an average of 11 different services in the network, upstream from the app, to provide security, scale, and performance-enhancing services. Like compression.

Now, you can enable compression on the web server. It's a standard thing in HTTP and it's little more than a bit to flip in the configuration. Easy peasy performance-enhancing change, right? Except that today that's not always true.

The primary reason compression improves performance is that when it reduces the size of data it reduces the number of packets that must be transmitted. That reduces the potential for congestion that causes a Catch-22 where TCP retransmits increase congestion, which increases packet loss, which increases... well, you get the picture. This is particularly true when mobile clients are connecting via cellular networks, because latency is a real issue for them and the more round trips it takes, the worse the application experience. Suffice to say that the primary reason compression improves performance is that it reduces the amount of data needing to be transmitted, which means "faster" delivery to the client. Fewer packets = less time = happier users. That's a good thing.

Except when compression gets in the way or doesn't provide any real reduction that would improve performance. What? How can that be, you ask. Remember that we're looking for compression to reduce the number of packets transmitted, especially when it has to traverse a higher-latency, lower-capacity link between the data center and the client. It turns out that sometimes compression doesn't really help with that.

Consider the aforementioned article and its section on compression. The author ran some tests and concluded that compression of text-based data produces some really awesome results:

Let's look at this with a real world example. I've pulled some data from GitHub's API, which uses pretty print by default.
I'll also be doing some gzip comparisons:

$ curl https://api.github.com/users/veesahni > with-whitespace.txt
$ ruby -r json -e 'puts JSON JSON.parse(STDIN.read)' < with-whitespace.txt > without-whitespace.txt
$ gzip -c with-whitespace.txt > with-whitespace.txt.gz
$ gzip -c without-whitespace.txt > without-whitespace.txt.gz

The output files have the following sizes:

without-whitespace.txt - 1252 bytes
with-whitespace.txt - 1369 bytes
without-whitespace.txt.gz - 496 bytes
with-whitespace.txt.gz - 509 bytes

In this example, the whitespace increased the output size by 8.5% when gzip is not in play and 2.6% when gzip is in play. On the other hand, the act of gzipping in itself provided over 60% in bandwidth savings. Since the cost of pretty printing is relatively small, it's best to pretty print by default and ensure gzip compression is supported!

To further hammer in this point, Twitter found that there was an 80% savings (in some cases) when enabling gzip compression on their Streaming API. Stack Exchange went as far as to never return a response that's not compressed!

Wow! I mean, from a purely mathematical perspective, that's some awesome results. And the author is correct in saying it will provide bandwidth savings. What those results won't necessarily do is improve performance, because the original size of the file was already less than the MSS for a single packet. Which means compressed or not, that data takes exactly one packet to transmit. That's it. I won't bore you with the mathematics, but the speed of light and networking says one packet takes the same amount of time to transit whether it's got 496 bytes of payload or 1369 bytes of payload. The typical MSS for Ethernet packets is 1460 bytes, which means compressing something smaller than that effectively nets you nothing in terms of performance. It's like a plane: it takes as long to fly from point A to point B whether there are 14 passengers or 140. Fuel efficiency (bandwidth) is impacted, but that doesn't really change performance, just the cost.

Furthermore, compressing the payload at the web server means that web app security services upstream have to decompress if they want to do their job, which is to say scan responses for sensitive or excessive data indicative of a breach of security policies. This is a big deal, kids. 42% of respondents in our annual State of Application Delivery survey always scan responses as part of their overall security strategy to prevent data leaks. Which means they have to spend extra time to decompress the data to evaluate it and then recompress it, or perhaps they can't inspect it at all.

Now, that said, bandwidth savings are a good thing. It's part of any comprehensive scaling strategy to consider the impact of increasing use of an app on bandwidth. And a clogged-up network can impact performance negatively, so compression is a good idea. But not necessarily at the web server. This is akin to carefully considering where you enforce SSL/TLS security measures, as there are similar impacts on security services upstream from the app / web server. That's why the right place for compression and SSL/TLS is generally upstream, in the network, after security has checked out the response and it's actually ready to be delivered to the client. That's usually the load balancing service or the ADC, where compression can not only be applied most efficiently but also without interfering with security services or offsetting the potential gains by forcing extra processing upstream.
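To make the single-packet argument concrete, here is a quick back-of-the-envelope check of how many 1460-byte segments each of the file sizes above would actually need; nothing vendor-specific is assumed:

# How many TCP segments does each payload require at a 1460-byte MSS?
for BYTES in 1369 1252 509 496; do
    awk -v b="$BYTES" 'BEGIN { printf "%4d bytes -> %d segment(s)\n", b, int((b + 1459) / 1460) }'
done
# Every one of the four payloads fits in a single segment, so gzip trims
# bandwidth here but cannot reduce the number of round trips.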
As with rate limiting APIs, it's not always a matter of whether or not you should; it's a matter of where you should. Architecture, not algorithms, is the key to scale and performance of modern applications.

4 things you can do in your code now to make it more scalable later
No one likes to hear that they need to rewrite or re-architect an application because it doesn't scale. I'm sure no one at Twitter thought that they'd need to be overhauling their architecture because it gained popularity as quickly as it did. Many developers, especially in the enterprise space, don't worry about the kind of scalability that sites like Twitter or LinkedIn need to concern themselves with, but they still need to be (or at least should be) concerned with scalability in general and the effects of inserting an application into a high-scalability environment, such as one fronted by a load balancer or application delivery controller.

There are some very simple things you can do in your code, when you're developing an application, that can ease the transition into a high-availability architecture and that will eventually lead to a faster, more scalable application. Here are four things you can do now - and why - to make your application fit better into a high availability environment in the future and avoid rewriting or re-architecting your solutions later.

1. Don't assume your application is always responsible for cookie encryption

Encrypting cookies in today's privacy-lax environment that is the Internet is the responsible thing to do. In the first iterations of your application you will certainly be responsible for handling the encryption and decryption of cookies, but later on, when the application is inserted into a high-availability environment and there exists an application delivery controller (ADC), that functionality can be offloaded to the ADC. Offloading the responsibility for encryption and decryption of cookies to the ADC improves performance because the ADC employs hardware acceleration. To make it easier to offload this responsibility to an ADC in the future but support it early on, use a configuration flag to indicate whether you should decrypt or encrypt cookies before examining them. That way you can simply change the configuration flag later on and immediately take advantage of a performance boost from the network infrastructure.

2. Don't assume the client IP is accurate

If you need to use/store/access the client's IP address, don't assume the traditional HTTP header is accurate. Early on it certainly will be, but when the application is inserted into a high availability environment and a full-proxy solution is sitting in front of your application, it won't be. A full proxy mediates between client and server, which means it is the client when talking to the server, so its IP address becomes the "client IP". Almost all full proxies insert the real client IP address into the X-Forwarded-For HTTP header, so you should always check that header before checking the client IP address. If there is an X-Forwarded-For value, you'll more than likely want to use it instead of the client IP address (a quick way to test this is sketched at the end of this article). This simple check should alleviate the need to make changes to your application when it's moved into a high availability environment.

3. Don't use relative paths

Always use the FQDN (fully qualified domain name) when referencing images, scripts, etc. inside your application. Furthermore, use different host names for different content types - i.e. images.example.com and scripts.example.com.
Early on, all the hosts will probably point to the same server, but ensuring that you're using the FQDN now makes architecting that high availability environment much easier. While any intelligent application delivery controller can perform layer 7 switching on any part of the URI and arrive at the same architecture, it's much more efficient to load balance and route application data based on the host name. By using the FQDN and separating host names by content type you can later optimize and tune specific servers for delivery of that content, or use the CNAME trick to improve parallelism and performance in request-heavy applications.

4. Separate out API rate limiting functionality

If you're writing an application with an API for integration later, separate out the rate limiting functionality. Initially you may need it, but when the application is inserted into a high-availability environment with an intelligent application delivery controller, it can take over that functionality and spare your application from having to reject requests that exceed the set limits. Like cookie encryption, use a configuration flag to determine whether you should check this limitation or not so it can easily be turned on and off at will. By offloading the responsibility for rate limiting to an application delivery controller you remove the need for the server to waste resources (connections, RAM, cycles) on requests it won't respond to anyway. This improves the capacity of the server and thus your application, making it more efficient and more scalable.

By thinking about the ways in which your application will need to interact with a high availability infrastructure later, and adjusting your code to take that into consideration, you can save yourself a lot of headaches later on when your application is inserted into that infrastructure. That means less rewriting of applications, less troubleshooting, and fewer servers needed to scale up quickly to meet demand. Happy coding!
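Following up on item 2, the X-Forwarded-For behavior is easy to verify from the command line before the app ever sits behind a real proxy. This is a minimal sketch; the endpoint, path, and address are placeholders for whatever your application exposes:

# Simulate a full proxy by injecting the header a real proxy would add,
# then confirm the application reports the forwarded address rather than
# the address of the machine making the request.
curl -s -H "X-Forwarded-For: 203.0.113.7" http://app.example.com/whoami
# Expected: the app logs (or echoes) 203.0.113.7 as the client IP.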
Performance Testing Web 2.0: Don't forget the APIs

Keynote, well known for its application performance testing and monitoring services, just announced a new version of its KITE (Keynote Internet Testing Environment) that is now capable of testing Web 2.0 sites that make use of AJAX, Flash, and other "hidden" methods of obtaining content.

Announcement of KITE 2:

Performance testing dynamic and HTML websites is now a fairly straightforward process; however, the rise of Web 2.0 sites that don't rely on clicking to reveal another new page has been almost impossible to test. However Keynote has now developed a scripted system that allows you to test Web 2.0 sites as easily as Web 1.0 sites. KITE 2.0 (Keynote Internet Testing Environment) is the latest version of Keynote's product for testing and analysing the performance of Web applications. KITE 2.0 gives users the flexibility of testing instantly from their desktop or from geographic locations across Keynote's on-demand global test and measurement network. According to Keynote, KITE 2.0 enables Web developers, QA professionals, performance analysts and others to execute rapid performance analysis and validation by measuring the end user experience of next-generation Web 2.0 applications that include AJAX and asynchronously downloaded content.

Accounting for the additional burden placed on servers by "hidden" requests is imperative when attempting to understand both capacity and end-user performance. But when you're laying out that testing plan, don't forget about any APIs you might be providing. Like Twitter, Google, Facebook, and Amazon, if you're using an API to allow integration with other Web 2.0 sites or applications, make certain you performance test that as well. And then test them at the same time you're load-testing your application. As we've learned from Twitter's most public stability issues, the APIs are going to put additional stress on your network, servers, and databases that must be considered when determining capacity and performance levels.

Important, too, is to take into consideration RSS feeds coming from your site. Include those in your performance test, as the retrieval of those files adds an additional burden on servers and can impact the performance of the entire site. Most feed readers and aggregators poll for RSS feeds on a specified interval, so set up a script to grab that RSS from multiple clients every 10 minutes or so during the testing to ensure that you're simulating maximum load as closely as possible.

Performance testing sites today is getting more complex because we're adding so many entry points into our sites and applications. Don't forget about those extra entry points when you perform your load testing or you might find out in a most public way that your application just can't handle the load.
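The RSS polling mentioned above is easy to script so it can run alongside the main load test. A minimal sketch follows; the feed URLs, test duration, and 10-minute interval are illustrative placeholders:

#!/usr/bin/env bash
# Poll a set of RSS feeds on a fixed interval for the duration of a load
# test, roughly simulating feed readers hitting the site on a schedule.
FEEDS="https://www.example.com/feed.rss https://www.example.com/comments.rss"
DURATION_MINUTES=120
INTERVAL_SECONDS=600

END=$(( $(date +%s) + DURATION_MINUTES * 60 ))
while [ "$(date +%s)" -lt "$END" ]; do
    for FEED in $FEEDS; do
        # -o /dev/null: the goal is load and timing data, not the content
        curl -s -o /dev/null -w "%{http_code} %{time_total}s ${FEED}\n" "$FEED" &
    done
    wait
    sleep "$INTERVAL_SECONDS"
done

Run several copies of this from different client machines to approximate "multiple clients" pulling the feed at once.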
Can the future of application delivery networks be found in neural network theory?

I spent a big chunk of time a few nights ago discussing neural networks with my oldest son over IM. It's been a long time since I've had reason to dig into anything really related to AI (artificial intelligence) and at first I was thinking how cool it would be to be back in college just exploring topics like that. Then, because I was trying to balance a conversation with my oldest while juggling my (fussy) youngest on my lap, I thought no, no it wouldn't.

Artificial neural networks (ANN) are good for teaching a system how to recognize patterns, discern complex mathematical relationships, and make predictions based on a variety of inputs. It learns by trying and trying again until the output matches what is expected given a sample (training) data set. That learning process requires feedback; feedback that is often given via backpropagation. Backpropagation can be tricky, but essentially it's the process of determining how far off the output is from the expected output, and then propagating that back into the network so it can essentially learn from its mistakes. Just like us.

If you guessed that this was going to tie back into application delivery, you guessed correctly. An application delivery network is not a neural network, but it often has many of the same properties, such as using something similar to a hidden layer (the application delivery controller) to make decisions about application messages, such as to which server to distribute them and how to best optimize those messages. More interestingly, perhaps, is the ability to backpropagate errors and information through the application delivery network such that the application delivery network automatically adjusts itself and makes different decisions for subsequent requests. If the application delivery network is enabled with a services-based API, for example, it can be integrated into applications to provide valuable feedback regarding the state of that application and the messages it receives to the application delivery controller, which can then be adjusted to reflect changes in the state of that application. This is how we change the weights of individual servers in the load balancing algorithms, in what is somewhat akin to modifying the weights of the connections between neurons in a neural net. But it's merely a similarity now; it's not a real ANN, as it's missing some key attributes and behaviors that would make it one.

When you look at the way in which an application delivery network is deployed and how it acts, you can (or at least I can) see the possibilities of employing a neural network model in building an even smarter, more adaptable delivery network. Right now we have engineers that deploy, configure, and test application delivery networks for specific applications like Oracle, Microsoft, and BEA. It's an iterative process in which they continually tweak the configuration of the solutions that make up an application delivery network based on feedback such as response time, size of messages, and load on individual servers. When they're finished, they've documented an Application Ready Network with a configuration that is tuned for optimal performance and scalability for that application and that can easily be deployed by customers. But the feedback loop for this piece is mostly manual right now, and we only have so many engineers available for the hundreds of thousands of applications out there. And that's not counting all the in-house developed applications that could benefit from a similar process.
And our environment is not your environment. In the future, it would be awesome if application delivery networks acted more like neural networks, incorporating the feedback themselves based on designated thresholds (response time must be less than X, load on the server must not exceed Y) and tweaking themselves until they met their goals, all based on the applications and environment unique to the organization.

It's close; an intelligent application delivery controller is able to use thresholds for response time and size of application messages to determine to which server an individual request should be sent. And it can incorporate feedback through the use of service-based APIs integrated with the application. But it's not necessarily modifying its own configuration permanently based on that information; it doesn't have a "learning mode" like so many application firewall and security solutions. That's an important piece we're missing - the ability to learn the behavior of an application in a specific environment and adjust automatically to that unique configuration. Like learning that in your environment a specific application task runs faster on server X than it does on servers Y and Z, so it always sends that task to server X. We can do the routing via layer 7 switching, but we can't (yet) deduce what that routing should be from application behavior and automatically configure it.

We've come a long way since the early days of load balancing, where the goal was simply to distribute requests across machines equally. We've learned how to intelligently deliver applications, not just distribute them, in the years since the web was born. So it's not completely crazy to think that in the future the concepts used to build neural networks will be used to build application delivery neural networks. At least I don't think it is. But then crazy people don't think they're crazy, do they?
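To make the feedback loop a little less abstract, here is a minimal sketch of the idea in shell form: sample each pool member's response time and push a weight back to the load balancer. The ADC endpoint and its API are entirely hypothetical placeholders, not any particular product's interface, and a real implementation would live in the controller itself rather than a scheduled script:

#!/usr/bin/env bash
# For each pool member, sample response time and push a new weight to a
# (hypothetical) ADC management API so faster members receive more traffic.
ADC_API="https://adc.example.com/api/pools/app-pool/members"

for MEMBER in 10.0.0.11 10.0.0.12 10.0.0.13; do
    # Sample the member directly; %{time_total} is reported in seconds
    RT=$(curl -s -o /dev/null -w '%{time_total}' "http://${MEMBER}/health")
    # Crude inverse-latency weight, clamped to the range 1..100
    WEIGHT=$(awk -v rt="$RT" 'BEGIN { w = int(1 / rt); if (w > 100) w = 100; if (w < 1) w = 1; print w }')
    curl -s -X PATCH "${ADC_API}/${MEMBER}" \
         -H "Content-Type: application/json" \
         -d "{\"weight\": ${WEIGHT}}"
done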
The Case (For & Against) Management-Driven Scalability in Cloud Computing Environments

Examining responsibility for auto-scalability in cloud computing environments. [ If you're coming in late, you may want to also read previous entries on the network and the application ]

Today, the argument regarding responsibility for auto-scaling in cloud computing as well as highly virtualized environments remains mostly constrained to e-mail conversations and gatherings at espresso machines. It's an argument that needs more industry and "technology consumer" awareness, because it's ultimately one of the underpinnings of a dynamic data center architecture; it's the piece of the puzzle that makes or breaks one of the highest value propositions of cloud computing and virtualization: scalability.

The question appears to be a simple one: what component is responsible not only for recognizing the need for additional capacity, but acting on that information to actually initiate the provisioning of more capacity? Neither the answer, nor the question, it turns out, is as simple as it appears at first glance. There are a variety of factors that need to be considered, and each of the arguments for - and against - a specific component carries considerable weight. Today we're going to specifically examine the case for management frameworks as the primary driver of scalability in cloud computing environments.

ANSWER: MANAGEMENT FRAMEWORK

We're using "management framework" as a catch-all for the "system" in charge of "the cloud". In some cases this might be a commercial solution offered by popular vendors like VMware, Citrix, or Microsoft, or even one included in open-source solutions like Ubuntu. It might be a custom-built solution managed by a provider, like that of Amazon, Rackspace, BlueLock, and other cloud computing providers. These systems generally allow end-user (IT) control via APIs or web-based management consoles, and allow, in varying degrees, the provisioning and management of virtual instances and infrastructure services within the environment. This management capability implies, of course, control over the provisioning of resources - compute, network, and storage - as well as any services required, such as the load balancing services required to enable scalability.

Obviously this means the management framework has the ability to initiate a scaling event, because it has control over the required systems and components. The only problem with this approach is one we've heard before: integration fatigue. Because the network and server infrastructure often present management interfaces as open but unique APIs, the management framework must be enabled through integration to control them. This is less of a problem for server infrastructure, where there are few hypervisor platforms that require such integration. It is more of a challenge in the infrastructure arena, where there are many, many options for load balancing services. But let's assume for the sake of this argument that the network infrastructure has a common API and model and integration is not a problem.

The question remains: does the management framework recognize the conditions under which a scaling event should be initiated, i.e. does it have the pertinent information required to make that decision? Does it have the visibility necessary? In general, the answer is no, it does not. Most "cloud" management frameworks do not themselves collect the data upon which such a decision is made.
Doing so would almost certainly require a return to an agent-based collection model in which agents are deployed on every network and server infrastructure component as a means to feed that data back into the management system, where it is analyzed and used to determine whether a scaling event - up or down - is necessary. This is not efficient and, if you were to ask most customers, the prospect of paying for the resources consumed by such an agent may be a negative factor in deciding whether to use a particular provider or not.

The question remains, as well, as to how such a system would manage in real time to track the appropriate thresholds, by application and by instance, to ensure a scaling event is initiated at the right time. It would need to manage each individual instance as well as the virtual application entity that exists in the load balancing service. So not only would it need to collect the data from each, but it would need to correlate and analyze it on a fairly regular basis, which would require a lot of cycles in a fairly large deployment. This processing would be in addition to managing the actual provisioning process and keeping an eye on whether additional resources were available at any given moment. It seems impractical and inefficient to expect the management framework to perform all these duties. Perhaps it can - and even does for small environments - but scaling the management framework itself would be a herculean task as the environment and demands on it grew.

To sum up, management frameworks have the capabilities to manage scaling events, but like the "application" they have no practical means of visibility into the virtual application. Assuming visibility were possible, there remain processing and operational challenges that in the long run would likely impact the ability of the system to collect, correlate, and analyze data in large environments, making it impractical to lay sole responsibility on the management framework.

NEXT: Resolution to the Case (For & Against) X-Driven Scalability in Cloud Computing Environments

The Case (For & Against) Network-Driven Scalability in Cloud Computing Environments
The Case (For & Against) Application-Driven Scalability in Cloud Computing Environments
The Cloud Configuration Management Conundrum
IT as a Service: A Stateless Infrastructure Architecture Model
If a Network Can't Go Virtual Then Virtual Must Come to the Network
You Can't Have IT as a Service Until IT Has Infrastructure as a Service
This is Why We Can't Have Nice Things
Intercloud: Are You Moving Applications or Architectures?
The Consumerization of IT: The OpsStore
What CIOs Can Learn from the Spartans

Amazon Elastic Load Balancing Only Simple On the Outside
Amazon's ELB is an exciting mix of well-executed Infrastructure 2.0 and the proper application of SOA, but it takes a lot of work to make any infrastructure look that easy.

The notion of elastic load balancing, as recently brought to public attention by Amazon's offering of the capability, is nothing new. The basic concept is pure Infrastructure 2.0, and the functionality offered via the API has been available on several application delivery controllers for many years. In fact, looking through the options for Amazon's offering leaves me feeling a bit, oh, 1999. As if load balancing hasn't evolved far beyond the very limited subset of capabilities exposed by Amazon's API. That said, that's just the view from the outside. Though Amazon's ELB might be rudimentary in what it exposes to the public, it is certainly anything but primitive in its use of SOA and as a prime example of the power of Infrastructure 2.0. In fact, with the exception of GoGrid's integrated load balancing capabilities, provisioned and managed via a web-based interface, there aren't many good, public examples of Infrastructure 2.0 in action. Not only has Amazon leveraged Infrastructure 2.0 concepts with its implementation, but it has further taken advantage of SOA in the way it was meant to be used.

NOTE: What follows is just my personal analysis; I don't have any especial knowledge about what really lies beneath Amazon's external interfaces. The diagram is a visual interpretation of what I've deduced seems likely in terms of the interactions with ELB, given my experience with application delivery and the information available from Amazon, and should be read with that in mind.

WHAT DOES THAT MEAN?

When I say Amazon has utilized SOA in a way that it was meant to be used, I mean that their ELB "API" isn't just a collection of Web Services, or POWS, wrapped around some other API. It's actually a well-thought-out and designed set of interfaces that describe tasks associated with load balancing, not individual product calls. For example, if you take a look at the ELB WSDL you can see a set of operations that describe tasks, not management or configuration options, such as:

CreateLoadBalancer
DeleteLoadBalancer
RegisterInstancesWithLoadBalancer
DeregisterInstancesFromLoadBalancer

To understand why these are so significant and most certainly represent tasks and not individual operations, you have to understand how a load balancer is typically configured and how the individual configuration components fit together. Saying "DeleteLoadBalancer" is a lot easier than what really has to occur under the covers. Believe me, it's not as easy as a single API call on any load balancing solution. There are a lot of relationships inherent in a load balancing configuration between the virtual server/IP address and the (pools|farms|clusters) and individual nodes, a.k.a. instances in Amazon-speak. Yet if you take a look at the parameters required to "register instances" with the load balancer, you'll see only a list of instance ids and a load balancer name. All must be configured, but the APIs make this process appear almost magical.

The terminology used here indicates (to me at least) an abstraction, which means these operations are not communicating directly with a physical (or even virtual) device but rather are being sent to a management or orchestration system that in turn relays the appropriate API calls to the underlying load balancing infrastructure.
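For a sense of how little the consumer actually has to express, here is roughly what two of those task-oriented operations look like when driven from today's AWS CLI, which wraps the same classic ELB API operations (the article predates the unified CLI, and the names, zone, and instance IDs below are placeholders):

# Create a load balancer listening on port 80, then register two instances with it.
aws elb create-load-balancer \
    --load-balancer-name my-web-lb \
    --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
    --availability-zones us-east-1a

aws elb register-instances-with-load-balancer \
    --load-balancer-name my-web-lb \
    --instances i-0123456789abcdef0 i-0fedcba9876543210

Two calls, a handful of parameters, and all of the underlying virtual-server, pool, and node relationships are created on the consumer's behalf.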
The abstraction here appears to be pure SOA, and it is, if you don't mind my saying, a beautiful thing. Amazon has abstracted the actual physical implementation of not only the management or orchestration system, but also decoupled (as is proper) the physical infrastructure implementation from the services being provided. There is a clear separation of service from implementation, which allows Amazon to be using product X or Y, hardware or software, virtual or concrete, and even one or more vendor solutions at the same time, without the service consumer being aware of what that implementation may be.

The current offering appears to be pure layer 4 load balancing, which is a good place to start, but it lacks the robustness of a full layer 7 capable solution, and eventually Amazon will need to address some of the challenges associated with load balancing stateful applications for its customers; challenges that are typically addressed by the use of persistence, cookies, and URI-rewriting type functionality. Some of this type of functionality appears built in, but is not well documented by Amazon. For example, the forwarding of client IP addresses is a common challenge with load-balanced applications, and is often solved by using the custom HTTP header X-Forwarded-For. Ken Weiner addresses this in a blog post, indicating Amazon is indeed using common conventions to retain the client IP address and forward it to the instances being load balanced. It may be the case that more layer 7 specific functionality is exposed than it appears, but is simply not as well documented. If the underlying implementation is capable – and it appears to be, given the way ELB addresses client IP address preservation – it is a pretty good bet that Amazon will be able to address other challenges with relative ease given the foundation they've already built. That's agility; that's Infrastructure 2.0 and SOA. Can you tell I'm excited about this? I thought you might.

This gives Amazon some pretty powerful options, as it could switch out physical implementations with relative ease, as it so desires/needs, with virtually (sorry) no interruption to consumer services. Coupling this nearly perfect application of SOA with Infrastructure 2.0 results in an agility that is often mentioned as a benefit but rarely actually seen in the wild.

THIS IS INFRASTRUCTURE 2.0 IN ACTION

This is a great example of the power of Infrastructure 2.0. Not only is the infrastructure automated and remotely configured by the consumer, but it is integrated with other Amazon services such as CloudWatch (monitoring/management) and Auto Scaling. The level of sophistication under the hood of this architecture is cleverly hidden by the simplicity and elegance of the overlying SOA-based control plane, which encompasses all aspects of the infrastructure necessary to deliver the application and ensure availability.

Several people have been trying to figure out what, exactly, is providing the load balancing under the covers for Amazon. Is it a virtual appliance version of an existing application delivery controller? Is it a hardware implementation? Is it a proprietary, custom-built solution from Amazon's own developers? The reality is that you could insert just about any Infrastructure 2.0 capable application delivery controller or load balancer into the "?" spot on the diagram above and achieve the same results as Amazon. Provided, of course, you were willing to put the same amount of effort into the design and integration as has obviously been put into ELB.
While it would certainly be interesting to know for sure, the answer to that question is overridden in my mind by a bigger one: what other capabilities does the physical implementation have, and will they, too, surface in yet another service offering from Amazon? If the solution has other features and functionality, might they, too, be exposed over time in what will slowly become the Cloud Menu from which customers can build a robust infrastructure comprising more than just simple application delivery? Might it grow to provide security, acceleration, and other application delivery-related services, too? If the underlying solution is Infrastructure 2.0 capable – and it certainly appears to be – then the feasibility of such service offerings is more likely than not.

Cloud computing is not Burger King. You can't have it your way. Yet.

Ken's Blog: Amazon ELB – Capturing Client IP Address
Is Your Cloud Sticky? It should be.
Using "X-Forwarded-For" in Apache or PHP
Paradox: When Cloud Is Both the Wrong and the Right Solution
Amazon Compliance Confession About Customers, Not Itself
Virtual Private Cloud (VPC) Makes Internal Cloud bursting Reality
Your Cloud is Not a Precious Snowflake (But it Could Be)
Infrastructure 2.0 Is the Beginning of the Story, Not the End
The Revolution Continues: Let Them Eat Cloud

McCloud: Would You Like McAcceleration with Your Application?
The right infrastructure will eventually enable providers to suggest the right services for each customer based on real needs.

When I was in high school I had a job at a fast food restaurant, as many teenagers often do. One of the first things I was taught was "suggestive selling." That's the annoying habit of asking every customer if they'd like an additional item with their meal. Like fries, or a hot apple pie. The reason behind the requirement that employees "suggest" additional items is that studies showed a significant number of customers would, in fact, like fries with their meal if it was suggested to them. Hence the practice of suggestive selling. The trick is, of course, that it makes no sense to suggest fries with that meal when the customer ordered fries already. You have to actually suggest something the customer did not order. That means you have to be aware of what the customer already has and, I suppose, what might benefit them. Like a hot apple pie for dessert.

See, it won't be enough for a cloud provider to simply offer infrastructure services to its customers; they're going to have to suggest services to customers based on (1) what they already have and (2) what might benefit them.

CONTEXT-AWARE SUGGESTIVE SERVICE MENUS

Unlike the real-time suggestive selling practices used in fast-food restaurants, cloud providers will have to be satisfied with management-side suggestive selling. They'll have to compare the services they offer with the services customers have subscribed to and determine whether there is a good fit there. Sure, they could just blanket-offer services, but it's likely they'll have better success if the services they suggest would actually benefit the customer.

Let's say a customer has deployed a fairly typical web-based application. It's HTTP-based, of course, and the provider offers a number of application and protocol specific infrastructure services that may benefit the performance and security of that application. While the provider could simply offer the services based on the existence of that application, it would more likely be a successful "sell" if they shared the visibility into performance and capacity that would provide a "proof point" that such services were needed. Rather than just offer a compression service, the provider could – because it has visibility into the data streams – actually provide the customer with some data pertinent to the customer's application. For example, telling the customer they could benefit from compression is likely true, but showing the customer that 75% of their data is text and could ostensibly reduce bandwidth by X% (and thus their monthly bandwidth transfer costs) would certainly be better.

Similarly, providers could recognize standard applications and pair them with application-specific "templates" that are tailored to improve the performance and/or efficiency of that application, and suggest customers might benefit from the deployment of such capabilities in a cloud computing environment. This requires context, however, as you don't want to be offering services to which the customer is already subscribed or which may not be of value to the organization. If the application isn't consistently accessed via mobile devices, for example, attempting to sell the customer a "mobile device acceleration service" is probably going to be annoying as well as unsuccessful.
The provider must not only manage its own infrastructure service offerings, but it must also be able to provide visibility into traffic and usage patterns and intelligently offer services that make sense based on actual usage and events that are occurring in the environment. The provider needs to leverage visibility into the application's daily availability, performance, and security such that customers understand not only why, but also what additional infrastructure services may be of value to their organization.

SERVICES MUST EXIST FIRST

Of course, in order to offer such services they must exist first. Cloud computing providers must continue to evolve their environments and offerings to include infrastructure services in addition to their existing compute resource services, lest more and more enterprises turn their eyes inward toward their own data centers. The "private cloud" that does/doesn't exist – based on to whom you may be speaking at the moment – is an attractive concept for most organizations precisely because such services can continue to be employed to make the most of a cloud computing environment. While cloud computing in general is more efficient, such environments are made even more efficient by the ability to leverage infrastructure services designed to optimize capacity and improve performance of applications deployed in such environments.

Cloud computing providers will continue to see their potential market share eaten by private cloud computing implementations if they do not continue to advance the notion of infrastructure as a service and continue to offer only compute as a service. It's not just about security, even though every survey seems to focus on that aspect; it's about the control, flexibility, and visibility required by organizations to manage their applications and meet their agreed-upon performance and security objectives. It's about being able to integrate infrastructure and applications to form a collaborative environment in which applications are delivered securely, made highly available, and perform up to expected standards.

Similarly, vendors must ensure that providers are able to leverage APIs and functionality as services within the increasingly dynamic environments of both providers and enterprises. An API is table stakes, make no mistake, but it's not the be-all and end-all of Infrastructure 2.0. Infrastructure itself must also be dynamic, able to apply policies automatically based on a variety of conditions, including user environment and device. It must be context-aware and capable of changing its behavior to assist in balancing the demands of users, providers, and application owners simultaneously. Remember, Infrastructure 2.0 isn't just about rapid provisioning and management; it's about dynamically adapting to the changing conditions in the client, network, and server infrastructure in as automated a fashion as possible.

Related blogs & articles:

Infrastructure 2.0 + Cloud + IT as a Service = An Architectural Parfait
You Can't Have IT as a Service Until IT Has Infrastructure as a Service
Cloud Today is Just Capacity On-Demand
Your Cloud is Not a Precious Snowflake (But it Could Be)
Cloud computing is not Burger King. You can't have it your way. Yet.
The Other Hybrid Cloud Architecture
The New Distribution of The 3-Tiered Architecture Changes Everything
Infrastructure 2.0: Aligning the network with the business (and ...
Infrastructure 2.0: Flexibility is Key to Dynamic Infrastructure
Infrastructure 2.0: The Diseconomy of Scale Virus
Lori MacVittie - Infrastructure 2.0

RSA Impressions: The Intersection of Security and SDN
#rsac #infosec #devops #sdn If you cross log analysis with infrastructure integration you get some interesting security capabilities.

A lot of security-minded folks immediately pack up their bags and go home when you start talking about automating anything in the security infrastructure. Automating changes to data center firewalls, for example, seems to elicit a reaction not unlike the one you get by suggesting you put an unpatched Windows machine directly on the public Internet.

At RSA yesterday I happened to see a variety of booths with a focus on... logs. That isn't surprising, as log analysis is used across the data center and across domains for a variety of reasons. It's one of the ways databases are replicated, it's part of compiling access audit reports, and it's absolutely one of the ways in which intrusion attempts can be detected. And that's cool. Log analysis for intrusion detection is a good thing. But what if it could be better? What if we started considering operationalizing the process of acting on events raised by log analysis?

One of the promises of SDN is agility through programmability. The idea is that because the data path is "programmable" it can be modified at any time by the control plane using an API. In this way, SDN-enabled architectures can respond in real time to conditions on the network impacting applications. Usually this focuses on performance, but there's no reason it couldn't be applied to security as well. If you're using a log analysis tool capable of performing said analysis in near-time, and the analysis turns up suspicious activity, there's no reason it couldn't inform a controller of some kind on the network, which in turn could easily decide to enable infrastructure capabilities across the network. Perhaps to start capturing the flow, or injecting a more advanced inspection service (malware detection perhaps) into the service chain for the application.

In the service provider world, it's well understood that the requirement in traditional architectures to force flows through all services is inefficient. It increases the cost of the service and requires scaling every single service along with subscriber growth. Service providers are turning to service chaining and traffic steering as a means to more efficiently use only those services that are applicable, rather than the entire chain. While enterprise organizations for the most part aren't going to adopt service provider architectures, they can learn from them the value inherent in more dynamic network and service topologies. Does every request and response need to go through every security service? Or are some only truly needed for deep inspection?

It's about intelligence and integration. Real-time analysis on what is traditionally data at rest (logs) can net actionable data if infrastructure is API-enabled. It's taking the notion of scalability domains to a more dynamic level: not only ensuring scale of services individually to reduce costs, but further improving performance and efficiency by only consuming resources when necessary, instead of all the time. The key is being able to determine when it's necessary and when it isn't.

More reading on infrastructure architecture patterns supporting scalability domains:
Infrastructure Scalability Pattern: Partition by Function or Type
Infrastructure Scalability Pattern: Sharding Sessions
Infrastructure Scalability Pattern: Sharding Streams

In a service provider world, that determination is based on subscriber and traffic type.
In the enterprise it's more behavioral analysis; it's what someone is trying to do and with what application or data. But in the end, both environments need to be dynamic, with policy enforcement and service invocation based on the unique combination of devices, networks, and applications, and enabled by the increasing prevalence of API-enabled infrastructure.

SDN is going to propel not just operational networks as a cost-savings vehicle; it is also part of the technology that ultimately unlocks the software-defined data center. And that includes security.
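As a closing illustration, here is a minimal sketch of the log-to-controller feedback loop described above. The log path, suspicious-pattern regex, controller endpoint, and service-chain API are entirely hypothetical placeholders; the point is simply how little glue is required once the infrastructure exposes an API:

#!/usr/bin/env bash
# Watch an access log for suspicious patterns; on a match, ask a
# (hypothetical) SDN controller to steer that client's flows through a
# deeper inspection service instead of the normal chain.
CONTROLLER="https://controller.example.com/api/service-chains/web-app"

tail -F /var/log/nginx/access.log | while read -r LINE; do
    if echo "$LINE" | grep -qiE 'union select|<script|\.\./\.\.'; then
        SRC_IP=$(echo "$LINE" | awk '{print $1}')
        # Insert a malware/deep-inspection hop for this source address only
        curl -s -X POST "${CONTROLLER}/steer" \
             -H "Content-Type: application/json" \
             -d "{\"source\": \"${SRC_IP}\", \"insert_service\": \"deep-inspection\"}"
    fi
done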