Quick! The Data Center Just Burned Down, What Do You Do?
You get the call at 2am. The data center is on fire, and while the server room itself was protected by your high-tech fire-suppression gear, the rest of the building billowed smoke and noxious gasses that have contaminated your servers. Unless you have a sealed server room, this is a very real possibility. Another possibility is that the fire department had to spray a ton of liquid on your building to keep the fire from spreading; no sealed room means your servers might have taken a bath. And sealed rooms are a real rarity in data center design, for a whole host of reasons starting with cost.

So you turn to your DR plan, and step one is to make certain the load was shifted to an alternate location. That will buy you time to assess the damage. Little do you know that while that's a good start, it's probably not enough of a plan to get you back to normal quickly.

It still surprises me, when you talk to people about disaster recovery, how different IT shops have different views of what's necessary to recover from a disaster. The reason it surprises me is that few of them actually have a Disaster Recovery Plan; they have a "Pain Alleviation Plan". That may be sufficient, depending upon the nature of your organization, but it may not be. You are going to need buildings, servers, infrastructure, and the knowledge to put everything back together – even that system that ran for ten years after the team that implemented it moved on to a new job. Because it wouldn't still be running on Netware/Windows NT/OS2 if it wasn't critical and expensive to replace. If you're like most of us, you moved that system to a VM years ago if at all possible, but you'll still have to get it plugged into a network it can work on – and your wires? They're all suspect. The plan to restore your ADS can be painful in and of itself, let alone applying the different security settings to things like NAS and SAN devices, since they have different settings for different LUNs or even folders and files.
The massive amount of planning required to truly restore normal function of your systems is daunting to most organizations, and there are some question marks that just can't be answered today for a disaster that might happen in a year or even ten – hopefully never, but we do disaster planning so that we're prepared if it does, so "never" isn't a good outlook while planning for the worst. While still at Network Computing, I looked at some great DR plans, ranging from "send us VMs and we'll ship you servers ready to rock the same day your disaster happens" to "we'll drive a truck full of servers to your location and you can load them up with whatever you need and use our satellite connection to connect to the world". The problem is that both of these require money from you every month while providing benefit only if you actually have a disaster. Insurance is a good thing, but increasing IT overhead is risky business. When budget time comes, the temptation to stop paying each month for something not immediately forwarding business needs is palpable.

And both of those solutions miss the ever-growing infrastructure part. Could you replace your BIG-IPs (or other ADC gear) tomorrow? You could get new ones from F5 pretty quickly, but do you have their configurations backed up so you can restore? How about the dozens of other network devices, NAS and SAN boxes, and the network architecture? Yeah, it's going to be a lot of work. But it is manageable. There is going to be a huge time investment, but it's disaster recovery – the time investment is in response to an emergency. Even so, adequate planning can cut down the time you have to invest to return to business as usual, sometimes by huge amounts. Not having a plan is akin to setting the price for a product before you know what it costs to produce – you'll regret it. What do you need?
Well, if you're lucky, you have more than one data center, and all you need to do is slightly oversize them to make sure you can pick up the slack if one goes down. If you're not one of the lucky organizations, you'll need a plan for getting a building with sufficient power, internet capability, and space; replacing everything from power connections to racks to SAN and NAS boxes; restorable backups (seriously, test your backups or replication targets – there are horror stories…); and time for your staff to turn all of these raw elements into a functional data center. It's a tall order: you need backups of the configs of all appliances and information from all of your vendors about replacement timelines. But should you ever need this plan, it is far better to have done some research than to wake up in the middle of the night and then, while you are down, spend time figuring it all out.

The toughest bit is keeping it up to date, because a project to implement a DR plan is a discrete project, but updating costs for space and lists of vendors and gear on a regular basis is more drudgery and sits outside of project timelines. But it's worth the effort as insurance. And if your timeline is critical, look into one of those semi trailers – or the new thing (since 2005 or 2007 at least), containerized data centers – because when you need them, you need them. If you can't afford to be down for more than a day or two, they're a good stopgap while you rebuild.

SecurityProcedure.com has an aggregated list of free DR plans online. I've looked at a couple of the plans they list; they're not horrible, but make certain you customize them to your organization's needs. No generic plan is complete for your needs, so make certain you cover all of your bases if you use one of these. The key is to have a plan that dissects all the needs post-disaster.
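On the "backups of the configs of all appliances" point: for BIG-IP specifically, a full configuration archive can be captured from the command line. A minimal sketch (not runnable outside a BIG-IP; the archive path and backup host here are illustrative):

```
# Save a full configuration archive (UCS) on the BIG-IP:
tmsh save sys ucs /var/local/ucs/dr-backup.ucs

# Copy it off-box to a location that will survive the disaster:
scp /var/local/ucs/dr-backup.ucs backup-host:/srv/dr/bigip/

# Restore onto replacement hardware:
tmsh load sys ucs /var/local/ucs/dr-backup.ucs
```

The same idea applies to every appliance in your plan: a scheduled export plus an off-site copy turns "rebuild from memory" into "restore from archive".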
I've been through a disaster (the Great NWC Lab Flood), and there are always surprises, but having a plan to minimize them is a first step to maintaining your sanity and restoring your data center to full function. In the future – the not-too-distant future – you will likely have the cloud as a backup, assuming you have a product like our GTM to enable cloud-bursting, and that the Global Load Balancer isn't taken out by the fire. But even if it is, replacing one device to get your entire data center emulated in the cloud would be nowhere near as painful as the rush to reassemble physical equipment.

[Image: Marketing image of an IBM/APC containerized data center]

Lori and I? No, we have backups and insurance and that's about it. Though our network is complex, we don't have any businesses hosted on it, so this is perfectly acceptable for our needs. No containerized data centers for us. Let's hope we, and you, never need any of this.

WebSphere MQ and BIG-IP
WebSphere MQ is the industry-leading solution for messaging within the enterprise. As a result, I receive many questions and a lot of interest about BIG-IP Local Traffic Manager (LTM) and WebSphere MQ. A deployment guide is coming out soon and testing is complete, but F5 Networks already has customers using LTM to provide high availability and offload in MQ environments, so I thought I would share some of this guidance. There are two ways to deploy BIG-IP with WebSphere MQ: either deploying BIG-IP in front of DataPower XI50 devices, or in front of WebSphere Message Broker servers directly. In either case, the end result is the same: high availability, plus TCP and SSL offload. When XI50 devices are in play, they will be used for some XML transformation. Take a look at the IBM Redbook on deploying load balancing with WebSphere MQ. Beginning on page 148, we can see the typical scenario with DataPower devices in play: if we are using DataPower XI50s, the recommendation is that LTM load balance the XI50s. This is a typical setup with no persistence, a least connections load balancing method and a TCP monitor. BIG-IP brings TCP optimization, SSL offload and outage detection through monitoring. If XI50 devices are not in the infrastructure, the setup is very similar. Message Broker servers are set up identically, with Channels, Transmission Queues, Queue Managers and Queues configured with identical TCP ports and names on both systems. The TCP port of each of the queues is set up on BIG-IP LTM as a pool. Remember that with WebSphere MQ, each queue can have a TCP port, typically starting with TCP 1414. The next step is to set up a BIG-IP LTM virtual for each pool, which maps to a queue. In either scenario (XI50 deployed or not), TCP profiles should be adjusted on the BIG-IP to increase the default timeout, as WebSphere MQ keeps a connection open indefinitely and uses a heartbeat to avoid timeouts.
The TCP timeout should be set to a value slightly larger than the heartbeat value in order to avoid either port exhaustion on the BIG-IP or connection flapping on the MQ TCP connection port. The heartbeat and timeout are both configurable. I will update this blog once the official deployment guidance has been published. In the meantime, contact me if you have any questions.

Replacing the WebSphere Apache Plugin with iRules
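As a sketch of the timeout adjustment (the profile name, virtual server name, and timeout value here are illustrative, not official guidance – size the timeout against your actual MQ heartbeat interval):

```
# Create a TCP profile whose idle timeout (in seconds) slightly exceeds
# the MQ channel heartbeat interval (shown here assuming a 300-second heartbeat):
tmsh create ltm profile tcp mq_tcp_profile idle-timeout 330

# Attach the profile to the MQ virtual server:
tmsh modify ltm virtual vs_mq_1414 profiles add { mq_tcp_profile }
```

With the timeout just above the heartbeat, BIG-IP keeps idle-but-healthy MQ connections open rather than tearing them down between heartbeats.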
Problem Definition

"We’re having a bit of difficulty configuring the LTM to handle all the redirects that this WebSphere application does. We’ve tried streaming profiles and iRules, but every method seems to break one component while fixing another. The main trick seems to be trying to deal with the default WebSphere ports of 9081 and 9444 for HTTP and HTTPS, respectively. We ideally want to hide these odd-number ports from the end-user. Normally this is a fairly simple procedure, but it’s proved pretty challenging. The issue may lie on the server and/or in the application code, but we’d like to be able to flex the muscle of the F5, if we could, and solve the problem there. One of the main stumbling blocks seems to be pop-up windows for viewing documents (PDF and Word). Word docs instantiate a Java applet, and we’ve had some success rewriting the requests there, but it’s the Adobe file transfer / view that has been the most confounding. The real puzzler is that IBM provides an Apache-based load-balancer with a WebSphere plugin that works really well in hiding the odd port numbers behind standard 80/443. Unfortunately, it’s poorly documented (if at all), so I’m not sure there will be any opportunity to reverse-engineer it and map it to LTM. So, if anyone has any direct experience with WebSphere, or more specifically the IBM SCORE application and can pass on any insights, it would be appreciated."

hmmmm, I think we might be able to do something here...

The Apache Plugin

The "Apache webserver plugin" used by WebSphere is an XML file that defines the server clusters, the services they provide, and the URIs which should be forwarded to each cluster. The items in the plugin file that interest us are the UriGroup, ServerCluster and Route definitions. UriGroup statements group selected URIs together so that Route statements may be used to direct requests to a specific ServerCluster based on the URIs requested.
URI Groups

Since the functionality we wish to replace is the determination of which service will receive requests for specific URIs, we will start with the definition of the groups of URIs -- the "UriGroup" definitions in the XML file: <UriGroup Name="default_host_WebSphere_Portal_URIs"> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/wps/PA_1_0_6D/*"/> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/wps/PA_1_0_6E/*"/> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/wps/PA_1_0_6C/*"/> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/wps/*"/> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/wsrp/*"/> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/wps/content/*"/> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/wps/pdm/*"/> ... </UriGroup> ... <UriGroup Name="default_host_Server_Cluster_URIs"> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/snoop/*"/> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/hello"/> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/hitcount"/> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="*.jsp"/> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="*.jsv"/> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="*.jsw"/> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/j_security_check"/> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/ibm_security_logout"/> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/servlet/*"/> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/ivt/*"/> ... </UriGroup> The UriGroup definition contains URI strings with glob-style pattern matching (which will come in handy later).
All URIs within each group are intended to use the same ServerCluster.

Server Clusters

Four application services are defined by the named ServerCluster definitions. Each physical node has two different services, and each service has two ports (one for http and one for https). Looking at the bolded items in the XML file snippet below, you can see the service WebSphere_Portal is defined to run on ports 9081 (http) and 9444 (https) on server1.domain.com and server2.domain.com, and the service Server_Cluster is defined on ports 9080 (http) and 9443 (https) on both nodes: <ServerCluster CloneSeparatorChange="false" LoadBalance="Round Robin" Name="WebSphere_Portal"... <Server CloneID="12xx2868r" ConnectTimeout="0" ExtendedHandshake="false" LoadBalanceWeight="2" ... <Transport Hostname="server1.domain.com" Port="9081" Protocol="http"/> <Transport Hostname="server1.domain.com" Port="9444" Protocol="https"> <Property Name="keyring" Value="D:\IBM\WebSphere\AppServer\etc\plugin-key.kdb"/> <Property Name="stashfile" Value="D:\IBM\WebSphere\AppServer\etc\plugin-key.sth"/> </Transport> </Server> <Server CloneID="12vxx4xx3" ConnectTimeout="0" ExtendedHandshake="false" LoadBalanceWeight="2" ... <Transport Hostname="server2.domain.com" Port="9081" Protocol="http"/> <Transport Hostname="server2.domain.com" Port="9444" Protocol="https"> <Property Name="keyring" Value="D:\IBM\WebSphere\DM\etc\plugin-key.kdb"/> <Property Name="stashfile" Value="D:\IBM\WebSphere\DM\etc\plugin-key.sth"/> </Transport> </Server> <PrimaryServers> <Server Name="WebSphere_Portal_1"/> <Server Name="WebSphere_Portal_2"/> </PrimaryServers> </ServerCluster> <ServerCluster CloneSeparatorChange="false" LoadBalance="Round Robin" Name="Server_Cluster" ... <Server ConnectTimeout="0" ExtendedHandshake="false" MaxConnections="-1" Name="server01" ...
<Transport Hostname="server1.domain.com" Port="9080" Protocol="http"/> <Transport Hostname="server1.domain.com" Port="9443" Protocol="https"> <Property Name="keyring" Value="D:\IBM\WebSphere\DM\etc\plugin-key.kdb"/> <Property Name="stashfile" Value="D:\IBM\WebSphere\DM\etc\plugin-key.sth"/> </Transport> </Server> <Server ConnectTimeout="0" ExtendedHandshake="false" MaxConnections="-1" Name="server2" ... <Transport Hostname="server2.domain.com" Port="9080" Protocol="http"/> <Transport Hostname="server2.domain.com" Port="9443" Protocol="https"> <Property Name="keyring" Value="D:\IBM\WebSphere\DM\etc\plugin-key.kdb"/> <Property Name="stashfile" Value="D:\IBM\WebSphere\DM\etc\plugin-key.sth"/> </Transport> </Server> <PrimaryServers> <Server Name="Server_Cluster_1"/> <Server Name="Server_Cluster_2"/> </PrimaryServers> </ServerCluster> The physical nodes (Transport definitions) are added as FQDNs (server1.domain.com and server2.domain.com), so you will need to resolve names to the actual IP addresses to create your server pools.

Route Statements

Route statements correlate a UriGroup with the corresponding ServerCluster: <Route ServerCluster="Server_Cluster" UriGroup="default_host_Server_Cluster_URIs" VirtualHostGroup="default_host"/> ... <Route ServerCluster="WebSphere_Portal" UriGroup="default_host_WebSphere_Portal_URIs" VirtualHostGroup="default_host"/> In this case, any request for a URI in the group default_host_Server_Cluster_URIs will be routed to the Server_Cluster pool, and requests for those URIs in the group default_host_WebSphere_Portal_URIs will be routed to the WebSphere_Portal pool.
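Since the UriGroup and Route statements drive everything we build on the LTM side, it can help to pull the mappings out of the plugin file programmatically before writing any configuration. A minimal sketch using Python's standard-library XML parser (the sample document below is a simplified stand-in shaped like the snippets above; element and attribute names match the plugin file format):

```python
import xml.etree.ElementTree as ET

PLUGIN_XML = """
<Config>
  <UriGroup Name="default_host_WebSphere_Portal_URIs">
    <Uri AffinityCookie="JSESSIONID" Name="/wps/*"/>
    <Uri AffinityCookie="JSESSIONID" Name="/wsrp/*"/>
  </UriGroup>
  <UriGroup Name="default_host_Server_Cluster_URIs">
    <Uri AffinityCookie="JSESSIONID" Name="/snoop/*"/>
    <Uri AffinityCookie="JSESSIONID" Name="*.jsp"/>
  </UriGroup>
  <Route ServerCluster="WebSphere_Portal"
         UriGroup="default_host_WebSphere_Portal_URIs"/>
  <Route ServerCluster="Server_Cluster"
         UriGroup="default_host_Server_Cluster_URIs"/>
</Config>
"""

def route_map(xml_text):
    """Return {server_cluster: [uri_patterns]} from a plugin XML document."""
    root = ET.fromstring(xml_text)
    # Collect each UriGroup's patterns by group name.
    groups = {g.get("Name"): [u.get("Name") for u in g.findall("Uri")]
              for g in root.iter("UriGroup")}
    # Follow each Route statement from its UriGroup to its ServerCluster.
    return {r.get("ServerCluster"): groups[r.get("UriGroup")]
            for r in root.iter("Route")}

print(route_map(PLUGIN_XML))
# → {'WebSphere_Portal': ['/wps/*', '/wsrp/*'], 'Server_Cluster': ['/snoop/*', '*.jsp']}
```

A listing like this is exactly what gets transcribed into the iRule's switch statement below, so extracting it mechanically helps avoid missed or mistyped URI patterns.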
The LTM Configuration

Now that we have a better understanding of what the plugin XML file defines, we can build the corresponding LTM configuration: server pools, the iRule that selects them, persistence, SSL and HTTP profiles, and the virtual servers that tie them all together to accept HTTP and HTTPS requests.

Pools

Look up the FQDNs provided in the XML file to create the required server pools, with pool members on the indicated IP addresses and ports. In most cases, HTTPS traffic will be decrypted at LTM and forwarded to the servers over HTTP, so only the two HTTP pools will be required. Assuming the hostnames server1.domain.com and server2.domain.com resolve to 192.168.100.1 and 192.168.100.2, we would create the following pools: pool Server_Cluster_http { member 192.168.100.1:9080 member 192.168.100.2:9080 } pool WebSphere_Portal_http { member 192.168.100.1:9081 member 192.168.100.2:9081 } If traffic will be re-encrypted, create the HTTPS pools as well: pool Server_Cluster_https { member 192.168.100.1:9443 member 192.168.100.2:9443 } pool WebSphere_Portal_https { member 192.168.100.1:9444 member 192.168.100.2:9444 } Examine the Server definitions in the XML file for other pool member settings that might be relevant, such as ratio (LoadBalanceWeight in the XML file) and connection limits.

SSL Profile

LTM must decrypt HTTPS requests to manage them under this configuration. You can either offload SSL to LTM completely, or decrypt and re-encrypt HTTPS requests. In either case, create a clientssl profile containing a certificate and key pair for the virtual server hostname. If you are offloading SSL (HTTPS traffic will be decrypted at LTM and forwarded to the servers over HTTP), that's all you need for SSL. If instead you will be re-encrypting, create a serverssl profile to handle the re-encryption task.
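As a sketch of how the pools and client-side SSL profile might be created from the tmsh command line rather than the GUI (object names follow the article; the certificate and key filenames are illustrative and assume the cert/key are already installed on the system):

```
# HTTP pools, one per ServerCluster:
tmsh create ltm pool WebSphere_Portal_http members add { 192.168.100.1:9081 192.168.100.2:9081 }
tmsh create ltm pool Server_Cluster_http members add { 192.168.100.1:9080 192.168.100.2:9080 }

# Client-side SSL profile referencing an installed certificate and key
# for the virtual server hostname:
tmsh create ltm profile client-ssl websphere_clientssl cert www.example.com.crt key www.example.com.key
```

If you are re-encrypting to the servers, the two HTTPS pools (ports 9443 and 9444) and a serverssl profile would be created the same way.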
HTTP Profile

If you are offloading SSL, create a custom HTTP profile with the "Rewrite Redirects" option set to "All", allowing the system to rewrite any self-referencing server-set redirects to the proper protocol scheme.

Persistence

Default cookie persistence is the simplest option you can choose here. Noting the references in the XML file to the AffinityCookie named JSESSIONID, you can alternatively enable JSESSIONID persistence with another simple iRule found in the DevCentral codeshare: JSESSIONID Persistence iRule

We will use a switch statement in an iRule to replicate the actions that the Route statements define for the Apache webserver. The switch statement will contain the mappings of the UriGroup definitions to the pools defined in the corresponding Route statements. Using a switch statement instead of a Data Group List provides the same capability for partial glob-style URI matching as that used in the UriGroup definitions. So consider the following UriGroup and Route definitions: <UriGroup Name="default_host_WebSphere_Portal_URIs"> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/wps/*"/> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/wsrp/*"/> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/wps/content/*"/> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/wps/pdm/*"/>... <Route ServerCluster="WebSphere_Portal" UriGroup="default_host_WebSphere_Portal_URIs" VirtualHostGroup="default_host"/> ...
<UriGroup Name="default_host_Server_Cluster_URIs"> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/snoop/*"/> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/hello"/> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/hitcount"/> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="*.jsp"/> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="*.jsv"/> <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="*.jsw"/> ... <Route ServerCluster="Server_Cluster" UriGroup="default_host_Server_Cluster_URIs" VirtualHostGroup="default_host"/> Here is an iRule that replicates this definition to handle HTTP requests, mapping all of the URI strings for HTTP requests, including the embedded wildcards, to the corresponding HTTP pool: when HTTP_REQUEST { switch -glob [string tolower [HTTP::uri]] { "/wsrp/*" - "/wps/content/*" - "/wps/pdm/*" - "/wps/*" { pool WebSphere_Portal_http } "/snoop/*" - "/hello" - "/hitcount" - "*.jsp" - "*.jsv" - "*.jsw" { pool Server_Cluster_http } } } The "-" after a URI means to execute the next defined script body. The blank lines between groups are added for readability, and are not required. Adding a new URI is as simple as duplicating a line in the appropriate group and changing the URI string. Since the switch command will fall out on the first match, more specific matches must be listed before more general ones matching the same patterns with different script bodies, so thorough testing and some experimentation may be required if you have different patterns that match in both groups. (For instance if you had the URI /wps/randompath/myscript.jsp, it would match both /wps/* and *.jsp, but would be sent to WebSphere_Portal_http since it matched first.) If LTM is offloading encryption, the iRule above would work for both HTTP and HTTPS requests, since the decision is based only on the URI. 
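The first-match behavior described above can be illustrated outside of Tcl. A small Python model (using fnmatch for the same glob-style matching; the pattern list and pool names are reduced to two entries for illustration) shows why pattern order matters:

```python
from fnmatch import fnmatch

# Patterns in the same order as the iRule's switch body;
# like switch -glob, the first matching pattern wins.
RULES = [
    ("/wps/*", "WebSphere_Portal_http"),
    ("*.jsp",  "Server_Cluster_http"),
]

def select_pool(uri):
    """Return the pool for the first pattern that matches the URI."""
    for pattern, pool in RULES:
        if fnmatch(uri.lower(), pattern):
            return pool
    return None

# /wps/randompath/myscript.jsp matches BOTH patterns,
# but the first listed wins:
print(select_pool("/wps/randompath/myscript.jsp"))  # WebSphere_Portal_http
print(select_pool("/store/page.jsp"))               # Server_Cluster_http
```

Reordering RULES would silently change which pool receives such overlapping URIs, which is exactly why the iRule's pattern ordering deserves deliberate testing.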
If LTM is re-encrypting HTTPS requests to the backend servers, we will need a way to send HTTP requests to the HTTP pools and HTTPS requests to the corresponding HTTPS pools after re-encrypting. The iRule above can be enhanced to check the destination port of the request to see whether it was HTTP or HTTPS, then select the appropriate pool based on URI and protocol scheme: when HTTP_REQUEST { switch -glob [string tolower [HTTP::uri]] { "/wsrp/*" - "/wps/content/*" - "/wps/pdm/*" - "/wps/*" { if { [TCP::local_port] == 80 } { pool WebSphere_Portal_http } else { pool WebSphere_Portal_https } } "/snoop/*" - "/hello" - "/hitcount" - "*.jsp" - "*.jsv" - "*.jsw" { if { [TCP::local_port] == 80 } { pool Server_Cluster_http } else { pool Server_Cluster_https } } } } Alternatively, you could duplicate the HTTP iRule for the HTTPS virtual server and simply replace the pool selections with those appropriate for HTTPS. The resulting iRule is slightly more efficient, since the destination port test is not required, but it does require maintaining two separate versions of essentially the same iRule.

Virtual Servers

Finally, define a virtual server for HTTP on port 80 and another for HTTPS on port 443. To each, apply the persistence profile and the appropriate routing iRule. To the HTTPS virtual, also apply the clientssl profile and the custom HTTP profile. (Do NOT apply either to the HTTP virtual server or traffic will not flow as expected.) In the ServerCluster definition, we see the load balancing method is Round Robin, so we will choose that method here. Examine the ServerCluster definition for other virtual server settings that might be relevant. Once you have associated all the objects with the virtual server, you are ready to test the application without the Apache webserver plugin.

[Podcast: 20080729-ReplacingWebSphereApachePlugin.mp3]

Long Distance Live Partition Mobility – A tale of collaboration
F5 Networks and IBM continue their long tradition of collaboration with the latest supported solution: Long Distance Live Partition Mobility. F5 worked closely with IBM to further perfect IBM's Virtual I/O technology to better support long distance mobility, proving again that when customers do business with F5 or IBM, they're getting a wealth of value-added benefits from the partnership.

WHAT IS IT?

Partition Mobility is IBM's PowerVM capability that allows for the transfer of active and inactive partitions from one Virtual I/O Server to another. This solution has been offered since POWER6 technology-based systems, and in our testing, we were able to move a running machine between two data centers separated by a simulated 1000 kilometers. The details of how we achieved this, and a bit more on the solution, are below, after the diagram.

SOLUTION OVERVIEW

The basics of the solution are that during active migrations, a running partition is moved from a primary LPAR (pictured above on the left) to a failover LPAR (pictured above on the right). Applications can continue to handle their normal workloads during this process. The rest of the picture is comprised of these pieces: the IBM Hardware Management Console (HMC – pictured above at top left), the IBM Integrated Virtualization Manager (IVM – not pictured), the shared storage (pictured in both data centers), and the F5 BIG-IP technology that enables this – specifically, the F5 BIG-IP WAN Optimization Module to enable, secure and accelerate the data transfers; F5 BIG-IP EtherIP, to keep active client sessions connected during transfers; and finally F5 Global Traffic Manager (GTM – not pictured, and optional) to direct incoming traffic intelligently during failover events.
All of the details about IBM's Partition Mobility feature can be found here: http://www.redbooks.ibm.com/redbooks/pdfs/sg247460.pdf Recommended reading from F5 about setting up these environments can be found here: http://www.f5.com/pdf/deployment-guides/f5-vmotion-flexcache-dg.pdf and http://www.f5.com/pdf/white-papers/cloud-vmotion-f5-wp.pdf

We've built this solution in the lab, but a deployment guide is still pending, so in the meantime I hope that my VMotion deployment guide and the white paper on cloud migration will fill in any questions you have about the nuts and bolts of the deployment. The solutions are very similar while, of course, the underlying technologies are unique to each partner. You can of course email me with any questions you have as well, at n dot moshiri at f5.com

WHAT IS REQUIRED?

The basics are as follows:
- AIX 7.1 is recommended
- VIOS version 2.2.2.0 (released August 2012) is recommended
- Storage with connectivity to both data centers (or tightly coupled replication)
- A latency (and distance) between the two data centers that can support the nature of the application running in the infrastructure
- And of course, network connectivity between the two data centers

I will throw in a quick word about VIOS release 2.2.2.0. During initial testing we discovered that IBM's mobility manager picked an arbitrary TCP port during migration events. For internal migrations this would pose little problem; however, in order to secure, optimize and allow transmission through firewalls in the long distance scenario, arbitrary ports would simply not do. IBM stepped up and delivered: with version 2.2.2.0, a user-selectable port range allows migration events to happen in a much more controlled manner on the network. Bottom line, migration traffic between the two data centers needs to be secured and accelerated, and client traffic needs to stay up and know where to go.
BIG-IP provides all of this functionality through Local Traffic Manager (LTM), WAN Optimization Module (WOM) and Global Traffic Manager (GTM). This can be an ideal solution for certain use cases. When examining these architectures, analyze:
- Network connectivity between the data centers
- Shared storage between the data centers
- The workloads on the partitions and their memory footprints

Every Partition Mobility architecture will be different. Reach out to me or your F5 FSE, and definitely plan on several rounds of architectural review. F5 and IBM will be there to back you up.

BIG-IP WAN Optimization Module (WOM) and IBM DB2 Replication
In this post I will share some of the configuration details of accelerating DB2 replication using two popular replication technologies: SQL Replication (optionally with MQ) and high availability disaster recovery (HADR) replication. This work started when I had the great pleasure of working with DB2 for the last several quarters. I explored the latest features in DB2 version 9.7 and version 10, and through the IBM Information On Demand conference I was able to vet and validate my ideas with the great technical resources available there. I would like to thank Martin Schlegel and Mohamed El-Bishbeashy for their great training to validate my independent research.

F5 now has solutions for the most common types of DB2 replication, SQL Replication and HADR, enabling faster replication over longer distances, maximizing bandwidth, and allowing connections with more latency to be used.

HADR or SQL Replication?

The choice between HADR and SQL Replication is one that will require a lot of investigation on the part of any organization. While HADR is easier to set up and maintain, SQL Replication provides some excellent benefits. In either case, my testing showed that BIG-IP WOM can bring benefits to either solution.

Why use WAN Optimization technology?

Security
* Encrypt: Hardware-based encryption secures data transfers if they are not already encrypted (HADR) and offloads CPU-intensive tasks from servers if they are already encrypted (SQL Replication).

Bandwidth
* Compression: Hardware-based compression reduces the amount of bandwidth needed and effectively speeds transfers.
* Deduplication: Data dedup reduces the amount of bandwidth needed and effectively speeds transfers.
Comparison of HADR versus SQL with MQ Replication

Below are some of the comparison points between HADR and SQL Q Replication; you can read more about the differences through various IBM Redbooks found on IBM.com/db2.

Feature | HADR | SQL Q Replication
Scope of replication | Entire DB2 database | Tables
Data propagation method | Log shipping | Capture/Apply tables
Synchronous? | Yes | No
Asynchronous? | Yes | Yes
Automatic client routing to standby? | Yes | Yes
Operating systems | Linux, Unix, Windows | Linux, Unix, Windows, z/OS
Applications read from the standby? | No | Yes
Applications write to the standby? | No | Yes
SQL DDL replicated? | Yes | No
Hardware supported | Hardware, OS, and version of DB2 must be identical | Hardware, OS, and version of DB2 may be different
Tools for monitoring? | Yes | Yes
Network compression or encryption? | No | Yes
Partitioned DB support? | No | Yes

Configuration Basics

DB2 Configuration for HADR

Without sounding too biased, purely from an implementation standpoint I felt that HADR was by far the simpler solution to set up and maintain.

Prerequisites:
- Clock synchronization
- Trust between hosts
- Route between hosts
- DNS or /etc/hosts configuration
- Identical DB2 instance users and DB2 fenced users, with identical UID and GID on both hosts
- Identical home directories
- Identical port number and name
- Automatic instance start must be turned off

Configuration: The actual configuration for HADR is to modify the database configuration to set the proper parameters. Just for reference, these parameters include: HADR_LOCAL_HOST, HADR_REMOTE_HOST, HADR_LOCAL_SVC, HADR_REMOTE_SVC, HADR_REMOTE_INST, HADR_SYNCMODE, HADR_PEER_WINDOW, HADR_TIMEOUT.

You can read more about HADR through the Redbook here: http://www.redbooks.ibm.com/abstracts/sg247363.html

DB2 Configuration for SQL Q-Replication

Configuration for SQL Replication, optionally with MQ, has two parts. First, the MQ setup should be completed if MQ is utilized.
The second part, creation of Capture and Apply tables, is universal whether Q Replication is used or not; only the BIG-IP configuration would change if Q Replication is utilized.

MQ setup:
- Define WebSphere MQ queue managers
- Define WebSphere MQ channels
- Define WebSphere MQ local and remote queues

Configuration of Capture and Apply tables:
- Create source and target control tables
- Enable both databases for replication
- Create replication queue maps if MQ replication is being utilized
- Create Q subscriptions if MQ replication is being utilized

You can read more about SQL Replication here: http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.ii.doc/start/cgpch200.htm

BIG-IP Configuration

BIG-IP configuration for the WOM module is straightforward whether HADR or SQL Replication is being used. In the case of SQL Replication with MQ, the configuration is different, as compression and encryption should be turned off on MQ first for maximum results. This is a scenario I have not yet tested, so for this configuration I am focusing directly on SQL Replication and HADR replication.

In my tests I enabled both data deduplication and compression, testing with two BIG-IPs over a simulated WAN using a LanForge virtual appliance. I looked at bandwidths from 45 Mbps with 100 ms of latency up to 622 Mbps with 20 ms of latency. My dataset was large blobs of text data. In the diagram at the start of the article you can see that this is a symmetric solution requiring BIG-IPs on either end, in each data center. A license for the WOM module is required.

BIG-IP prerequisites:
- Symmetric deployment requires BIG-IPs in each data center
- WAN Optimization Module (WOM) license

BIG-IP configuration: After completing the initial configuration of WOM, the only remaining step is to create an optimized application entry for DB2. Your database port, along with the DB2 control port, should be optimized. In my case, as below, that would be port 50,000 and port 523.
I selected memory based dedup and saw the best results in this mode in my tests. For detailed information about basic BIG-IP WOM configuration, see the following chapter of BIG-IP Documentation: http://support.f5.com/kb/en-us/products/wan_optimization/manuals/product/wom_config_11_0_0/1.html (Free login account may be required).

IBM Rational AppScan
In my last post, I introduced my role as Solution Engineer for our IBM partnership and the many exciting solutions we have coming out of it. Today I'm going to briefly cover one of our latest releases, the IBM Rational AppScan parser.

AppScan

IBM's Rational AppScan implements the latest scanning technology to test your web applications for vulnerabilities. I've run this scanner many times, and the complexity and depth of its scans are mind-boggling. There are something like 30,000 tests that it can run in comprehensive mode, looking for all types of attacks against a website. When launching a new application or reviewing your security on an existing site, an investment like Rational AppScan may save your entire organization enormous amounts of pain and expense. So how does AppScan work? You simply point it at your website and go. During a recent test, I tested a sample e-commerce site (designed to have flaws) and found over 129 problems, 37 of them critical exploits such as SQL injection and cross-site scripting. The beautiful thing with AppScan is that you see exactly where the exploit took place, how to repeat it, and how to mitigate it. It's an amazing tool and you should definitely check out the trial. Once you have your scan, the next step is to fix the issues. In the example above, the 37 vulnerabilities might take days or weeks to solve. And that doesn't even address the four dozen other medium- and low-priority issues. So how do you help speed this along? This is where BIG-IP ASM enters the picture. As of version 11.1, our IBM AppScan integration allows you to export your reports from AppScan, import them into ASM, and immediately remediate the critical problems. In my test, I was able to remediate 21 out of the 37 critical vulnerabilities, leaving just a small handful to be worked on by the developers.

F5 Friday: Applications aren't protocols. They're Opportunities.
Applications are as integral to F5 technologies as they are to your business. An old adage holds that an individual can be judged by the company he keeps. If that holds true for organizations, then F5 would do well to be judged by the vast array of individual contributors, partners, and customers in its ecosystem. Its long history of partnering with companies like Microsoft, IBM, HP, Dell, VMware, Oracle, and SAP, together with its astounding community of over 160,000 engineers, administrators, and developers, speaks volumes about its commitment to and ability to develop joint and custom solutions. F5 is committed to delivering applications no matter where they might reside or what architecture they might be using. Because of its full proxy architecture, F5's ADC platform is able to intercept, inspect, and interact with applications at every layer of the network. That means tuning TCP stacks for mobile apps, protecting web applications from malicious code whether they're talking JSON or XML, and optimizing delivery via HTTP (or HTTP 2.0 or SPDY) by understanding the myriad types of content that make up a web application: CSS, images, JavaScript, and HTML. But being application-driven goes beyond delivery optimization and must cover the broad spectrum of technologies needed not only to deliver an app to a consumer or employee, but also to manage its availability, scale, and security. Every application requires a supporting cast of services to meet a specific set of business and user expectations, such as logging, monitoring, and failover. Over the 18 years in which F5 has been delivering applications, it has developed technologies specifically geared to making sure these supporting services are driven by applications, imbuing each of them with the application awareness and intelligence necessary to efficiently scale, secure, and keep them available.
With the increasing adoption of hybrid cloud architectures and the need to operationally scale the data center, it is important to consider the depth and breadth to which ADC automation and orchestration support an application focus. Whether looking at APIs or management capabilities, an ADC should provide the means by which the services applications need can be holistically provisioned and managed from the perspective of the application, not the individual services. Technology that is application-driven, giving app owners and administrators the ability to programmatically define the provisioning and management of all the application services needed to deliver the application, is critical moving forward to ensure success. F5 iApps and F5 BIG-IQ Cloud do just that, enabling app owners and operations to rapidly provision services that improve the security, availability, and performance of the applications that are the future of the business. That programmability is important, especially as it relates to applications: in our recent survey (results forthcoming), a plurality of respondents indicated application templates are "somewhat or very important" to the provisioning of their applications, along with other forms of programmability associated with software-defined architectures, including cloud computing. Applications increasingly represent opportunity, whether it's to improve productivity or increase profit. Capabilities that improve the success rate of those applications are imperative and require a deeper understanding of an application and its unique delivery needs than a protocol and a port. F5 not only partners with application providers, it encapsulates the expertise and knowledge of how best to deliver those applications in its technologies and offers that same capability to each and every organization to tailor the delivery of their applications to meet and exceed security, reliability, and performance goals.
Because applications aren't just a set of protocols and ports, they're opportunities. And how you respond to opportunity is as important as opening the door in the first place.

Deploying Lotus iNotes with BIG-IP LTM
I've become a fan of Lotus Notes over the past year. I did not create this deployment guide; it predates my time working with IBM at F5 Networks. But we have a number of customers rolling out BIG-IP with Lotus Notes and using iNotes (the web-based client), so I've had to learn the solution, and that has meant becoming intimately familiar with Lotus. I have to report that I like Lotus Notes on many levels, from an admin perspective up to a user level. I am primarily a UNIX/Linux person, but I have spent some time with Exchange as an admin, and Exchange/Outlook is my "daily driver" from an end-user perspective. Still, if I had to build a mail system today, I would either be using the current version of Exchange or something like Postfix (a million props to Wietse Venema and IBM Research for saving us from Sendmail in the 1990s) or Zimbra (which takes the open source components and adds great calendar functionality). Now, having used Lotus for a while, Lotus gets to join the list of my top enterprise-level mail systems. So let's take a look at a couple of things: first, what this deployment guide is about, and second, why I'm excited about Lotus from an administrative perspective.

What is this deployment guide about

The goals of this deployment are to provide high availability, acceleration, and SSL and TCP offload for Lotus iNotes users while cooperating with the Lotus architecture to provide seamless failover and to direct users to the appropriate servers. As Rahul Garg, our Software Engineer colleague at IBM, states in his write-up about this solution: "Lotus Domino stores information about unique NSF files within a particular cluster in the cluster database directory (cldbdir.nsf). . . We created the assistance service, using Notes Formula Language, and placed the key code within a form called ServersLookup in the Lotus iNotes redirector template.
When requested by the load balancer, the ServersLookup form returns one of two HTTP response headers in the format X-Domino-xxxxx, each containing a comma-separated list of servers." The flow goes like this: as requests come into a fully functional environment, the BIG-IP iRule and the Lotus load balancing assistance service work together to determine where the particular user's mailbox resides. The assistance service provides an X-Domino header that the iRule parses and uses to direct the user to the appropriate Lotus Notes server. The main components of the BIG-IP installation are described in the deployment guide on page 3, and I will summarize them here:

- A health monitor that checks the availability of each Lotus Domino server
- A pool containing the Lotus Domino servers as members (a combination of IP and port)
- Profiles for persistence, the HTTP protocol, TCP WAN optimization, TCP LAN optimization, and SSL offload using the clientssl profile
- The iRule (located on page 7 of the deployment guide)
- The virtual server, which ties together all of the above components

Installing and Administering Notes

The installation has all of the things that make a command-line junkie happy. Specifically, a choice of either a command-line or graphical install (I chose command line): And a console-driven installation process that is extremely clear: And when you fire up the server, you actually get an interactive shell to interact with (I'm not that good with it yet): And finally, a Windows-based administration UI allows for all the granular configuration that you may want a GUI for. In fact, the assistance service for achieving load balancing with Lotus is installed using this admin UI: I've included links to all of the relevant documents below.
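The actual iRule is on page 7 of the deployment guide; purely as an illustrative sketch of the pattern it implements, with hypothetical header and pool names of my own (not the guide's), the header-parsing half might look something like this:

```tcl
# Hypothetical sketch -- not the deployment guide's iRule. Assumes the
# assistance service answers with an X-Domino-PrefServers response header
# and the Domino servers live in a pool named lotus_inotes_pool.
when HTTP_RESPONSE {
    if { [HTTP::header exists "X-Domino-PrefServers"] } {
        # Remember the comma-separated server list for this connection
        set domino_servers [split [HTTP::header "X-Domino-PrefServers"] ","]
    }
}
when HTTP_REQUEST {
    if { [info exists domino_servers] && [llength $domino_servers] > 0 } {
        # Direct the user to the first server hosting their mail file
        pool lotus_inotes_pool member [lindex $domino_servers 0] 80
    }
}
```

The real iRule also has to handle the second X-Domino header variant and the case where no preference has been learned yet, in which case default pool load balancing applies.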
I hope this post helps clarify what's going on with Lotus Notes these days, and if you're considering a rollout or working on your current rollout, hopefully these documents will help you properly plan for and execute your high availability deployment.

Your resources for Lotus Notes with BIG-IP LTM:

- F5's deployment guide, with step-by-step instructions on how to install and configure BIG-IP LTM with Notes
- The Notes home page at IBM
- An overview of the solution from Rahul Garg, IBM Software Engineer
- The Lotus Notes Wiki at IBM
- The Lotus Notes RedBook

The days of IP-based management are numbered
The focus of cloud and virtualization discussions today revolves primarily around hypervisors, virtual machines, automation, and network and application network infrastructure; in short, on the dynamic infrastructure necessary to enable a truly dynamic data center. In all the hype we've lost sight of the impact these changes will have on other critical IT systems such as network systems management (NSM) and application performance management (APM). You know their names: IBM, CA, Compuware, BMC, HP. There are likely one or more of their systems monitoring and managing applications and systems in your data center right now. They provide alerts, notifications, and the reports IT managers demand on a monthly or weekly basis to prove IT is meeting the service-level agreements around performance and availability made with business stakeholders. In a truly dynamic data center, one in which resources are shared in order to provide the scalability and capacity needed to meet those service-level agreements, IP addresses are likely to become as mobile as the applications and infrastructure that need them. An application may or may not use the same IP address when it moves from one location to another; an application will use multiple IP addresses when it scales automatically, and those IP addresses may or may not be static. It is already apparent that DHCP will play a larger role in the dynamic data center than it does in a classic data center architecture. DHCP is not often used within the core data center precisely because it is not guaranteed. Oh, you can designate that *this* MAC address is always assigned *that* dynamic IP address, but essentially what you're doing is creating a static map that is, in execution, no different from a statically bound IP address. And in a dynamic data center, the MAC address is not guaranteed, precisely because virtual instances of applications may move from hardware to hardware based on current performance, availability, and capacity needs.
The problem, then, is that NSM and APM are often tied to IP addresses: they use aging standards like SNMP to monitor infrastructure, and agents installed at the OS or application-server layer to collect the performance data that is ultimately used to generate those eye-candy charts and reports for management. These systems can also generate dependency maps, tying applications to servers to network segments and their supporting infrastructure, such that if any one dependent component fails, an administrator is notified. And it's almost all monitored based on IP address. When those IP addresses change, as more and more infrastructure is virtualized and applications become more mobile within the data center, the APM and NSM systems will either fail to recognize the change or, more likely, "cry wolf" with alerts and notifications stating an application is down when in truth it is running just fine. The potential to collect erroneous data is detrimental to the ability of IT to show its value to the business, prove its adherence to agreed-upon service-level agreements, and accurately forecast growth. NSM and APM will be affected by the dynamic data center; they will need to alter the basic premise upon which they have always acted: that every application, network device, and application network infrastructure solution is tied to an IP address. The bonds between IP address and … everything are slowly being dissolved as we move into an architectural model that abstracts the very network foundations upon which data centers have always been built and then ignores them. While in many cases the bond between a device or application and an IP address will remain, it cannot be assumed to be true. The days of IP-based management are numbered, necessarily, and while that sounds ominous it is really a blessing in disguise. Perhaps the "silver lining in the cloud", even. All the monitoring and management that goes on in IT is centered around one thing: the application.
How well is it performing, how much bandwidth does it need and how much is it using, is it available, is it secure, is it running? By forcing the issue of IP address management into the forefront and effectively dismissing the IP address as a primary method of identification, cloud and virtualization have done the IT industry in general a huge favor. The dismissal of the IP address as an integral means by which an application is identified, managed, and monitored means there must be another way to do it. One that provides more information, better information, and increased visibility into the behavior and needs of that application. NSM and APM, like so many other IT systems management and monitoring solutions, will need to adjust the way in which they monitor, correlate, and manage the infrastructure and applications in the new, dynamic data center. They will need to integrate with whatever means is used to orchestrate and manage the ebb and flow of infrastructure and applications within the data center. The coming network and data center revolution - the move to a dynamic infrastructure and a dynamic data center - will have long-term effects on the systems and applications traditionally used to manage and monitor them. We need to start considering the ramifications now in order to be ready before it becomes an urgent need.

Deploying WebSphere MQ with F5 BIG-IP LTM
I am excited to announce today the release of our WebSphere MQ deployment guidance for BIG-IP Local Traffic Manager. The guide is available in the Resources section of F5.com and is also linked directly here. It is designed to provide guidance on how to achieve high availability, SSL offload, and TCP optimization with WebSphere MQ. In my previous post, I described in detail how BIG-IP can help with MQ; specifically, we bring offload and load balancing, and in this deployment guide I provide the step-by-step instructions on how to set up the pools, the monitors, the TCP and SSL optimizations, and finally the virtual server. I look forward to comments about the guide; in fact, I received some immediately from our colleagues at IBM and will probably be adding to the guide based on this great feedback in the coming weeks. I am also looking forward to version 2.0 of the guide, where we will be adding more advanced monitoring, information about transmission queues, and global traffic management. The deployment guide for this solution can be downloaded from the Resources section of F5.com or by clicking here.
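To give a rough sense of the shape of that configuration in tmsh (the object names, addresses, and profile choices below are hypothetical placeholders; follow the deployment guide's actual steps and profile settings rather than this sketch):

```shell
# Pool of WebSphere MQ queue managers with a simple TCP health monitor,
# assuming the default MQ listener port 1414
tmsh create ltm pool mq_pool members add { 10.1.1.10:1414 10.1.1.11:1414 } monitor tcp

# Virtual server tying together the pool, TCP optimization, and SSL offload
tmsh create ltm virtual mq_vs destination 192.0.2.50:1414 ip-protocol tcp \
    pool mq_pool profiles add { tcp-wan-optimized clientssl }
```

The guide itself layers on MQ-specific monitoring and the optimization settings discussed above; this is only the skeleton of pool, monitor, and virtual server.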