Green IT: More is Less

One of the primary tenets of environmentalism is that we ought to minimize waste. Applied to "green" computing initiatives, that tenet usually translates to wasting fewer resources - RAM, CPU, and storage.

A golden rule of IT over the past decade was to not run critical infrastructure past an acceptable point of resource usage, usually somewhere between 60 and 70 percent of CPU utilization. Green IT, however, tells us that we ought to push that limit further and stop wasting those resources. Whether we achieve more efficient use of those resources via virtualization or some other technology is almost beside the point; the real concern is ensuring that application performance and availability are not negatively affected by such an increase in utilization.

As a server's utilization rises and memory is consumed, its performance decreases. So what we want is to push server utilization to the edge of disaster, and then give the server a short break in order to reclaim memory and let utilization drop back to an acceptable level.

Traditional load balancing solutions can't help here; they aren't keyed into server utilization and can't adjust dynamically to conditions in the environment.

To ensure that application performance is maintained and availability continues to meet service level agreements, IT would be well served by an application delivery controller capable of intelligent health monitoring. Such a controller can dynamically adjust the routing of requests based on current conditions on the network and within the server infrastructure, keeping applications available and online.

Basically, when health monitoring indicates that a server is reaching the breaking point, the application delivery controller can immediately begin to direct requests to other servers in order to let the "tired" server take a short break before jumping back into the ring to help.
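That "take a break, then jump back in" behavior can be sketched as a simple hysteresis loop: drain a server when its utilization crosses a high-water mark, and only put it back in rotation once utilization has fallen below a lower one. The sketch below is illustrative only - the class, thresholds, and function names are my own assumptions, not any vendor's API.

```python
# Utilization-aware routing with hysteresis: a server that crosses the
# "drain" threshold stops receiving new requests until its utilization
# falls back below the "ready" level. All names/thresholds are illustrative.

DRAIN_AT = 0.90   # stop sending traffic when CPU utilization exceeds 90%
READY_AT = 0.70   # resume once it drops back below 70%

class Server:
    def __init__(self, name):
        self.name = name
        self.cpu = 0.0        # last reported CPU utilization (0.0 - 1.0)
        self.draining = False

    def report(self, cpu):
        """Health-monitor callback: update state with hysteresis."""
        self.cpu = cpu
        if not self.draining and cpu >= DRAIN_AT:
            self.draining = True      # "tired" server takes a break
        elif self.draining and cpu <= READY_AT:
            self.draining = False     # recovered; back into the ring

def pick_server(pool):
    """Route to the least-utilized server that isn't draining."""
    candidates = [s for s in pool if not s.draining]
    if not candidates:                # everyone is tired; degrade gracefully
        candidates = pool
    return min(candidates, key=lambda s: s.cpu)
```

The two thresholds are the point: a single cutoff would flap a server in and out of rotation as its load hovered near the limit, while the gap between drain and ready gives it time to actually reclaim memory.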

The optimization and acceleration features available in an application delivery controller can help offset any potential degradation in application performance due to increased utilization on the server, and in many cases are likely to improve performance overall - even at peak server utilization.

An application delivery controller can assist in achieving better server efficiency by allowing you to push your utilization higher without worrying over potential downtime or loss of performance.

Imbibing: Mountain Dew

Published Jun 23, 2008
Version 1.0