In the data center of the future, you will need to bring up new instances of an application, have them fully functional without any user intervention, and, when they are no longer needed, have them clean up after themselves and quietly go away. Five years ago this was fantasy talk; two years ago it was coming to the fore; today we can see clearly that such adaptable infrastructure is going to be mandatory for any installation or application with a highly variable rate of throughput.
The drivers for this need for adaptability are varied, but the core ones we have all heard an earful about are cloud, where you are charged for your usage and leaving apps running when they are not needed is tantamount to throwing money away, and highly virtualized environments, where many virtual machines run per physical server and keeping an instance you do not need is tantamount to throwing CPU cycles away. Okay, the parallelism read nicely, but it really IS throwing CPU cycles away, nothing tantamount about it.
The problem is, and has been, that making such a system work means assembling a lot of complex pieces. It is difficult to say "spin up an instance, give it access to all the resources it needs, make it play well with the other instances of the same application, and direct traffic to it" and have it all happen automatically.
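To make that concrete, here is a minimal sketch of the spin-up/tear-down loop described above. All of the names here (`Instance`, `LoadBalancer`, `reconcile`) are hypothetical stand-ins, not any real provisioning API; a production system would be talking to a hypervisor or cloud API and draining connections before deprovisioning.

```python
class Instance:
    """Hypothetical stand-in for a provisioned application instance."""
    _next_id = 0

    def __init__(self):
        Instance._next_id += 1
        self.id = Instance._next_id


class LoadBalancer:
    """Directs traffic by keeping a pool of registered backends."""

    def __init__(self):
        self.pool = []

    def add(self, inst):
        self.pool.append(inst)

    def remove(self, inst):
        self.pool.remove(inst)


def reconcile(lb, demand, capacity_per_instance=100):
    """Scale the pool to match demand; surplus instances go away."""
    needed = -(-demand // capacity_per_instance)  # ceiling division
    while len(lb.pool) < needed:
        lb.add(Instance())          # spin up and register for traffic
    while len(lb.pool) > needed:
        lb.remove(lb.pool[-1])      # drain and deprovision the newest
```

Calling `reconcile(lb, 250)` on an empty pool would bring up three instances; dropping demand to 50 would quietly tear two of them back down.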