Whether you’re playing around with some new NGINX features, the latest F5 release or maybe just some generic servers or systems you always wanted to have a look at, having a lab environment is extremely useful.
I’ve been running lab environments for many years, first purely as a hobby, later as part of my job as a Cloud Architect, and now again for my own business (...and hobby). A good lab environment is all but indispensable. To steal a quote from an old colleague of mine: “Everyone has a lab; some people are lucky enough to have a lab that’s separate from their production environment”. How true… To be honest, the quickest way to get funding for a lab is to tell your boss that you’ll be implementing this exciting new feature you learned about this week into production next week!
For a quick test or demo, the likes of VMware Player or VirtualBox will do just fine on your laptop, but as soon as you want to scale up a bit, maybe get a few systems chained behind one another, you’ll be looking at a dedicated server.
So, one of the first questions you need to answer when it comes to a lab: what hypervisor will you be using? As small as the question seems, it will have a major impact on the VM types you can run, the hardware that you’ll need, and the ease of use going forward. Oh, and the price, of course.
In this article, I’ll argue my case for Proxmox (https://www.proxmox.com/), but just as a disclaimer: I don’t work for or with Proxmox; everything I’m about to say is my own opinion, and it’s likely to offend some of you, but it is the best knowledge/opinion I have and it works for my case. I’d love to know your view on the matter though, so don’t hold back in the comments!
Proxmox is a bare-metal hypervisor; you don’t run it on top of an existing OS, but give it the full server. Load the installer onto a USB stick, boot from it, and off you go! (https://pve.proxmox.com/wiki/Prepare_Installation_Media) It’s no more or less complex than any other OS install, really. Once you’re done, you log in to the browser GUI and almost everything else is configured from there. It’s all open source, so no hassle with licensing, but you can pay for business-level support if you are planning to run live applications on it.
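The wiki link above covers the details, but on Linux the media-preparation step boils down to a single dd invocation. A minimal sketch, with placeholders: ISO is assumed to be your downloaded installer image, and DEV defaults to /dev/null purely so the sketch can be dry-run harmlessly as written – point it at your real stick for the real thing.

```shell
# Write the Proxmox VE installer ISO to a USB stick (see the wiki link above).
# ISO and DEV are placeholders: set DEV to your actual stick (check with lsblk
# first!), e.g. DEV=/dev/sdX. Everything on that device will be overwritten.
ISO="${ISO:-proxmox-ve.iso}"
DEV="${DEV:-/dev/null}"   # harmless default so this sketch can be dry-run as-is
# Stand-in ISO for the dry run only; skip this line once you have the real download.
[ -f "$ISO" ] || dd if=/dev/zero of="$ISO" bs=1M count=1 2>/dev/null
dd if="$ISO" of="$DEV" bs=1M conv=fsync   # GNU dd: flush to the device before exiting
sync
```

After that, boot the server from the stick and follow the installer prompts.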
And what does it run on? I’m sure there must be a list of officially supported hardware somewhere, but in my experience it runs on anything that Linux runs on, and that’s basically everything. This is immediately a great advantage of the system: you don’t need expensive, tailored hardware to run it; just go out and buy the cheapest that you can get away with. Search eBay, Craigslist or whatever slightly iffy second-hand site you fancy. Joking apart, there is nowadays a major market in second-hand servers: equipment that has had its life in a production application and needs to be replaced due to company policies. The equipment is taken over by a recycler who guarantees that all data is wiped; it gets cleaned, tested, and sold again for a fraction of the original cost. In the UK, I’ve been using Bargain Hardware (https://www.bargainhardware.co.uk/) for quite a number of servers over the years. In my current install, the server I bought in 2016 is still humming away nicely.
My current system of 4 servers (96 CPU cores, 512 GB RAM, 26 TB storage) has probably cost me about £2,000 (roughly $2,500) in total. A mere fraction of what the same would cost me new!
For this price, I also don’t care too much about protecting the hardware from its environment – I’m running it in my laundry room to help dry clothes and heat the house, and at this price, I’ve got a backup cluster in my backup datacenter (...my parents’ laundry room...)
So what can it actually do? Ehm, basically anything that you’ll find on all the major hypervisors, plus the kitchen sink. As you’re running plain Linux, but with the Proxmox management shell around it, anything that KVM can do, this can do too. On top of that, there’s memory deduplication at the hypervisor level (Kernel Samepage Merging, or KSM). It achieves roughly the same as ballooning, but because it’s done at the hypervisor level it always works, rather than only when the virtual machine drivers feel like it. If needed, I can often squeeze 25% more memory out of the systems than I actually have!
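Since KSM is a stock Linux kernel feature, you can see how much memory it is saving on a node straight from sysfs. A quick sketch, assuming standard kernel paths; it simply reports zero on machines where KSM is idle or not compiled in:

```shell
# How much RAM is KSM (Kernel Samepage Merging) saving right now?
# pages_sharing counts deduplicated pages; multiply by the page size for bytes.
KSM=/sys/kernel/mm/ksm
if [ -r "$KSM/pages_sharing" ]; then
  pages=$(cat "$KSM/pages_sharing")
else
  pages=0                       # KSM not available in this kernel
fi
page_size=$(getconf PAGESIZE)
saved_mib=$(( pages * page_size / 1048576 ))
echo "KSM is currently saving ${saved_mib} MiB"
```

On a busy Proxmox node running many similar VMs, that number can get surprisingly large – which is exactly where the extra 25% comes from.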
Now, storage: another contentious topic. What system should you go for, what kind of RAID, and how do you get the best flexibility at the lowest cost? For me, the answer turned out to be Ceph (https://ceph.io/). It’s a software-defined storage platform that is natively supported in Proxmox. It lets you throw a bunch of hard drives at the platform without (really) having to care about their quality or status. If one of them fails, meh: there are still more copies available, and the system restores itself.
For me the main benefit of this is that I can buy the cheapest still-functioning hard drives I can find, and as long as they spin up, they’ll work. You also don’t need separate storage enclosures or fast switches, again bringing the overall cost down dramatically. Because the data segments are distributed between the hard drives, the overall speed of the system is not limited to a single drive, but spread out between all of them. As long as you have enough drives dealing with requests (in my case, I normally have about 30 in the system), you can run large numbers of VMs at the same time without real throughput issues – I’m normally running between 10 and 50 at a time. Again, using the cheapest second-hand drives I can find!
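One thing worth keeping in mind when budgeting: with replication, usable space is raw space divided by the number of copies. A back-of-the-envelope sketch, using my cluster’s numbers and Ceph’s common default of 3 replicas (yours may differ):

```shell
# Usable capacity of a replicated Ceph pool: raw space divided by replica count.
raw_gb=26000    # ~26 TB raw across the cluster's ~30 second-hand drives
replicas=3      # copies of every object (a common Ceph default)
usable_gb=$(( raw_gb / replicas ))
echo "~${usable_gb} GB usable out of ${raw_gb} GB raw"
```

So 26 TB raw buys you roughly 8.6 TB of redundant storage – still a bargain at second-hand drive prices, and you can trade safety for space by lowering the replica count on pools you don’t mind losing.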
As mentioned earlier, Proxmox offers business support if you run this as your main hypervisor in live systems (enough companies do!), and if you like them, it would be a great way to support them as well.
Besides that, there is also the community forum, where both community members and Proxmox employees are quite active (https://forum.proxmox.com). For the few issues I’ve had with my systems (...mostly down to me being too eager rather than to their software, to be honest), the forum has always been a good place to turn to.
So, now that a lab environment should no longer be your main concern, stop putting off those F5 exams!