Redesign of my Home Lab: Back to Hardware

While it seems others are increasingly looking to virtualise their home labs, and even host them in the cloud, I’ve begun looking in the opposite direction, moving from a predominantly virtual lab to a hardware one.

Firstly, allow me to introduce to you my current home lab:

Yup, that is indeed a HP ProLiant DL380 G7, wedged face-first into a cupboard at a precarious 80-degree angle, surrounded by an old TP-Link router, the ISP router, and our apartment’s monthly supply of toilet paper. At least it has some flood prevention!

Ignoring the mess, and the fact that a number of eight-legged creatures have made this their new home, it’s a pretty decent spec: 12 cores, 144GB RAM, and 1TB of storage in RAID10. The iLO is a life-saver when you can’t physically press the power button!

Despite its appearance, it’s been an excellent learning environment for me. I have a decent Windows domain running on this thing, about 20 Windows and Linux VMs, including:

  • HA Domain Controllers, Server 2012
  • HA File Servers, Server 2012
  • HA Exchange 2013, Server 2012
  • HA Web Servers, Server 2012
  • WSUS Server, Server 2008
  • Veeam Backup Server, Server 2012
  • PRTG Monitoring, Server 2008
  • phpIPAM IP Address Management, CentOS 7
  • pfSense Firewall, FreeBSD-based
  • General purpose server, Server 2008
  • Squid Proxy, CentOS 7
  • Windows 10 VM
  • Windows 7 VM
  • A number of email security gateways I was recently trialling: Barracuda, FortiMail, Symantec.

All of the above running together barely tickles 50–60% of the server’s resources. Most importantly, it’s quick and responsive to use, and has been a fantastic lab environment for education and work-related testing. The environment has full internet access, NAT and policy through the pfSense VM firewall, scheduled backups to some networked storage, and a fully working Exchange environment receiving mail for my own public domain. Public-facing OWA is even protected by Duo MFA!

I also ran a GNS3 VM on it for virtual networking, although quite unsuccessfully, which I’ll get to in a moment. Needless to say, the server had no problem running large virtual networks, but it was complete overkill for my requirements.

So while this setup was great as a Windows and Linux server environment, separately as a virtual network lab, and separately again as a physical ESXi host for some VMware tinkering, I really wanted to combine all of these elements into one, to cover every aspect of my job as a sysadmin. My dream home lab had layers 2–7 fully virtualised: VMs running on nested ESXi hosts, connected by virtualised switches, connected by virtualised routers, connecting out to the internet, and even with some WAN/VPN connectivity to virtualised remote offices running within the same box! Windows, Linux, Cisco, VMware and so on, all ticked off in one virtualised environment.

This unfortunately proved very difficult to achieve, for a number of reasons, and came with a lot of unforeseen limitations.

Firstly, as fantastic as GNS3 is, I found it quite buggy, frustrating to set up and use, and constantly throwing problems at me. Eventually it became a hindrance to learning, as I found myself wrestling with GNS3 and its errors more than with the technology I was trying to learn.

I came quite close to the “dream” lab I described: I had the majority of the above VMs inside GNS3, connected via a few layers of Cisco IOSvL2 QEMU VMs, through a couple of Cisco 7200 Dynamips routers, and out through my home router to the internet. Unfortunately, bandwidth and latency between the VMs were appalling: anything other than pings suffered huge latency, dropouts, and kilobit-level bandwidth. It was unusable as a learning environment.

I haven’t been able to confirm it fully, but I recall reading somewhere that the IOSvL2 images have a built-in bandwidth limit to discourage their use in a production environment, which would explain what I was experiencing. At this stage I’d had enough of GNS3, and started to turn to the idea of a physical lab: hardware servers, hardware switches and routers, hardware cables! Even if I had achieved a fully virtualised lab, it would still have convoluted and complicated even the simplest of issues, and it would have been difficult to accurately re-create the problems I face at work.

So, a hardware lab is forming. I’m selling the DL380, and have already purchased two Dell R610s. I’ve bought two because I want to experiment with vCenter failover/HA/NIC teaming, something that would have been difficult in a virtual lab. I’ve also inherited some unused gear from work:

  • 2 x Cisco 1841
  • 2 x Nortel BayStack 5520
  • Cisco Catalyst 1950
  • Juniper SSG 140
  • Synology NAS
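As a rough idea of what the NIC teaming experiment involves on the ESXi side, here is a minimal sketch using esxcli on each R610. The vSwitch and vmnic names are assumptions for illustration; adjust them to whatever your host actually shows.

```shell
# Add a second physical uplink to the standard vSwitch
# (vSwitch0/vmnic1 are illustrative names, not necessarily yours)
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0

# Mark both uplinks as active so traffic is shared and either NIC can fail
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 --active-uplinks=vmnic0,vmnic1

# Verify the resulting teaming/failover policy
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
```

With two hosts configured like this and shared storage on the NAS, vCenter HA and failover testing becomes a matter of pulling cables and watching what breaks, which is exactly the point of the exercise.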

This is already a lot closer to what we run in my work environment (Cisco/Extreme/Avaya/Nortel), and I hope to expand it soon to replicate that environment even more closely, with some L3 Extreme switches and a Palo Alto firewall. Even just migrating the VMs to the R610s, setting up VLANs, and getting basic internet connectivity for the lab threw up plenty of issues to overcome, but issues that are completely relevant to what I face day to day at work, unlike those from GNS3, or any heavily virtualised lab for that matter. And it really is great to have layer 1 in your home lab.
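For anyone retracing the same first steps, the VLAN side of that setup boils down to something like the following on the Cisco gear. This is a hedged sketch only: the VLAN ID, interface names, and addressing are illustrative, not my actual config.

```
! On the switch: create a lab VLAN and assign an access port
vlan 10
 name LAB-SERVERS
interface FastEthernet0/1
 switchport mode access
 switchport access vlan 10
! Trunk up towards the router for inter-VLAN routing and internet access
interface FastEthernet0/24
 switchport mode trunk

! On the 1841: router-on-a-stick subinterface for VLAN 10
interface FastEthernet0/0.10
 encapsulation dot1Q 10
 ip address 192.168.10.1 255.255.255.0
```

Getting exactly this working end to end, tagging, trunking, and a default gateway per VLAN, is where most of the early troubleshooting time went.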

So there you have it: the start of a new home lab. I’ll shortly be buying a 12U rack and patch panel to get things neat and tidy, and once it’s all set up I’ll post the final gear list, topology, and some electricity bill invoices!
