Homelab Server Hardware

In my last post I talked about a refurbished rackmount server I picked up to replace my old gaming desktop, which had been doing server duty for the last few years. I outgrew that machine, both in hardware and workload, and it was time to prepare for the next few years, starting with hardware.

CPU & RAM

There isn’t much more to say here than what was covered in my last post, nor much more I can add to this server. The only possible upgrade would be the CPUs, however it wouldn’t make much sense to pay to upgrade the processors in this server. Here are the specs:

  • Processors: 2x Intel Xeon X5670 @ 2.93 GHz (12 cores total)
  • Memory: 18 x 8 GB (144 GB) RAM

Storage

Data Volume

The new homelab server shipped with 6 x 2 TB drives, for a total of 12 TB of raw storage. That is more space than I need, but I also need some redundancy. The server came with a PERC H700 RAID adapter, so all I had to do was choose the right configuration for the array. My options were:

  • RAID 0 (Stripe set)
    • Usable capacity: 12 TB
    • Speed gain: 6x read speed gain, 6x write speed gain
    • Tolerance: None
  • RAID 5 (Stripe set with parity)
    • Usable capacity: 10 TB
    • Speed gain: 5x read speed gain, no write speed gain
    • Tolerance: 1 drive failure
  • RAID 10 (Stripe of mirrors, a.k.a. RAID 1+0)
    • Usable capacity: 6 TB
    • Speed gain: 6x read speed gain, 3x write speed gain
    • Tolerance: 1 drive failure
  • RAID 6 (Stripe with double parity)
    • Usable capacity: 8 TB
    • Speed gain: 4x read speed gain, no write speed gain
    • Tolerance: 2 drive failures

Like most things, there are trade-offs with each option; I had to balance space, speed, and fault tolerance. RAID 0 isn’t really an option for me: with no fault tolerance, one dead drive means losing the entire array, and most of the data stored on this volume is critical. It’s pictures, documents, music, and all the other digital junk I have been collecting over the years. It’s also the location for all the TV I record over-the-air with Plex. So, in short, I need the most space possible, decent read/write speeds, and some fault tolerance. I ended up choosing RAID 10. Looking back, RAID 6 would also have been a good option, however I (thought) I needed the write performance when recording multiple shows with Plex. It’s also worth mentioning that even though I have some drive redundancy, it isn’t a backup solution. I do use a backup service to back up my critical data; I will go into detail on that in a later post.
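
If you want to sanity-check the capacity and tolerance numbers in the list above, the math is simple enough to script. Here’s a minimal Python sketch for a 6 x 2 TB array (the speed multipliers above are idealized, so this only covers capacity and failures):

    # Rough RAID capacity math for 6 x 2 TB drives.
    DRIVES = 6
    SIZE_TB = 2

    def raid_summary(drives, size_tb):
        """Usable capacity (TB) and minimum tolerated drive failures per level."""
        return {
            "RAID 0":  (drives * size_tb, 0),         # pure stripe: no redundancy
            "RAID 5":  ((drives - 1) * size_tb, 1),   # one drive's worth of parity
            "RAID 6":  ((drives - 2) * size_tb, 2),   # two drives' worth of parity
            "RAID 10": (drives // 2 * size_tb, 1),    # half the drives hold mirror copies
        }

    for level, (usable, failures) in raid_summary(DRIVES, SIZE_TB).items():
        print(f"{level:7} usable: {usable:>2} TB, survives at least {failures} drive failure(s)")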

So that took care of my “slow” data volume – the long-term, slow-moving, large-file store – but what about fast, performant storage? SSDs are the obvious choice, however I have limited physical space and wiring within the server chassis. I broke the remaining storage down into two parts: host OS and guest OS. Since I will be running a hypervisor, I need some storage for it (not necessarily the fastest) and then fast storage for all of my guest OSes.

Host OS Volume

The host OS I chose was Proxmox (more on this in a future post), but I still needed a drive to install it on. If you recall, my server came with all of its storage slots taken. There was, however, a (near) useless optical drive, and that’s when I got the idea to replace it. I remembered reading about caddies for laptops that let you remove the DVD drive and swap in a carriage that houses a hard drive. So that’s exactly what I did. I picked up this hard drive caddy so I could replace the DVD drive with a solid-state drive, and it worked out perfectly. It was as simple as removing a few screws, popping in my drive, then sliding it back into the server. Worth noting that the optical bay sits on a SATA II controller, so I decided to put an older SSD here. It really just houses the operating system, which does limited reads and writes after booting, so it was a perfect fit.
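
If you’re curious whether the drive in the optical bay really negotiated a SATA II link, Linux exposes that through sysfs. Here’s a minimal sketch; the libata paths below are an assumption about the kernel and may differ on your system:

    # Print the negotiated SATA link speed for each libata link (Linux sysfs).
    import glob

    for path in sorted(glob.glob("/sys/class/ata_link/link*/sata_spd")):
        with open(path) as f:
            speed = f.read().strip()
        if speed and speed != "<unknown>":
            link = path.split("/")[4]      # e.g. "link1"
            print(f"{link}: {speed}")      # expect "3.0 Gbps" on a SATA II port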

Guest OS Volumes

Now that I had the data volume and host OS volume taken care of, it was time to figure out my guest OS volumes. These will house Windows and Linux guest operating systems and need fast random reads and writes. Again, this server doesn’t have any spare drive slots or power connectors, so my options were limited. After doing some research, I realized the server has a few options for PCIe I/O:

  • Two x8 and two x4 PCIe Gen2 slots, or
  • One x16 and two x4 PCIe Gen2 slots

I toyed with the idea of getting drive caddies for traditional SSDs that could pull power and I/O from the PCIe slots, but then decided on something a little different. I stumbled upon these adapters that let you attach an NVMe M.2 drive and use it in a PCIe slot. I figured these would work perfectly: they would give me access to fast NVMe drives without the need for power or cabling, something this server lacks. I picked up two of the adapters along with two 500 GB Crucial NVMe drives. This lets me divide my guest OS workloads between the two drives to limit I/O on each. Although the jobs that run on these drives are critical, the data is not; I don’t back up anything on these drives except for configs and a few databases.
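
Since the whole point of using two drives is to spread out guest I/O, it’s worth checking now and then that the split is actually happening. Here’s a minimal sketch that reads /proc/diskstats on the host; the nvme0n1/nvme1n1 device names are assumptions, so substitute your own:

    # Report cumulative reads/writes per NVMe drive from /proc/diskstats (Linux).
    SECTOR_BYTES = 512  # /proc/diskstats always counts 512-byte sectors

    def disk_io(devices=("nvme0n1", "nvme1n1")):
        stats = {}
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] in devices:
                    stats[fields[2]] = (
                        int(fields[5]) * SECTOR_BYTES / 1e9,   # GB read
                        int(fields[9]) * SECTOR_BYTES / 1e9,   # GB written
                    )
        return stats

    for dev, (read_gb, write_gb) in disk_io().items():
        print(f"{dev}: {read_gb:.1f} GB read, {write_gb:.1f} GB written")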

You can see the NVMe drives below in the upper left, connected to the PCIe riser. You can also see the optical drive replacement in the upper right, though you can’t tell there’s a drive inside!

Peripherals

GPU

If you look closely, you can see that I have a video card tucked into my server. This was one of the biggest challenges for this server and one of the reasons I wanted to upgrade my previous one. Gaming on a server? Not really. Mining Bitcoin? Nope. (I did, however, learn a ton about building a mining rig through all this!) Running an interactive stream bot that encodes video on-the-fly on Twitch? You guessed it! Without going into detail as to why I have a video card in my server, I had to overcome a few challenges to get one in here:

  1. No power pins for a video card
  2. Cannot draw more than 25w on boot
  3. No PCIe 16x slot

This left me with very few options…

Regarding the power, I knew that running external power to the card via adapters or by splicing server wiring was not going to be an option for me. I didn’t want to run the risk of damaging the server or burning my house down. I’m sure people have pulled this off without doing either, but it’s not something I wanted to pursue. I knew I had to find a video card that didn’t need an additional power pin.

Power draw was also going to be an issue, however I assumed that most cards that don’t need power from the power pins wouldn’t draw more than 25w on boot. That was an assumption, and I had to roll the dice on this one.

My next challenge was the PCIe slot. Dell sells a 16x riser, however they are crazy expensive and only turn up on eBay every so often. I didn’t want to sink more money into this server for a part that couldn’t be reused in the future. Also, my GPU load doesn’t require the full 16x bus; I could get by on 8x, which I did on my last server, so I set out to modify the riser card instead. These risers are cheap to replace, and a 16x card can run fine at 8x. I ended up taking a Dremel and grinding down the closed end of the 8x slot. I took it slow and ground off the back of the slot so that the card could still be seated. It’s difficult to explain, but if you grind perpendicular to the slot (not intuitive) you can slowly remove enough plastic without damaging the board or pins. After I removed enough plastic, I was able to seat the card in the slot with the other half of its x16 connector hanging out.

So which card did I go with? I ended up choosing an MSI Nvidia GTX1050 2G OC and it worked out perfectly!

  • It gets all the power it needs from the PCIe slot
  • It doesn’t draw more than 25w of power when it boots
  • After modifying the PCIe riser, it runs great at 8x
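
If you try a riser mod like this, it’s worth confirming the card actually negotiated an x8 link instead of falling back further. Here’s a minimal sketch that walks sysfs looking for display controllers; the paths are Linux-specific and an assumption about your setup:

    # Print negotiated PCIe link width/speed for any display controller (GPU).
    import glob, os

    def read_attr(dev, attr):
        try:
            with open(os.path.join(dev, attr)) as f:
                return f.read().strip()
        except OSError:
            return "n/a"

    for dev in glob.glob("/sys/bus/pci/devices/*"):
        # PCI class 0x03xxxx = display controller
        if not read_attr(dev, "class").startswith("0x03"):
            continue
        print(os.path.basename(dev),
              "link width x" + read_attr(dev, "current_link_width"),
              "at", read_attr(dev, "current_link_speed"))   # expect x8 here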

Overall I am very satisfied with the hardware build. I get lots of processing power and RAM, lots of slow storage, and lots of fast storage too. Adding a GPU was the icing on the cake (although a requirement for my application). This build sets me up perfectly for a hypervisor OS that can run all of my workloads, including video encoding, in one system.

How about you? Have you ever modded the hardware in your PC/server? Ever run into a problem where you had to apply a “creative” solution?
