
My home lab upgrades this 2023

(@life-from-scratch), Eminent Member, 14 posts, joined 11 months ago

Finally coming together! My server rack drawer. At the back I have my 5G modem on the left, a 9-port managed switch (8x 2.5G copper plus one 10G SFP+) in the middle, and a CWWK X86-P5 on the right. Off to the side is the EAP-613 Access Point.

Below the modem is a power distribution block, everything pictured is running directly off my 12V bus.

The X86-P5 has an Intel N100 with 8 GB RAM, 500 GB storage, and 2x Intel I226-V NICs. It's running Proxmox with a pfSense VM and will soon also host my Home Assistant VM.
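For anyone wanting to do something similar, creating the pfSense VM from the Proxmox shell looks roughly like this. The VM ID, storage names, ISO filename, and bridge names are placeholders for illustration, not my exact config:

# create the VM (IDs/names are examples)
qm create 100 --name pfsense --memory 2048 --cores 2 --ostype other
# attach a small disk and the installer ISO
qm set 100 --scsi0 local-lvm:16 --cdrom local:iso/pfSense-installer.iso
# one virtio NIC per bridge: WAN on vmbr0, LAN on vmbr1 (backed by the two I226-V ports)
qm set 100 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1
qm start 100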

And at the front of the drawer I have 3U of room for activities, which will also be powered off the bus via a PicoPSU.

Now I just need some shorter patch cables.

[Photo: PXL_20240107_031853403]
2 Replies
Brandon Lee (@brandon-lee), Admin, 395 posts, joined 14 years ago

@life-from-scratch Awesome. It took some real ingenuity to fit a rack inside a drawer like that. I don't think I have seen that before, very cool! Do you have any problems with heat build-up with the equipment in there? Also, which 2.5 GbE switch are you using? Cool stuff.

(@life-from-scratch), Eminent Member, 14 posts, joined 11 months ago

@brandon-lee The drawer is going over my fridge, which is obviously not great from a heat perspective, but there is airflow coming up the back of the fridge, and I plan to run the fans in the racked case (future server build) backwards, blowing out the front. So in theory it'll draw air in underneath the fridge, up behind it, past all the equipment, and out the front. If that isn't sufficient, I can blow a hole through the wall at the back and put a fan there. I've had all my gear just sitting up there for a while now and haven't seen any issues. So far the whole lab is running at less than 50 W, so it's not generating a ton of heat.

The switch is just the MokerLink one available on Amazon. No real testing as of yet, but logging into the web UI was straightforward and it seems to be doing what a network switch is supposed to do. On my list of projects is to reallocate the 2.5 GbE NICs in my EliteDesk server that used to be for pfSense, to give me a 2.5G link to TrueNAS, and then string a wire to the 2.5 GbE port on my mini desktop and see what kind of speeds I can get.
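When I do get to that speed test, it'll just be a quick iperf3 run between the two boxes, something like this (the hostname is a placeholder):

# on the TrueNAS side (server)
iperf3 -s
# on the mini desktop (client), pointing at the TrueNAS box
iperf3 -c truenas.lan -t 30 -P 4
# a healthy 2.5GbE link should land somewhere around 2.3-2.4 Gbits/sec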

(@alpha754293), Active Member, 5 posts, joined 10 months ago

So I was referred to this topic/post from here:
https://www.youtube.com/channel/UCrxcWtpd1IGHG9RbD_9380A/community?lc=UgwV41sv_SsTQu60wql4AaABAg.9zLDW57PdS69zNQno3z53m&lb=Ugkxm8Q8S0adqfW-kFvpiKHT1JMi04J88mdQ

Here are the pictures of my homelab:

[Photos: IMG_0685, IMG_0686]

Here is a rundown of the specs:

Rack:
18U 39" depth IT and Telecom cabinet from sysracks.com

QNAP TS-453Be 4-bay NAS
4x HGST 3 TB SATA 6 Gbps HDDs in it
That's my iSCSI target as well as the backup target for pictures from my wife's phone. Also the target for Mac Time Machine (because QNAP makes it ridiculously easy). (There's a quick iSCSI client sketch right after this spec list.)
Dual GbE
Celeron J3455 (not the fastest, but it works enough)
Upgraded the RAM I think last year or maybe two years ago from 2 GB to 8 GB (which is the max that thing will take).
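For what it's worth, hooking a Linux client up to that iSCSI target is just the standard open-iscsi dance; the IP and IQN below are placeholders, not my actual target:

# discover targets advertised by the NAS
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
# log in to the target that discovery returned
iscsiadm -m node -T iqn.2004-04.com.qnap:ts-453be:iscsi.example -p 192.168.1.10 --login
# the LUN then shows up as a regular /dev/sdX block device
lsblk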

Netgear GS116 v2 - 16 port GbE switch. Nothing special other than its 7 W idle power consumption (super low).

Mellanox MSB-7890 36-port 100 Gbps Infiniband externally managed switch
(Yes, I run 100 Gbps in the basement of my home.)

(That connects to my main server at the bottom, plus two micro HPC compute nodes, as well as my system that runs my LTO-8 tape backup system.)

NFSoRDMA is AWESOME!

Node-to-node, in an RDMA-aware application, I can hit about 80-ish Gbps (~10 GB/s). Using the Infiniband benchmarking tools, I think I topped out at something like 96.3 Gbps (~12 GB/s). Something like that; I can't remember exactly anymore.
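If memory serves, those benchmark numbers came from the standard perftest tools (ib_write_bw and friends), roughly like this, with the device name just being an example:

# server side
ib_write_bw -d mlx5_0 -a -F
# client side, pointing at the server
ib_write_bw -d mlx5_0 -a -F server-hostname
# -a sweeps message sizes, -F suppresses the CPU frequency governor warning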

QNAP TS-832 (there are actually two of them: you can see the front one in the picture, and the second one is sitting behind it, on top of the server).

They're not used anymore.

Had a mass consolidation project in January 2023 where I originally moved 4 NAS servers down to one (the big one at the bottom). But then the Proxmox server started to get overloaded; more specifically, drive I/O wait times were too long, which was causing some of my Linux VMs to throw "CPU stuck" warning messages, plus inconsistent behaviour/performance running iSCSI off of my Proxmox server (directly on the Debian 11 base install that sits underneath the Proxmox middleware). Hence why iSCSI got offloaded back onto the QNAP NAS appliance.

But that one has an Annapurna Labs ARM processor. I can't remember which one exactly, but a quick Google search can rectify that.

2x SFP+ 10 GbE ports, 2x RJ45 GbE NICs. (One nice thing about having two of them: I got two 6" SFP+ to SFP+ cables, so they could talk to each other at 20 Gbps without needing a 10 Gbps SFP+ switch.)

Next, I think, is my old TrueNAS server. It has a Supermicro X7DBE (I think) motherboard, dual Intel Xeon L5310s (I think), and 16 GB of RAM. Very slow, can't do much in the way of virtualisation, and very power hungry (relatively speaking). That was also decommissioned.

Next is my Supermicro Twin^2 blade server. Each blade is dual Xeon E5-2690 (v1) (8 cores/16 threads), with 128 GB of DDR3-1866 running at DDR3-1600 speeds. Four half-width blades in total, for a total of 8 sockets, 64 cores/128 threads, and 512 GB of RAM. This was my old micro HPC cluster. It has since been decommissioned too: too much noise and too much power. (1.6 kW nominal, but I've gotten it to peak around 1.9 kW on a 120 VAC line.)

One of the coolest projects that I ran with it was actually running GlusterFS version 3.7, where I created 4x 110 GB RAM drives, created a distributed striped Gluster volume across the RAM drives, and then exported the resulting gvol to the 100 Gbps IB network as an NFSoRDMA export.

That was fun. Can't do that anymore though as that ability was deprecated in GlusterFS version 5 and completely removed by GlusterFS version 6.

But that was a way for me to create high speed storage without burning up erase/program cycles from SSDs.
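Roughly, the recipe looked like this; the node names, paths, and sizes are from memory and illustrative rather than my exact commands:

# on each of the 4 blades: carve out a ~110 GB RAM drive to use as a brick
mount -t tmpfs -o size=110g tmpfs /mnt/ramdrive
mkdir -p /mnt/ramdrive/brick
# from one node: distributed striped volume across the 4 RAM bricks, over RDMA
gluster volume create gvol stripe 4 transport rdma \
  node1:/mnt/ramdrive/brick node2:/mnt/ramdrive/brick \
  node3:/mnt/ramdrive/brick node4:/mnt/ramdrive/brick force
gluster volume start gvol
# (the NFS/RDMA export options layered on top of that, I honestly don't remember exactly anymore)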

Beyond that, it used to have an Intel 540 series or 545 series 1 TB SATA 6 Gbps SSD for the CentOS boot drive, and then a HGST 3 TB local bulk storage drive (whilst everything else ran off the cluster headnode, which was also the larger pool of scratch space).

Next is my old micro HPC cluster headnode. A Core i7-4930K, I think, running on an Asus P9X79 WS-E motherboard, with (I think) 64 GB of RAM. It had a 1 TB boot drive, 4x Samsung 860 EVO 1 TB SATA SSDs, and 8x HGST 10 TB SAS 12 Gbps HDDs connected to (I think) an Avago/Broadcom/LSI MegaRAID 12 Gbps SAS 9341-8i.

Four SSDs were in RAID0.

8x HDDs were in RAID5.

Four SSDs were the fast scratch disk (cheaper at the time than buying one big SSD).

HDDs were for larger projects. (External aerodynamics CFD for a semi-tractor trailer produced about 5 TiB of data, per run.)

So it handled all of the data storage and some of the processing as well. For larger data processing jobs, that was dealt with by the compute nodes themselves since each node had a 100 Gbps connection back to the cluster headnode.

And last but not least (in the rack) is my latest addition (Jan 2023): my 36-bay, 4U Supermicro Proxmox server. It has a Supermicro X10DRi-T4+ dual-Xeon motherboard with dual Xeon E5-2697A v4s (16 cores/32 threads each, 32 cores/64 threads total) and 256 GB of DDR4-2400 ECC Reg RAM.

4x 3 TB HGST SATA 6 Gbps HDD (for OS, in RAID6)
8x 6 TB HGST SATA 6 Gbps HDD (for VMs/CTs, raidz2)
8x 6 TB HGST SATA 6 Gbps HDD + 8x 6 TB HGST SATA 6 Gbps HDD + 8x 10 TB HGST SAS 12 Gbps HDD (three raidz2 vdevs, one ZFS pool) for bulk storage for everything else. Last time I checked, formatted capacity was something like 155 TiB (out of a raw capacity of 176 TB).
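If you were recreating that bulk pool from scratch, it's basically three raidz2 vdevs in a single pool, along these lines (the pool and device names are placeholders; in practice you'd use /dev/disk/by-id paths rather than sdX names):

zpool create bulk \
  raidz2 sda sdb sdc sdd sde sdf sdg sdh \
  raidz2 sdi sdj sdk sdl sdm sdn sdo sdp \
  raidz2 sdq sdr sds sdt sdu sdv sdw sdx
zpool status bulk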

The motherboard itself has quad 10 GbE NICs onboard plus IPMI.

Now it runs 7 CTs and 8 VMs (but 20 other CTs and 4 other VMs are stopped at the moment).

So it's my "do everything" server.

Consolidating the 4 NAS systems I used to run (plus a Netgear 7248 48-port GbE switch) cut my power consumption from 1242 W down to ~550 W. (When the system is running harder, it'll push up into the ~670-700 W range.)

But that's still better than the 1242 W it was pulling across multiple systems.

So that's the rack.

The other picture is my latest addition (Jan 2024): three OASLOA mini PCs that I bought off of Amazon. Each has an Intel N95 processor and 16 GB of LPDDR5 RAM soldered onto the board (not upgradable), and originally came with a 512 GB SATA 2242 M.2 SSD, which I replaced with an Inland 512 GB NVMe 2242 M.2 SSD. They're now my 3-node Proxmox HA cluster, responsible for Windows (and Linux) AD DC, DNS resolution (via SLES 12 SP4), and Pi-hole (network-wide ad blocker).
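The cluster side of that is just stock Proxmox clustering plus the HA manager, roughly like this (the cluster name, IP, and guest IDs are placeholders):

# on the first node
pvecm create homelab-cluster
# on each of the other two nodes, joining via the first node's IP
pvecm add 192.168.1.51
# then flag the important guests as HA-managed
ha-manager add vm:101
ha-manager add ct:102
# sanity check
pvecm status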

Not pictured:
My 3930K system (old daily driver)
Two AMD Ryzen 9 5950X systems w/ 128 GB of DDR4-3200 UDIMM RAM and a 1 TB HGST SATA 6 Gbps HDD (OS) each: my new compute nodes. (Quieter, and they consume less power at 100% load.) I had to jumpstart those systems with the previously mentioned 3930K, because the kernel in CentOS 7.7.1908 is so old that it kernel panics when I try to cold boot the 5950X with CentOS. So I have to install CentOS on the 3930K first, then update the kernel via the EPEL repo, before dropping the drive into the 5950X systems. Linux. Fun, right?

I also set each system up with something like a GTX 980 for basic video out; that gets pulled once I drop in the Mellanox ConnectX-4 dual-port VPI 100 Gbps IB card instead (the bottom slot is only PCIe 3.0 x4, which really kills the IB card's bandwidth).

So they all run headless and I use VNC for graphic manipulation.

Two HP Z420 workstations, also with the Intel Xeon E5-2690 (v1) mentioned before, 128 GB DDR3-1600 ECC Reg RAM, and whatever random drives I have at the moment.

One is running Windows 10 ATM, and the other is running CentOS (with an RTX A2000 6 GB in it) because I was doing some testing with GPU-accelerated CFD. (It's STUPID fast, like 4x faster IIRC, and that's just with the "lowly" RTX A2000. Given faster GPUs with more VRAM, I could probably do quite a lot with it.)

Also not pictured is another 3930K system (where I damaged the CPU back when I used to overclock and overvolt it to 1.45 Vcore; I think at least one of the cores got damaged, so it now runs as a 4-core CPU instead of a 6-core CPU). But that's okay, because all it does now is run my LTO-8 tape drive/tape backup system. (That box also has a Mellanox ConnectX-4 card and a direct connection to the Proxmox server, bypassing the Mellanox 100 Gbps switch; the IB switch really only comes on when the compute nodes are on.)

So that's a rundown of my homelab setup.

(@alpha754293), Active Member, 5 posts, joined 10 months ago

*edit*
So I forgot to add that, for my main Proxmox server, the VM <-> host communication is handled via virtio-fs where and when possible.

That way it skips the network stack entirely, and it's WAYYY faster, especially if your folder has a lot of files in it, where all of the file explorers (Windows Explorer, Nautilus, Finder, etc.) will struggle to load the contents. With virtio-fs, it loads the whole thing practically instantly.
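On the Linux guest side, consuming a virtio-fs share is about as simple as it gets; the tag and mount point below are examples, and the tag has to match whatever you configured on the host side:

# one-off mount
mount -t virtiofs hostshare /mnt/hostshare
# or make it persistent via /etc/fstab:
# hostshare  /mnt/hostshare  virtiofs  defaults  0  0

(Windows guests need the virtio-win virtiofs driver plus WinFsp instead.)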

Virtio-fs is AWESOME.

Virtio NIC in Windows 7 shows up as a 100 Gbps NIC.

In Windows 10 and 11, Linux, and macOS, it shows up as a 10 Gbps NIC.

So, it's "free" 10 Gbps networking.

No cables, no switches, no adapters, no NICs.

No messing with different types and lengths of fiber cables, which require different transceivers, or making sure the fiber ends on both sides are (and stay) clean.

All of that goes away.

It's the cheapest 10 Gbps network you'll ever deploy.

(I use it. A LOT.)

I did this because I found that, with my old multi-NAS setup, the NAS boxes spent a lot of their time basically talking amongst themselves, on top of my clients talking to the servers.

So: stuff everything into one box, and all of that can happen much faster, within the same box.

And then not having to use NFS or CIFS/SMB just made it even better/faster/easier. (Not that setting up NFS is difficult.)

The only thing that DOESN'T work with virtio-fs is that I can't have it running at the same time that NFSoRDMA is running (or trying to run).

(Or maybe Debian 11 just doesn't DO NFSoRDMA at all.)
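For anyone wanting to check whether their kernel NFS server will even do RDMA, this is roughly the dance on the server side, at least on the kernels I've tried (20049 is the conventional NFSoRDMA port; the export path on the client line is a placeholder):

# load the RDMA transport for the kernel NFS server
modprobe rpcrdma
# tell knfsd to also listen for RDMA on the usual port
echo "rdma 20049" > /proc/fs/nfsd/portlist
cat /proc/fs/nfsd/portlist
# and on a client:
mount -o rdma,port=20049 server:/export /mnt/export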

Ran into an NFS lock issue with that.

But other than that -- it works well.

Also not pictured in that photo is my other mini PC, my new daily driver: a Beelink GTR5 (Ryzen 9 5900HX) w/ 64 GB of DDR4-3200 RAM.

It's awesome. It has dual 2.5 GbE NICs, I can run three monitors off of it, it's low-ish power but still performant thanks to the CPU, and it's tiny compared to a standard tower (hence "mini" PC). I love it!

1 Reply
Brandon Lee (@brandon-lee), Admin, 395 posts, joined 14 years ago

@alpha754293 Thanks for sharing your lab! Awesome stuff and great detail in the info you provided here. I will look through everything more closely and I'm sure I will have some questions. :) Also, I need to check out the virtio-fs setup you mentioned.

(@alpha754293), Active Member, 5 posts, joined 10 months ago

@Brandon Lee
No problem.

re: virtio-fs
One of the lead software engineers for that project is Stefan Hajnoczi (Senior Principal Software Engineer, Red Hat). His website is: https://vmsplice.net/

Two of the slide decks that might be of interest or relevance are:
virtio-fs: A Shared File System for Virtual Machines at KVM Forum 2019

and

Virtio-fs for Kata Containers storage at Kata Containers Architecture Committee Call

If you google his name, you can also find videos of his presentations on YouTube.

Yeah, feel free to ask any questions that you may have. Not a problem at all. :)
