
My home lab upgrades this 2023

30 Posts
7 Users
24 Reactions
2,354 Views
Brandon Lee
(@brandon-lee)
Posts: 338
Member Admin
Topic starter
 

@t3hbeowulf that's great, I love how you took advantage of these, and they look like they are working great for you. I am loving the mini PC market more and more as I move further into containerized environments for services. I think this is the future for the most part. It will be great to see what comes out in 2024.

 
Posted : 19/12/2023 11:10 pm
t3hbeowulf reacted
Brandon Lee
(@brandon-lee)
Posts: 338
Member Admin
Topic starter
 

@malcolm-r @ghaleon @t3hbeowulf check out this potential home lab server: the Aoostar NAS with a Ryzen 5800U, 6 NVMe slots, 6 HDD bays, and 10 gig networking, coming in January. It also looks to be barebones, if I am reading correctly: https://www.virtualizationhowto.com/community/home-lab-forum/aoostar-nas-with-ryzen-5800u-6-nvme-6-hdd-and-10-gig-network-home-server

 
Posted : 19/12/2023 11:38 pm
Ghaleon reacted
(@t3hbeowulf)
Posts: 22
Eminent Member
 

That certainly ticks a lot of boxes in one small package. One of the things I struggle with is "Do I get a purpose-built solution and dedicate it to my desired task or do I try to build something from spare parts and fill in gaps as needed?"

If I were just starting out, that appears to be a nice all-in-one solution with plenty of horsepower and features.
With everything already in the lab, I'd really have to commit to replacing and selling some gear, because it's already unwieldy.

 
Posted : 22/12/2023 12:22 pm
Brandon Lee reacted
Brandon Lee
(@brandon-lee)
Posts: 338
Member Admin
Topic starter
 

@t3hbeowulf I am right there with you! I have been holding off on my next round of upgrades for the home lab on the server side as I just haven't quite found the "it solution" that I want to pull the trigger on. I am hoping we see more of these tiny mini PC/NAS type boxes this next year. I think it could help consolidate a lot of things down for me.

Are you looking to stick with your current hardware another year or are you looking at refreshing this next year?

 
Posted : 22/12/2023 12:26 pm
t3hbeowulf reacted
(@t3hbeowulf)
Posts: 22
Eminent Member
 

The short answer: I'm going to stick with my current hardware for a while longer because it is MORE than capable of running everything I throw at it.

The longer answer: With the exception of storage, almost everything I've built was purchased used. I have a soft guideline that I don't upgrade unless the equipment is experiencing some sort of imminent or current failure, or I have backed myself into a corner of storage/capacity and need to build out a replacement. 
It all started with a single "NAS+Desktop" that was pictured at the bottom of the collage in my photos post. A Core i3-3220 dual core processor, 16GB of RAM. 
It "works", but QuickSync on that processor is slow and 2 cores was constrained for running Portainer and a number of containers in addition to Samba for the file share. 

I started pricing out replacements when I stumbled upon the micro PCs. I picked up one, then two for "redundancy", and started to get familiar with Proxmox. I picked up a pair of micros from HP because they had Ryzen processors in them and I wanted to toy with passing through the integrated graphics to a VM. Since I had 4 nodes and "even numbers are bad", I picked up a 5th and settled there. Any single node in the cluster is enough to run every service I have, but I spread them out over several nodes to ensure migration/backups keep working. (I test migration frequently.) Of course, this didn't solve the original "NAS" problem, but it did solve the workload/capacity problem.
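
For reference, a migration test on a Proxmox cluster boils down to a couple of commands; the VM/CT IDs and node name below are placeholders rather than my actual setup:

# Check cluster membership/quorum first:
pvecm status

# Live-migrate VM 101 to node "pve2" while it keeps running:
qm migrate 101 pve2 --online

# Containers move with restart-mode migration (CTs can't live-migrate):
pct migrate 200 pve2 --restart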

The NAS in my picture was created because I had just enough other spare parts lying around that I thought, "with a used motherboard and some drives, I can build a new NAS". So I did... and I have been slowly migrating storage volumes over to it, while keeping the trusty old Core i3 system online for now.
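
As a rough sketch, a share-by-share copy like that can be done with rsync (the paths below are placeholders, and rsync is just one way to do it):

# -aHAX preserves permissions, hard links, ACLs, and extended attributes.
rsync -aHAX --info=progress2 /mnt/old-nas/share/ /mnt/new-nas/share/

# Or pull over SSH from the old box instead:
rsync -aHAX --info=progress2 oldnas:/volume1/share/ /mnt/new-nas/share/
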
I'm finally in a place where I don't have any spare parts itching to find a use, and a homelab complex enough that hardware purchases are an extremely considered action. I'm considering consolidating, taking a more disciplined approach to spreading services out over nodes, and staying off eBay.

 
Posted : 22/12/2023 2:08 pm
Brandon Lee
(@brandon-lee)
Posts: 338
Member Admin
Topic starter
 

@t3hbeowulf I really like your setup, and you don't seem to fall into the trap many of us do of buying without thinking purchases through 😆 I know I fall into that category. It sounds like you have built your lab methodically, and I really like how you have a "prod" and testing cluster. Hey, I wanted to see if you would mind, when you have time, creating a new home lab forum topic on your prod vs. testing cluster and how you test, promote, remediate, etc. I think that would be an interesting read, and I would like to see the discussions we could have on that.

 
Posted : 22/12/2023 2:28 pm
t3hbeowulf reacted
(@techguy2023)
Posts: 1
New Member
 

@brandon-lee I would definitely like to see @t3hbeowulf write up his cluster configurations and test/prod setup in more detail.

 
Posted : 23/12/2023 9:23 am
(@t3hbeowulf)
Posts: 22
Eminent Member
 

I have this task pinned.  It's one of my higher priority goals for 2024... better documentation. 

 
Posted : 06/01/2024 10:24 pm
(@life-from-scratch)
Posts: 14
Eminent Member
 

Finally coming together! This is my server rack drawer. At the back I have my 5G modem on the left, a 9-port managed switch (8x 2.5G copper plus a 10G SFP+) in the middle, and a CWWK X86-P5 on the right. Off to the side is the EAP-613 access point.

Below the modem is a power distribution block; everything pictured is running directly off my 12V bus.

The X86-P5 has an Intel N100 with 8GB RAM, 500GB storage, and 2x I226-V NICs. It's running Proxmox with a pfSense VM and will soon also host my Home Assistant VM.
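
For what it's worth, getting the Home Assistant OS VM onto Proxmox is roughly the following; the VM ID, storage name, and image filename are placeholders, so treat this as a sketch rather than my exact steps:

qm create 110 --name haos --memory 4096 --cores 2 \
  --net0 virtio,bridge=vmbr0 --bios ovmf --machine q35
qm importdisk 110 haos_ova-12.x.qcow2 local-lvm       # lands as an unused disk
qm set 110 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-110-disk-0 --boot order=scsi0
qm set 110 --efidisk0 local-lvm:1,efitype=4m          # HAOS boots via UEFI
qm start 110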

And at the front of the drawer I have 3U of room for activities, which will also be powered off the bus via a PicoPSU.

Now I just need some shorter patch cables. 

[Attached photo: PXL_20240107_031853403]
 
Posted : 07/01/2024 12:00 am
Brandon Lee
(@brandon-lee)
Posts: 338
Member Admin
Topic starter
 

@life-from-scratch awesome. That took some real ingenuity, fitting a rack inside a drawer. I don't think I have seen that before, very cool! Do you have any problems with heat build-up with the equipment in there? Also, which 2.5 GbE switch are you using? Cool stuff.

 
Posted : 07/01/2024 9:47 pm
(@life-from-scratch)
Posts: 14
Eminent Member
 

@brandon-lee The drawer is going over my fridge, which is obviously not great from a heat perspective, but there is airflow coming up the back of the fridge and I plan to run the fans in the racked case (future server build) backwards, blowing out the front. So in theory it'll draw air in underneath the fridge, up behind it, past all the equipment and out the front. If that isn't sufficient I can blow a hole through the wall at the back and put a fan there. I've had all my stuff just sitting up there for a while now and I haven't seen any issues. So far the whole lab is running less than 50W so it's not generating a ton of heat. 

The switch is just the MokerLink one available on Amazon. No real testing as of yet, but logging into the web UI was straightforward, and it seems to be doing what a network switch is supposed to do. On my list of projects is to reallocate the 2.5GbE NICs in my EliteDesk server that used to be for pfSense, to give me a 2.5G link to TrueNAS, and then string a wire to the 2.5GbE port on my mini desktop and see what kind of speeds I can get.
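
The usual way to measure that is iperf3 on both ends; the hostname below is a placeholder:

# On the TrueNAS end (or whichever box acts as the test server):
iperf3 -s

# From the mini desktop: a 30-second run, then the reverse direction:
iperf3 -c truenas.lan -t 30
iperf3 -c truenas.lan -t 30 -R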

 
Posted : 08/01/2024 6:19 pm
(@alpha754293)
Posts: 5
Active Member
 

So I was referred to this topic/post from here:
https://www.youtube.com/channel/UCrxcWtpd1IGHG9RbD_9380A/community?lc=UgwV41sv_SsTQu60wql4AaABAg.9zLDW57PdS69zNQno3z53m&lb=Ugkxm8Q8S0adqfW-kFvpiKHT1JMi04J88mdQ

Here are the pictures of my homelab:

[Attached photos: IMG_0685, IMG_0686]

 

Here is a run down of the specs:

Rack:
18U 39" depth IT and Telecom cabinet from sysracks.com

QNAP TS-453Be 4-bay NAS
4x HGST 3 TB SATA 6 Gbps HDDs in it
That's my iSCSI target as well as the backup target for pictures from my wife's phone. It's also the target for Mac Time Machine (because QNAP makes it ridiculously easy).
Dual GbE
Celeron J3455 (not the fastest, but it works enough)
Upgraded the RAM I think last year or maybe two years ago from 2 GB to 8 GB (which is the max that thing will take).

Netgear GS116 v2 - 16 port GbE switch. Nothing special other than its 7 W idle power consumption (super low).

Mellanox MSB-7890 36-port 100 Gbps Infiniband externally managed switch
(Yes, I run 100 Gbps in the basement of my home.)

(That connects to my main server at the bottom, plus two micro HPC compute nodes, as well as my system that runs my LTO-8 tape backup system.)

NFSoRDMA is AWESOME!

Node-to-node, in an RDMA-aware application, I can hit about 80-ish Gbps (~10 GB/s). But using the InfiniBand benchmarking tools, I think I topped out at something like 96.3 Gbps (~12 GB/s). Something like that; I can't remember exactly anymore.
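
For anyone who wants to reproduce that kind of number, the standard InfiniBand benchmarks come from the perftest suite; a node-to-node sketch, with a placeholder hostname:

# On node A, start the server side:
ib_write_bw

# On node B, run the bandwidth test against node A; add the latency test if curious:
ib_write_bw nodeA
ib_write_lat nodeA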

QNAP TS-832 (there are actually two of them - you can see the front one in the picture, but there's a second one behind it, sitting on top of the server).

They're not used anymore.

Had a mass consolidation project in January 2023 where I originally moved 4 NAS servers down to one (the big one at the bottom). But then the Proxmox server started to get overloaded. More specifically, drive I/O wait times were too long, which was causing some of my Linux VMs to throw "CPU stuck" warning messages, plus I saw inconsistent behaviour/performance running iSCSI off of my Proxmox server (directly on the Debian 11 base install that lies underneath the Proxmox middleware). Hence, iSCSI got offloaded back onto the QNAP NAS appliance.

But that one has an Annapurna Labs ARM processor. I can't remember exactly which one, but a quick Google search can rectify that.

2x SFP+ 10 GbE ports, 2x RJ45 GbE NICs. (One nice thing about having two of them was that I got two 6" SFP+ to SFP+ cables, so they could talk to each other at 20 Gbps without needing a 10 Gbps SFP+ switch.)

Next, I think, is my old TrueNAS server. It has a Supermicro X7DBE (I think) motherboard, dual Intel Xeon L5310s (I think), and 16 GB of RAM. Very slow, can't do much in the way of virtualisation, and very power hungry (relatively speaking). That was also decommissioned.

Next is my Supermicro Twin^2 blade server. Each blade is dual Xeon E5-2690 (v1) (8 cores/16 threads each), with 128 GB of DDR3-1866 running at DDR3-1600 speeds. Four half-width blades in total, for a total of 8 sockets, 64 cores/128 threads, and 512 GB of RAM. This was my old micro HPC cluster, and it has also since been decommissioned. Too much noise and too much power. (1.6 kW nominal, but I've gotten it to peak around 1.9 kW on a 120 VAC line.)

One of the coolest projects I ran with it was actually running GlusterFS version 3.7, where I created 4x 110 GB RAM drives, then created a distributed striped Gluster volume across the RAM drives, and exported the resulting gvol to the 100 Gbps IB network as an NFSoRDMA export.
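
A rough sketch of that kind of setup, using GlusterFS 3.7-era syntax (the hostnames, sizes, and paths are placeholders, and the NFSoRDMA export options are left out):

# On each of the 4 nodes, back a brick with a RAM disk:
mount -t tmpfs -o size=110g tmpfs /bricks/ram
mkdir -p /bricks/ram/gvol0

# From one node, create and start a striped volume across the RAM-backed bricks:
gluster volume create gvol0 stripe 4 transport tcp,rdma \
  node1:/bricks/ram/gvol0 node2:/bricks/ram/gvol0 \
  node3:/bricks/ram/gvol0 node4:/bricks/ram/gvol0 force
gluster volume start gvol0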

That was fun. Can't do that anymore though as that ability was deprecated in GlusterFS version 5 and completely removed by GlusterFS version 6.

But that was a way for me to create high speed storage without burning up erase/program cycles from SSDs.

Beyond that, it used to have an Intel 540 series or 545 series 1 TB SATA 6 Gbps SSD for the CentOS boot drive, and then a HGST 3 TB local bulk storage drive (whilst everything else ran off the cluster headnode, which was also the larger pool of scratch space).

Next is my old micro HPC cluster headnode. A Core i7-4930K, I think, running on an Asus P9X79 WS-E motherboard with, I think, 64 GB of RAM. It had a 1 TB boot drive, 4x Samsung 860 EVO 1 TB SATA SSDs, and 8x HGST 10 TB SAS 12 Gbps HDDs connected to what I think was an Avago/Broadcom/LSI MegaRAID 12 Gbps SAS 9341-8i.

Four SSDs were in RAID0.

8x HDDs were in RAID5.

Four SSDs were the fast scratch disk (cheaper at the time than buying one big SSD).

HDDs were for larger projects. (External aerodynamics CFD for a semi-tractor trailer produced about 5 TiB of data, per run.)

So it handled all of the data storage and some of the processing as well. For larger data processing jobs, that was dealt with by the compute nodes themselves since each node had a 100 Gbps connection back to the cluster headnode.

And last but not least (in the rack) is my latest addition (Jan 2023): my 36-bay, 4U Supermicro Proxmox server. It has a Supermicro X10DRi-T4+ dual-Xeon motherboard with dual Xeon E5-2697A v4s (16 cores/32 threads each, or 32 cores/64 threads total) and 256 GB of DDR4-2400 ECC Reg RAM.

4x 3 TB HGST SATA 6 Gbps HDD (for OS, in RAID6)
8x 6 TB HGST SATA 6 Gbps HDD (for VMs/CTs, raidz2)
8x 6 TB HGST SATA 6 Gbps HDD + 8x 6 TB HGST SATA 6 Gbps HDD + 8x 10 TB HGST SAS 12 Gbps HDD (three raidz2 vdevs, one ZFS pool) for bulk storage for everything else. The last time I checked, formatted capacity was something like 155 TiB (out of a raw capacity of 176 TB). A rough zpool sketch of that layout is below.
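
Purely for illustration (the device names are placeholders, and in practice you'd use /dev/disk/by-id paths), a pool built from three 8-disk raidz2 vdevs looks like:

zpool create tank \
  raidz2 sda sdb sdc sdd sde sdf sdg sdh \
  raidz2 sdi sdj sdk sdl sdm sdn sdo sdp \
  raidz2 sdq sdr sds sdt sdu sdv sdw sdx

# Sanity checks:
zpool status tank
zpool list tank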

The motherboard itself has quad 10 GbE NICs onboard plus IPMI.

Now it runs 7 CTs and 8 VMs (but 20 other CTs and 4 other VMs are stopped at the moment).

So it's my "do everything" server.

Cut my power consumption down from the 1242 W that the 4 NAS systems I used to run (plus a Netgear 7248 48-port GbE switch) were pulling, to ~550 W now. (When the system is running harder, it'll push up into the ~670-700 W range.)

But that's still better than the 1242 W it was pulling when I was running multiple systems.

So that's the rack.

The other picture is my latest addition (Jan 2024): three OASLOA mini PCs that I bought off Amazon. Each has an Intel N95 processor, 16 GB of LPDDR5 RAM soldered onto the board (not upgradable), and originally came with a 512 GB SATA 2242 M.2 SSD, which I replaced with an Inland 512 GB NVMe 2242 M.2 SSD. Together they're now my 3-node Proxmox HA cluster, responsible for the Windows (and Linux) AD DC, a DNS resolver (via SLES 12 SP4), and Pi-hole (network-wide ad blocker).
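
For anyone building something similar, forming the cluster and putting a guest under HA is roughly the following (the cluster name, IP, and VM ID are placeholders, not my actual values):

# On the first node:
pvecm create minicluster

# On each of the other two nodes, join using the first node's IP:
pvecm add 192.0.2.10

# Put a guest under HA management, e.g. the Pi-hole VM with ID 105:
ha-manager add vm:105 --state started
ha-manager status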

Not pictured:
My 3930K system (old daily driver)
Two AMD Ryzen 9 5950X systems w/ 128 GB of DDR4-3200 UDIMM RAM and a 1 TB HGST SATA 6 Gbps HDD (OS) - my new compute nodes. (Quieter, and they consume less power at 100% load.) I had to jumpstart those systems with the previously mentioned 3930K because the kernel in CentOS 7.7.1908 is so old that it kernel panics when I try to cold boot a 5950X with CentOS. So I have to install CentOS on my 3930K first, then update the kernel via the EPEL repo, before dropping the drive into the 5950X systems. Linux. Fun, right?

I would also set each system up with something like a GTX 980 for basic video out before that gets pulled and I drop in the Mellanox ConnectX-4 dual-port VPI 100 Gbps IB card instead (because the bottom slot is only PCIe 3.0 x4, which really kills the IB card's bandwidth).

So they all run headless and I use VNC for graphic manipulation.

Two HP Z420 workstations, which also have the Intel Xeon E5-2690 (v1) mentioned before, 128 GB of DDR3-1600 ECC Reg. RAM, and whatever random drive I have on hand.

One is running Windows 10 ATM, and the other is running CentOS (with an RTX A2000 6 GB in it) because I was doing some testing with GPU-accelerated CFD. (It's STUPID fast, like 4x faster IIRC, and that's just with the "lowly" RTX A2000. Given GPUs that are faster and have more VRAM, I could probably do quite a lot with it.)

Also not pictured is another 3930K system (where I damaged the CPU when I used to overclock and overvolt it to 1.45 Vcore); I think at least one of the cores got damaged, so now it runs as a 4-core CPU instead of a 6-core one. But that's okay, because all it does now is run my LTO-8 tape drive/tape backup system. (It also has a Mellanox ConnectX-4 card and a direct connection to the Proxmox server, bypassing the Mellanox 100 Gbps switch. The IB switch really only comes on when the compute nodes are on.)

So that's a rundown of my homelab setup.

 
Posted : 09/01/2024 11:57 pm
(@alpha754293)
Posts: 5
Active Member
 

*edit*
So I forgot to add that, for my main Proxmox server, the VM <-> host communication is handled via virtio-fs where and when possible.

That way it skips the network stack entirely, and it's WAY faster, especially when a folder has a lot of files in it. All of the file explorers (Windows Explorer, Nautilus, Finder, etc.) struggle to load the contents of a folder like that, but with virtio-fs it loads the whole thing practically instantly.
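
For anyone curious what the plumbing looks like underneath, here's a bare-QEMU sketch (the socket path, share path, tag, and memory size are placeholders, and only the relevant QEMU options are shown):

# Host: export a directory with virtiofsd (Debian 11-era C implementation flags):
virtiofsd --socket-path=/run/vfs.sock -o source=/tank/share &

# QEMU: shared memory plus a vhost-user-fs device wired to that socket:
qemu-system-x86_64 ... \
  -object memory-backend-memfd,id=mem,size=4G,share=on -numa node,memdev=mem \
  -chardev socket,id=char0,path=/run/vfs.sock \
  -device vhost-user-fs-pci,chardev=char0,tag=hostshare

# Guest: mount the export by its tag:
mount -t virtiofs hostshare /mnt/hostshare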

Virtio-fs is AWESOME.

Virtio NIC in Windows 7 shows up as a 100 Gbps NIC.

In Windows 10 and 11, Linux, and macOS, it shows up as a 10 Gbps NIC.

So, it's "free" 10 Gbps networking.

No cables, no switches, no adapters, no NICs.

Don't have to mess with different types of fiber cables, different lengths, which require different transceivers, and making sure that the fiber optic cables on both ends are and stay clean.

All of that goes away.

It's the cheapest 10 Gbps network you'll ever deploy.

(I use it. A LOT.)

I did this because I found that, with my old multi-NAS setup, the NAS boxes spent most of their time talking amongst themselves, while my clients were also talking to the server.

So, stuff everything in one box, and all of that can happen, much faster, within the same box.

And then not having to use NFS or CIFS/SMB just made it even better/faster/easier. (Not that setting up NFS is difficult.)

The only thing that DOESN'T work with virtio-fs is that I can't have it running at the same time that NFSoRDMA is running or trying to run.

(Or maybe Debian 11 just doesn't do NFSoRDMA at all.)

I ran into an NFS lock issue with that.

But other than that -- it works well.

Also not pictured above is my other mini PC, my new daily driver: a Beelink GTR5 5900HX w/ 64 GB of DDR4-3200 RAM.

It's awesome. It has dual 2.5 GbE NICs, I can run three monitors off of it, and it's low-ish power but still performant because of the CPU, and it's tiny compared to a standard tower computer (hence "mini" PC). I love it!

 
Posted : 10/01/2024 12:20 am
Brandon Lee
(@brandon-lee)
Posts: 338
Member Admin
Topic starter
 

@alpha754293 Thanks for sharing your lab! Awesome stuff and great detail in the info you provided here. I will look through everything more closely, and I'm sure I will have some questions 🙂 Also, I need to check out the virtio-fs you mentioned.

 
Posted : 10/01/2024 3:17 pm
alpha754293 reacted
(@alpha754293)
Posts: 5
Active Member
 

@Brandon Lee
No problem.

re: virtio-fs
One of the lead software engineers for that project is Stefan Hajnoczi (Senior Principal Software Engineer, Red Hat). His website is: https://vmsplice.net/

Two of the slide decks that might be of interest or relevance are:
virtio-fs: A Shared File System for Virtual Machines at KVM Forum 2019

and

Virtio-fs for Kata Containers storage at Kata Containers Architecture Committee Call

If you google his name, you can also find videos of his presentations on YouTube.

Yeah - feel free to ask any questions that you may have. Not a problem at all. 🙂

 
Posted : 10/01/2024 4:57 pm