Best Home Lab Server Pics and gear in 2024

(@ksteink)

I started my home lab several years ago on a shelf with a very old PC (a dual-core AMD350), a Mikrotik RB2011, an 8-port Cisco Meraki switch, and a couple of Meraki WAPs (an MR33 and an MR18).


My home lab evolved to an open 6U rack on a shelf, and a couple of years ago I stepped up my game to a fully enclosed 20U rack. My next goal is to pursue a 42U, but I would need to move to a different house to get proper space, so I have space constraints!


This is the current layout of my rack:

- Outside the rack I have one Juniper Mist AP41 access point.

- At the top of the rack, inside, I have my ISP's fiber modem (a Huawei unit) in bridge mode.

- In the next U I have a Mikrotik RB5009, which is my main Internet edge router. It handles VPNs, the firewall, etc.

- The uplink of the Mikrotik RB5009 goes to my ISP's fiber modem, and the downlink goes to my Layer 2 IPS/AMP solution (a Cisco Meraki MX65 configured in bridge mode).

- The Meraki MX65 listens passively and applies IPS and AMP rules to anything leaving or entering my internal LAN. If the MX65 goes down, my RB5009 can bypass it via another Ethernet link to both of my CRS317s using bonded interfaces (active/standby).

- Downstream of the Meraki MX65 I have a pair of Mikrotik CRS317s acting as my Layer 3 core switches. Each switch has 16 x 10 Gbps SFP+ ports. Even though Mikrotik doesn't support or have VSS for high availability, I have created scripts to simulate this behavior: the primary switch is active, while the secondary sits in standby with all interfaces down except the heartbeat one and monitors the primary constantly (every second). If the connection is lost, the secondary enables all of its SFP+ ports and comes up with almost the same configuration as the primary (the configuration is replicated from the primary every night to keep the two switches in sync). There is a sketch of this heartbeat logic right after this list.

- Below the 2 core switches I have a Mikrotik CRS326, which was my old core switch but now serves as a server access switch. All my 1 Gbps (RJ45 Ethernet) connections land there. From this switch I have one SFP+ uplink to each of my Mikrotik CRS317s to get high availability at the core.

- Below my CRS326 I have a 24-keystone patch panel to connect all the nodes in my rack and the access points for the rest of the house (also Juniper Mist AP41s).

- On the next level I have a shelf with a Raspberry Pi 4 running Pi-KVM (v2 DIY), which lets me manage the servers that don't have an IPMI interface.

- On the same shelf I have a Juniper EX2300-C switch with PoE+ capabilities to power my WAPs (and future cameras).

- On the next shelf I have 2 mini PCs (HP EliteDesk 800 G2 Mini) with 24 and 32 GB of RAM, a 512 GB NVMe boot drive, and a 1 TB SSD for VM storage using ZFS; these are the working nodes of my Proxmox cluster. The mini PCs have a single link to my CRS326, but I am planning to add USB Ethernet adapters and 1 Gbps RJ45 media converters to give them dual uplinks to my core switches.

- I also have an old QNAP NAS (2-bay, with a Celeron CPU and 16 GB of RAM). This NAS is the main storage of my home network for my personal files, work files, photos, games, movies, etc. I am planning to decommission it in favor of a larger server (at the bottom of the rack).

- The server with 4 HDDs was my first server build and runs a Chinese X99 mATX motherboard with a Xeon E5-2650L v3 CPU (12 cores / 24 threads) and 128 GB of DDR4 memory. I recently upgraded it with a dual-port 25/10 Gbps Mellanox ConnectX-4 NIC, with one link going to each of my core switches (the CRS317s). This server also runs Proxmox as part of my cluster.

- The bottom server with 8 bays is my most recent addition: a Supermicro X11SPM-TF with a Xeon Gold 6118 (20 cores / 40 threads), 128 GB of RAM, and another Mellanox ConnectX-4. This server also runs Proxmox as part of my cluster.
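For anyone curious how the CRS317 failover above works in practice: the real logic lives in RouterOS scripts on the switches themselves, but here is a minimal Python sketch of the standby-side heartbeat loop, assuming a hypothetical heartbeat address (10.0.0.1) and a standby switch reachable over SSH:

```python
#!/usr/bin/env python3
"""Sketch of the standby-switch heartbeat loop described above.

Not the actual RouterOS script: the heartbeat address and standby
management address below are placeholders for illustration only.
"""
import subprocess
import time

PRIMARY_HEARTBEAT = "10.0.0.1"     # hypothetical heartbeat IP on the primary
STANDBY_SWITCH = "admin@10.0.0.2"  # hypothetical standby management address
FAIL_THRESHOLD = 3                 # consecutive missed pings before failover

def primary_alive() -> bool:
    """Send one ping over the heartbeat link; True if the primary answers."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", PRIMARY_HEARTBEAT],
        capture_output=True,
    )
    return result.returncode == 0

def activate_standby() -> None:
    """Enable every SFP+ port on the standby switch via the RouterOS CLI."""
    subprocess.run(
        ["ssh", STANDBY_SWITCH,
         '/interface ethernet enable [find name~"sfp"]'],
        check=True,
    )

def main() -> None:
    misses = 0
    while True:
        misses = 0 if primary_alive() else misses + 1
        if misses >= FAIL_THRESHOLD:
            activate_standby()
            break
        time.sleep(1)  # poll the primary every second, as described above

if __name__ == "__main__":
    main()
```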


Most of the hardware I bought second-hand or got for free through training certifications (like the Juniper and Meraki gear).


What am I running in my homelab?

- Proxmox as hypervisor for all my servers.

- 2 x TrueNAS Scale in VMs with HDD passthrough on my 2 large servers, so I have high availability for my data (and can sunset my old QNAP).

- A couple of Windows VMs for testing configurations.

- Dashy Dashboard (planning to replace it with Dashboard).

- Netbox for DCIM and IPAM

- NUT server for UPS monitoring (and self-shutdown in case of a power outage); see the first sketch after this list

- Syncthing for Storage and data replication between servers and workstations.

- NetData for monitoring

- Zabbix and CheckMK (testing both now to select one for my infrastructure monitoring)

- AdGuard as my DNS server

- Apache Guacamole and NextTerm as Jump Servers

- CrowdSec pushing ~30K blocked IPs to my RB5009 as an extra layer of IPS security (on top of the Meraki MX65); see the second sketch after this list

- Portainer for managing all my containers

- Jellyfin and Plex for all my local media streaming

- Home Assistant for Home Automations.

- Greenbone OpenVAS for Vulnerability Scanning.
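About the NUT entry in the list above: upsmon handles the self-shutdown natively, but the idea boils down to something like this sketch, which polls NUT's stock upsc client (the UPS name myups@localhost is a placeholder):

```python
#!/usr/bin/env python3
"""Illustration of the NUT-driven self-shutdown mentioned above.

upsmon already does this out of the box; this sketch only shows the
idea. "myups@localhost" is a placeholder UPS name, and shutting down
requires root.
"""
import subprocess
import time

UPS = "myups@localhost"

def ups_status() -> list[str]:
    """Read ups.status via upsc, e.g. ["OL"] (online) or ["OB", "LB"]."""
    out = subprocess.run(
        ["upsc", UPS, "ups.status"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()

while True:
    status = ups_status()
    if "OB" in status and "LB" in status:  # on battery and battery low
        subprocess.run(["shutdown", "-h", "now"])
        break
    time.sleep(30)
```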
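And about the CrowdSec entry: a dedicated bouncer normally does the pushing, but the flow amounts to pulling the active decisions with cscli and writing them into a RouterOS address list. A rough sketch over the RouterOS v7 REST API (the router address, credentials, and list name are placeholders):

```python
#!/usr/bin/env python3
"""Sketch of pushing CrowdSec ban decisions into an RB5009 address list.

A real setup would use a CrowdSec bouncer; this only illustrates the
flow. Router address, credentials, and list name are placeholders, and
TLS verification is disabled for brevity.
"""
import json
import subprocess

import requests  # third-party: pip install requests

ROUTER = "https://192.168.88.1"      # placeholder RB5009 address
AUTH = ("api-user", "api-password")  # placeholder REST API credentials
LIST_NAME = "crowdsec-blocklist"

# Ask the local CrowdSec agent for its current decisions.
raw = subprocess.run(
    ["cscli", "decisions", "list", "-o", "json"],
    capture_output=True, text=True, check=True,
).stdout
decisions = json.loads(raw) or []  # cscli prints "null" when there are none

# Add each banned IP to a RouterOS v7 firewall address list via REST.
for decision in decisions:
    if decision.get("type") != "ban":
        continue
    requests.put(
        f"{ROUTER}/rest/ip/firewall/address-list",
        json={"list": LIST_NAME, "address": decision["value"]},
        auth=AUTH,
        verify=False,  # self-signed router cert in this sketch
    )
```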


What are my next steps? Continue my journey to have high availability almost everywhere.

- I want to replace my EliteDesk mini PCs with newer ones (lower power consumption and higher speed). Winning this award would help me with my next upgrade.

- I want to decommission my old X99 server and replace it with another Xeon Gold 6118 machine to be the backup of the current one. I want each server to run Proxmox plus TrueNAS Scale in a VM, with near-real-time data replication and automatic failover (via custom scripts that I am planning to develop; see the sketch after this list).

- I want to decommission my Meraki MX65, which caps my inspected bandwidth at 250 Mbps now that my connection is faster than that. I want to replace it with a mini PC that has 4 x SFP+ ports and a few 2.5 Gbps ports, running Proxmox and a few VMs (OPNsense with Zenarmor in Layer 2, plus an Ubuntu VM or LXC hosting a netinstall server so I can remotely reformat and reinstall any of my core switches if needed, without moving cables).

- I want to replace my Mist WAPs, which are end of support and license, with either Unifi APs (WiFi 7) or Omada ones (EAP773). I like the Unifi ecosystem more, but I see Omada offering more hardware capability (10 Gbps links).

- I want to replace my Juniper switch with one that supports mGig ports, probably a Catalyst 3650 or 3850 with UPOE, for all my access needs (cameras and WAPs). My main concern is power consumption, but the Unifi and TP-Link options are either too expensive or don't have all the features I am looking for.
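The failover scripts in the second item above don't exist yet, but the replication half would presumably build on standard incremental zfs send/receive between the two TrueNAS VMs. A rough sketch of that loop (dataset and host names are placeholders, and the first full snapshot is assumed to have been seeded on the standby already):

```python
#!/usr/bin/env python3
"""Sketch of the periodic ZFS replication planned above.

Standard incremental zfs send/receive between two hosts. The dataset
and standby host are placeholders, and snapshot repl-0 is assumed to
exist on both sides before the loop starts.
"""
import subprocess
import time

DATASET = "tank/data"        # placeholder dataset
STANDBY = "root@truenas-b"   # placeholder standby TrueNAS VM

def replicate(prev_snap: str, new_snap: str) -> None:
    """Snapshot, then incrementally send prev_snap..new_snap to the standby."""
    subprocess.run(["zfs", "snapshot", new_snap], check=True)
    subprocess.run(
        f"zfs send -i {prev_snap} {new_snap} | ssh {STANDBY} zfs recv -F {DATASET}",
        shell=True, check=True,
    )

prev = f"{DATASET}@repl-0"
n = 1
while True:
    new = f"{DATASET}@repl-{n}"
    replicate(prev, new)
    # The older snapshot is no longer needed as an incremental base.
    subprocess.run(["zfs", "destroy", prev], check=True)
    prev, n = new, n + 1
    time.sleep(60)  # one incremental per minute: "near real time"
```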

What I like about my home lab is that it allows me to keep my technical skills sharp and test new things the way I want.


(@practicalhomelabs)

Well, here is my little slice of geek heaven that I've been cobbling together and evolving over the years, starting from a little Acer mini PC and working up to this power-hungry monster. Good thing I work for the power company!!

[Screenshot: vCenter stats]

I'll start from the top down.

Sophos XG210 v3 as my Edge Firewall

Cisco 2960s 48p POE Core Switch

48 ports of spaghetti-wielding patch panels (RJ45)

TP-Link 16p 10GbE SFP+ switch for my iSCSI, Veeam, and high speed data transfers.

TrendNet 8p KVM

SuperMicro 1u SYS-800D 64GB RAM, 512GB NVMe, 2TB NVMe, 5TB 5400RPM Spinner 6x1G RJ45 + 2x10G SFP+

Custom Built 2u Rackmount Server chassis holding a Watercooled Threadripper 2970WX 24c, 64GB RAM, 512GB NVMe, 2TB NVMe, 6x1G RJ45 + 2x10G SFP+


HP DL380 G9 128GB RAM, 8x 600GB 10,000RPM SAS drives for local datastore, 8x1G RJ45 + 2x10G SFP+

HP DL380 G9 192GB RAM, 24 SAS Drives of various sizes making up 3 Local Datastores 8x1G RJ45 + 2x10G SFP+, Nvidia Quadro (can't remember the model) passed through to my Plex VM

Custom built 4u TrueNAS Scale with 32GB RAM, Ryzen5 4600g, 12x 2TB Hotswappable HDDs, 8x 1TB SSD, 2x 512GB NVMe, 3x 256GB SSD, 1x 128GB SSD


2u APC UPS

I'm a VMware guy at work, so I keep my home stuff on VMware as well, although I didn't purchase anything newer than v7. I've already paid for it, so I see no need to switch to something like Proxmox or VirtualBox.

With more VLANs than you can shake a stick at. I'm just waiting to get a 42U rack that I can put all my other gear into as well.


(@jackofalltech)

This is my homelab:

  • Unifi UDM PRO
  • Unifi 10g Aggregation Switch
  • Unifi 24 Port Pro POE
  • Synology DS1821+ w/ 10GbE adapter, 32GB RAM, 7x8TB drives, and 2x1TB NVMe drives for read/write caching
  • Synology DS923+ (not pictured) - offsite backup unit at a friend's house for replication. He does the same with me; his DS418j is pictured
  • 2x external USB HD for local synology backups
  • Cyberpower 1500 VA UPS

My server specs are:

  • Ryzen 5900 12 Core CPU
  • 64GB Ram
  • 1TB NVME storage
  • nVidia A2000 6GB
  • Intel X550-T2 10Gb NIC

My Server is running:

  • ESXi 8
  • 1 VM for plex with nVidia A2000 GPU passed through
  • 1 VM for minecraft server for the kids
  • 1 VM Running Docker containers
    • PiHole
    • Portainer
    • Homebridge
    • Uptime Kuma
  • 1 VM configured as a domain controller for AD testing

Other components in the rack:

  • ISP Fiber Jack
  • Raspberry Pi 3B+ - Running Pihole as a secondary DNS to the docker container in case I need to power the VM down or perform other maintenance

Future Expansion:

  • Unifi PoE cameras on the exterior of the house
  • Second UPS to load balance and increase the amount of backup time
  • A second server node (I may also switch to proxmox)

(@kr1ps)

Hi all,

Thanks for the giveaway, and here are the details of my homelab/playground:

  1. PowerEdge R630 (2x E5-2690 v3, 24 cores; 128GB RAM; 2TB NVMe; 2TB SSD; 10Gb network)
    1. VMware ESXi as the hypervisor, with vCenter on top for management.
    2. home media ecosystem
      1. plex
      2. overseer
      3. radarr
      4. etc...
    3. k8s stuff
      1. rancher
      2. argocd
      3. hashicorp vault
      4. personal website (kr1ps.com) and my wife's, both in WordPress
      5. prometheus
      6. grafana
      7. plantuml
      8. etc...
    4. miscellaneous stuff
      1. immich
      2. kasm
      3. portainer
      4. traefik
      5. etc...
  2. AI rig (AMD Ryzen Threadripper 3970X; 128GB RAM; 512GB NVMe; 2x RTX 3090 24GB + 1x RTX 3050 8GB)
    1. ollama
    2. openwebui
    3. perplexica
    4. morphic
  3. Neo Z83-4 Pro (Home Assistant)
  4. Network
    1. Unifi Cloud Key Gen2 Plus
    2. EdgeSwitch 24 Lite (1Gb)
    3. EdgeSwitch 16 XG (10Gb)
    4. MikroTik router RB750GR3


Sorry for the cable mess, but everything is secure.

Brandon Lee (@brandon-lee), Admin

@kr1ps very nice. Humble setups are just as good. We all start somewhere and labs can grow and shrink as needed.

(@jackalltrades)

My humble setup. This lab started with a Pi 5 running OpenMediaVault attached to an external drive case over USB. I bought the Synology as what felt at the time like a great upgrade to move my workloads to. It didn't take long to realize I didn't like the way Synology hard-locks CPU for its VMs. I looked hard at the N100 platform, since low power draw has become part of the mission with this lab, but the price-to-power ratio just made the R7 too sweet to pass up. To get a machine strong enough to run these virtual loads reliably for less than 300 bucks... what a time to be a nerd!!!! I might still go down the N100 route for clustering, as high availability is the next logical step in my learning journey.

Hardware:

- Vertiv UPS (don't buy if you want NUT support) ((yes, I just wanted an excuse to write NUT support)) (((twice, apparently)))

- Synology DS718+ with two 12 TB enterprise HDDs (less than 100 each on Amazon), 32 GB of ECC RAM, and one (for now) 1 TB NVMe for all workloads to run on (Surveillance Station, Syncthing, and a couple of other Docker containers)

- GMKtec M5 Pro with a Ryzen 7 5700U, 32 GB of non-ECC RAM, and a 1 TB NVMe. This box runs Proxmox with all the usual suspects (Pi-hole, OPNsense, Docker, OMV, HAOS, AzerothCore, etc.). I still don't have GPU passthrough worked out on this machine, though.

- Pi 400 as a thin client. One of the coolest things to me about virtualization is that I can now give every kid their own computer. No more dealing with... well, any of the things that come with 4 people sharing a single PC. Following apalrd's guides, I have set up a VDI client to manage access to these VMs. With that, this Pi 400 is the perfect interface for attaching to them.

- MSI Trident (9th gen, maybe?) with an i5-9400F CPU. This is my main PC. I entertained virtualization on this PC as well, but of course the BIOS doesn't support IOMMU groups, which would defeat the purpose... so it stays a Windows machine, for now.

Network:

- Deco mesh as the main router (for now), serving DHCP to wireless clients and the clients that I haven't moved behind VLANs yet (I started this transition last week)

- Netgear 8-port managed PoE switch with a LAG uplink to the server closet

- TP-Link 8-port managed switch in the server closet connecting the mini PC and the Synology

- I was originally running Deco mesh units to link the "server closet" to the rest of the network. This was... functional, but not ideal. Honestly, it worked basically fine, but I wanted better. We also rent, so hard wiring wasn't exactly straightforward. Luckily, I have attic access, and there was already a hole in the wall for cable in one room and an old security system in the closet. A couple of hours crawling around in an attic and we're in business.

ADVICE FOR FUTURE READERS:

If you are going through the arduous journey of running Ethernet cable between two difficult locations, run extra lines. In my case, I wanted two lines running between my server closet and my workstation; I pulled 4 lines total. Two are terminated with ends for now (the next phase of cleanup will include punching down wall terminals) and two are spares on standby. I would say even if you are wiring up a new construction, it's always cheaper to run an extra wire now, even if it doubles your cable budget, than to add a wire later that you forgot, grew into, or need to replace.
