homelab

1
 
 

Hi all! I had posted something similar in selfhosted, but it was deleted because it was purely hardware-related, so here's a new attempt in the correct sub.

I want to reorganize my home server landscape a bit. A Proxmox server is to get an LXC with Ollama and Open WebUI. It will be used by other containers that categorize things via AI (paperless-ai, Hoarder, actualbudget-ai, maybe Home Assistant speech) and also for the occasional chat. Speed is not that important to me; the focus is on low idle power and models up to 100 GB. It's OK to wait several minutes for answers. I don't want a GPU.

(I currently run models up to 32B successfully on a Lenovo M920x with an i7-8700 and 64 GB RAM. On the new box those models should run faster, and models up to ~100B should at least run, even if slowly.)

I want to spend €2000, or €3000 if necessary (mainboard, CPU, RAM).

My research showed that the bottleneck is always the CPU's memory bandwidth. I would go with 128 GB RAM and want to populate all channels.

Current variants:

  • Intel i9-10940X, 14C/28T, 3.30-4.80 GHz, quad-channel DDR4, 93.9 GB/s
  • Intel i9-14900K, 8C+16c/32T, 3.20-6.00 GHz, dual-channel DDR5, 89.6 GB/s
  • Intel Core Ultra 9 285K, 8C+16c/24T, 3.70-5.70 GHz, dual-channel DDR5, 89.6 GB/s (CU-DIMM: 102.4 GB/s)
  • Intel Xeon Silver 4510, 12C/24T, 2.40-4.10 GHz, 8-channel DDR5, 281.6 GB/s (meaning you need 8 sticks!)
  • AMD Ryzen 9 9950X, 16C/32T, 4.30-5.70 GHz, dual-channel DDR5, 89.6 GB/s

These would all be in the €1500-2500 range.
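As a rough sanity check on what those bandwidth figures mean: with CPU-only inference every active weight has to be streamed from RAM once per generated token, so throughput is capped at roughly bandwidth divided by model size. My back-of-the-envelope numbers (estimates, not benchmarks; the ~20 GB figure assumes a 4-bit 32B quant):

  # upper bound in tokens/s ~= memory bandwidth (GB/s) / model size in RAM (GB)
  echo "scale=2; 89.6 / 100" | bc     # dual-channel DDR5, 100 GB model      -> ~0.9 tok/s
  echo "scale=2; 281.6 / 100" | bc    # 8-channel Xeon Silver, 100 GB model  -> ~2.8 tok/s
  echo "scale=2; 93.9 / 20" | bc      # quad-channel DDR4, ~20 GB 32B quant  -> ~4.7 tok/s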

Do you have any concrete experience? At the moment I prefer the Ultra 9; it's the latest platform and expandable. I lean towards Intel because it is more energy-efficient at idle.

The Xeons have the disadvantage of many channels, all of which have to be populated with sticks to reach the full rate (= more energy, more cost).

2
submitted 6 days ago* (last edited 5 days ago) by sommerset@thelemmy.club to c/homelab@lemmy.ml
 
 

When will AMD release a Siena CPU refresh?

I like the 80 W TDP EPYC parts, but it's been a while since they were released.

3
 
 

I have a T630 that has started powering off after a random amount of time, usually less than 12 hours. When it powers off, the backlight of the front panel LCD goes off, as do all lights on the case, and iDRAC also stops working, so it looks like there's a problem somewhere in the power delivery. Dell support seem to have run out of ideas, presumably because they don't want to suggest that I replace parts, knowing I'm not going to pay for Dell support.

I suspect it could be a faulty power backplane board (J14R7 / 0J14R7); how would I test for that?

4
 
 

Hello there,

I've been running a little army of Raspberry Pis and Libre Computer Le Potatoes for many years now.

Some died of overheating; one died because the microSD card failed so badly that some kind of electrical fault took out the whole Pi.

I'm looking at the current trend of replacing all that with a single mini PC or a two-node cluster of them.

The point is I still want to use as little electricity as possible, so a low-TDP CPU (<10-15 W) is my most important criterion, followed by 2 disk bays (I don't care about the form factor or connector).

Reading buyer comments on Amazon suggests that cheap Chinese mini PCs have their SSDs, motherboards, or power supplies dying quickly, sometimes within months, not even a year.

Could you please recommend a low-power mini PC? It can be Chinese, but from a reputable brand (which I've failed to determine on my own).

5
 
 

I'm looking to upgrade some of my internal systems to 10 gigabit, and seeing some patchy/conflicting/outdated info. Does anyone have any experience with local fiber? This would be entirely isolated to within my LAN, to enable faster access to my fileserver.

Current existing hardware:

  • MikroTik CSS326-24G-2S+RM, featuring 2 SFP+ ports capable of 10GbE
  • File server with a consumer-grade desktop PC motherboard. I have multiple options for this one going forward, but all will have at least 1 open PCIe x4+ slot
  • This file server already has an LSI SAS x8 card connected to an external DAS
  • Additional consumer-grade desktop PC, also featuring an open PCIe x4 slot.
  • Physical access to run a fiber cable through the ceiling/walls

My primary goal is to have these connected as fast as possible to each other, while also allowing access to the rest of the LAN. I'm reluctant to use Cat6a (which is what these are currently using) due to reports of excessive heat and instability from the SFP+ modules.

As such, I'm willing to run some fiber cables. Here is my current plan, mostly sourced from FS:

  • 2x Supermicro AOC-STGN-i2S / AOC-STGN-i1S (sourced from eBay)
  • 2x Intel E10GSFPSR Compatible 10GBASE-SR SFP+ 850nm 300m DOM Duplex LC/UPC MMF Optical Transceiver Module (FS P/N: SFP-10GSR-85 for the NIC side)
  • 2x Ubiquiti UF-MM-10G Compatible 10GBASE-SR SFP+ 850nm 300m DOM Duplex LC/UPC MMF Optical Transceiver Module (FS P/N: SFP-10GSR-85, for the switch side)
  • 2x 15m (49ft) Fiber Patch Cable, LC UPC to LC UPC, Duplex, 2 Fibers, Multimode (OM4), Riser (OFNR), 2.0mm, Tight-Buffered, Aqua (FS P/N: OM4LCDX)

I know the cards are x8, but it seems that's only needed to max out both ports. I will only be using one port on each card.

Are fiber keystone jacks/couplers (FS P/N: KJ-OM4LCDX) a bad idea?

Am I missing something completely? Are these even compatible with each other? I chose the Ubiquiti-coded module for the switch SFP+ side since MikroTik doesn't vendor-lock, AFAICT.
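Once the cards are in, my plan for sanity-checking the link from the file server is simply the following (interface name is just a placeholder):

  ethtool enp3s0f0      # should show "Speed: 10000Mb/s" and "Link detected: yes"
  ethtool -m enp3s0f0   # DOM readout from the SFP+ module: TX/RX power, temperature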

Location: US

6
 
 

I'm currently planning to build a low power nas for my upcoming minirack (10").

It's going to store daily Proxmox VM disk snapshots, some image files, and some backups from my laptop, all via NFS. Plus some more in the future, but generally it's going to idle 95% of the day. I haven't decided on the OS yet; probably TrueNAS Core or OMV.

I already have an Olmaster 5.25" JBOD into which I'll put 3 x 2.5" 2 TB SSDs via SATA. The JBOD needs a single Molex connector to power all SSDs, so I need at least 3 SATA ports plus boot.

Some research led me to this post, and I tend towards a similar build with a J4105-ITX (cheaper, probably slightly lower power consumption, enough CPU for a NAS).

These boards are officially limited to 8 GB RAM but seem to work fine with more as long as you don't update the BIOS, which is not optimal but acceptable if everything else works. I'd like 16 GB for efficient ZFS, but I guess even 8 GB would do if the box isn't doing much else (roughly 2 GB base + ~1 GB per TB of storage + OS, so about 8 GB for my 6 TB of raw SSD) - just don't tell the TrueNAS forum users.

While I don't plan on 10G Ethernet now, the PCIe slot should leave that possibility open.

I read good things about PicoPSUs, but that depends on which case I get, as cases usually come with a PSU already.

The case question remains open - I tend towards something like the LC-1350MI-V2 as it's cheap, includes a 72 W PSU, and fits nicely into the 10" rack. In that case I would need to route the SATA cables out of the case and rack the JBOD on its own, which is fine since there are printable files for exactly that. The other options would be a case with bays for the 2.5" drives (seems unnecessary since I already have the JBOD and don't want to add more load on the PSU) or a case with a 5.25" bay (rare in cases this size).

I'm mostly asking for advice regarding the case/PSU thing but nothing is set in stone other than the SSD/JBOD combo. I'd like to keep the rest < 150€ and prefer used hardware, at least for the case. I'd be glad for your thoughts and ideas!

7
 
 

I want to establish a second LAN at home. It's supposed to host different services on different infrastructure (vms, k8s, docker) and mostly serving as a lab.

I want to separate this from the default ISP router LAN (192.168.x.0/24).

I have a Proxmox machine with 2 NICs (eno1 plugged into the ISP router, plus eno2), each with a corresponding bridge. I already set up the eno2 bridge with a 10.x.x.x IP and installed an OPNsense VM that has eno1 as the WAN interface in the 192 network and eno2 as the LAN interface in the 10 network, with a DHCP server.

I connected a laptop (no Wi-Fi) to eno2, got a DHCP lease, and can reach the OPNsense interface, machines in the 192 network, and the internet; the same goes for a VM on the eno2 bridge, so that part is working. There's a Pi-hole in the 192 network that I successfully set as the DNS server in OPNsense.

Here's what I am trying to achieve and where I'm not sure about how to properly do it:

  • Block access from the 10 network to the 192 network except for specific devices - I guess that's simply firewall rules
  • Make services (by port) in the 10 network accessible from the internet. I currently have a reverse proxy VM in the 192 network which gets 80 and 443 forwarded by the ISP router. Do I need to add a second NIC to that VM, or can I route some services through the firewall? I want to lock that VM down so it can't open outgoing connections except to specific ports on specific hosts.
  • Make devices in the 10 network reachable from devices in the 192 network - here I'm not quite sure. Do I need to add a static route? (See the sketch after this list.)
  • Eventually I want to move all non-end-user devices to the new LAN so I can experiment without harming the family network, but I want to make sure I understand it properly before doing that
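For the static-route point, here's my current understanding as a sketch (addresses are placeholders, not tested yet): the 192 devices need to know that the 10 network sits behind OPNsense's WAN-side address, either via a route on the ISP router or set per device, plus a matching OPNsense rule.

  # on a Linux client in the 192 network, assuming OPNsense's WAN got 192.168.1.50
  # and the lab LAN is 10.0.0.0/24
  ip route add 10.0.0.0/24 via 192.168.1.50
  # also needed: an OPNsense WAN firewall rule allowing 192.168.x.0/24 -> 10.0.0.0/24,
  # and "Block private networks" disabled on the WAN interface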

I'd be glad for any hints on this; I'm a bit confused by the nomenclature here. If you have other ideas on how to approach this, I'm open to that too.

8
EliteDesk 800 G6 SFF setup (lemmy.dbzer0.com)
submitted 3 weeks ago* (last edited 3 weeks ago) by krazzyk@lemmy.dbzer0.com to c/homelab@lemmy.ml
 
 

Hi guys,

Just picked myself up an EliteDesk 800 G6 SFF:

Current specs:

  • CPU: i5-10500
  • RAM: 8 GB
  • NVMe SSD: 256 GB

My plan is to beef this up with:

  • RAM: Crucial Pro DDR4 RAM 64GB Kit (2x32GB) 3200MHz

  • HDD: 4TB ironwolf NAS drives * 2

  • NVME SSD: Samsung 970 EVO Plus 1 TB PCIe NVMe M.2

How I'm planning my setup:

The existing 256 GB NVMe will host Proxmox.

The new 1 TB NVMe will be for VMs & LXCs.

The 4 TB IronWolf NAS drives will be configured in a mirror and used as a NAS (what's the best way to do this?) as well as for bulk data from my services, like recordings from Frigate.
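My current thinking, in case it helps frame the question: a ZFS mirror created on the Proxmox host and then shared out from a container or VM. Pool/dataset names and disk IDs below are just placeholders.

  # find stable device names first: ls -l /dev/disk/by-id/
  zpool create -o ashift=12 tank mirror \
    /dev/disk/by-id/ata-ST4000VN006-AAAAAAAA /dev/disk/by-id/ata-ST4000VN006-BBBBBBBB
  zfs create -o compression=lz4 tank/media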

Services:

  • Home Assistant (currently running on a Pi 4)

  • Frigate (currently running on a Pi 4)

  • Pi-hole (maybe; already running on an OG Pi)

  • Nextcloud (calendar, photos)

  • Tailscale

  • Vaultwarden

  • Windows 11

My follow on projects will be:

Set up PBS to back up my host (proxmox-backup-client), VMs & LXCs.
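For the host part I'm assuming it's just the standard client invocation, something like the following (PBS address and datastore name are placeholders):

  proxmox-backup-client backup root.pxar:/ --repository root@pam@192.168.1.20:backups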

I have a Raspberry Pi 4 that I was thinking of using for PBS in the short term, but I'll eventually move it to something like an N100 mini PC.

I will also set up a second NAS (TrueNAS most likely, bare metal) to back up the 4 TB IronWolf mirror.

This is my first proper homelab, having mostly tinkered with Raspberry Pis and Arduinos up to this point, so any advice on my setup would be really appreciated.

9
 
 

cross-posted from: https://slrpnk.net/post/17736356

Hi there good folks!

I am going to be upgrading my server within the next couple of months and am trying to do some prior planning. My current setup is as follows:

  • Case: Fractal Define R5
  • Motherboard: Gigabyte Z170X-Designare-CF
  • CPU: i7-6700K CPU @ 4.00GHz
  • Memory: 32 GiB DDR4
  • Storage: 15 TB spread across 4 HDDs (10+2+2+1 TB) + 1 HDD at 10 TB for parity.
  • OS: Unraid 🧡

While this setup has served me well, I am completely hooked on these mini-racks (Rackmate T1) and am thinking of getting one eventually. Fortunately I'll be getting my hands on my first mini PC soon, an ASUS ExpertCenter PN52. This little bad boy has the following specs:

  • CPU: AMD Ryzen™ 9 5900HX
  • Memory: 32 GiB DDR4
  • Storage: Comes with one NVMe SSD 1TB

From my limited CPU knowledge this one is superior in almost every way, so it feels like an easy choice to switch out the old one. I need an enclosure for my 5 HDDs that connects to this mini PC. This leads me to my questions:

  1. What are your suggestions for enclosures?
  2. What's the best way to connect an enclosure like this to the mini PC?

Any pointers, opinions and suggestions appreciated!

Edit: I'm getting the mini PC for free actually, so it feels like a no-brainer to upgrade.

Pictures of the mini-pc for those interested:

Ports overview

Front

Easily configurable

10
 
 

I recently generated a self-signed cert to use with NGINX via its GUI.

  1. Generate cert and key
  2. Upload these via the GUI
  3. Apply to each Proxy Host
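For reference, step 1 was just a plain OpenSSL self-signed cert along these lines (the hostname is specific to my setup):

  openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
    -keyout jellyfin.home.key -out jellyfin.home.crt \
    -subj "/CN=jellyfin.home" -addext "subjectAltName=DNS:jellyfin.home"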

Now when I visit my internal sites (e.g. jellyfin.home) I get a warning (because this cert is not signed by a trusted CA), but the connection is HTTPS.

My question is: does this mean that my connection is fully encrypted from my client (e.g. my laptop) to my server hosting Jellyfin? I understand that when I go to jellyfin.home, my Pi-hole resolves this to NGINX, then NGINX completes the connection to the IP:port it has configured and uses the cert it has assigned to this proxy host, but the Jellyfin server itself does not have any certs installed on it.

11
 
 

I was looking to see what would happen on the 3rd floor with a ceiling-mounted AP on the 2nd floor. Being new to UniFi, I keep being delightfully surprised by how much useful tooling & info there is.

Here it is for the U6+:

radiation patterns

12
 
 

I bought a Grandstream GWN7711P switch a while ago, but I have found a rather annoying problem.

When the switch does not have an internet connection it is spamming "router.gwn.cloud" every 2-5 seconds and filling my firewall logs (360+ times in 35 min).

Does anyone know how to disable the cloud connection?

13
 
 

Hi everyone, basically what the title says. I am just starting my homelab and I am somewhat conflicted about whether I should run OPNsense in Proxmox or buy a dedicated N100 device for it. What are some of the pros and cons of doing either? So far in my research I have only come across articles/forum posts explaining how to run OPNsense in Proxmox.

14
submitted 1 month ago* (last edited 1 month ago) by root@lemmy.world to c/homelab@lemmy.ml
 
 

I recently set up SearXNG to take the place of Whoogle (since Google broke it by disabling JS-free query results). I am following the same steps I've always used to add a new default search engine.

Navigate to the address bar, right-click, "Add SearXNG", then go into settings and make it my default. After doing this, rather than using the local IP the instance is running at, Firefox uses https://localhost/search for some reason. I don't see a way to edit this in Firefox's settings. Has anyone else experienced this?

Update: After updating the .env file with my IP address and bringing Docker down/up, everything is working as expected (I'm able to use SearXNG via Caddy using the https:// address).
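For anyone else hitting this with the searxng-docker stack, the change was roughly the following (variable name from that stack's .env; the IP is just my example):

  # .env next to docker-compose.yaml
  SEARXNG_HOSTNAME=192.168.1.42

  # then recreate the containers
  docker compose down && docker compose up -d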

15
 
 

For years, I have been using Whoogle for my self-hosted searches. It's been great, but recently there were some upstream changes that seem to have broken it.

I'm guessing that SearXNG will soon follow (based on the assumption that it too relies on the JS-free results Google used to provide).

Does anyone have any self-hosted search options that still work? I hear Kagi is good as a paid, non-self-hosted option, but I'm just curious what you all are using.

16
 
 

I'm looking to replace my 2-bay DS214play after 10 years of service and would like recommendations on which direction to go. My main reason for retiring the NAS is that the OS will see no further updates from Synology, and not much will run on the i386 architecture.

I run truenas + docker on a NUC-like HM90 mini-pc which is attached to the NAS for storage and this has been working well for the past ~2 years.

I figure that my options are to either continue using the mini-pc with a form of "dumb" network storage, or replace both systems with something that can handle both workloads.

I've considered building my own SFF PC instead of buying a new NAS (as this would have better upgrade paths), but I haven't been able to find anything with space for HDD which will also fit in the 10" cabinet that both of the above systems currently share.

The new NAS lineup from UGREEN (DXP2800/DXP4800) looks like a reasonable option, but I'm wondering if there are other options I should consider instead, as these models will only barely fit on the cabinet shelf (250 H x 210 W x 250 D mm).

17
 
 

cross-posted from: https://lemmy.world/post/24140532

Hi everyone!

I’m planning to repurpose an old computer case to house a few Raspberry Pis and could really use some advice.

First, I’m trying to figure out the best way to mount the Raspberry Pis inside the case. Are there any good DIY solutions, or are there premade mounts designed for this kind of project? I want them to be secure but accessible if I need to make changes.

Next, I’d like to power all the Pis centrally. Is there a way to do this efficiently without using separate power adapters for each Pi? I’ve heard of some solutions involving custom power supplies, but I’m not sure where to start.

I’m also thinking about cooling. Would the case’s old fans be sufficient, or should I add heatsinks or other cooling methods to each Pi? I want to make sure everything stays cool, especially if the Pis are running intensive tasks.

Finally, what’s the best way to handle I/O? I’ll need to route HDMI, USB, Ethernet, and other connections out of the case. Are there panel kits or other ways to organize the cables neatly?

I’d love to hear your suggestions or see examples if you’ve done something similar. Thanks in advance for your help!

18
 
 

So I currently have an Asus RT-AC86U that is working fine, but bogging down under load, and also is EOL.

We've got three people and about 15 devices, give or take. Our internet service is currently 300Mb cable.

The AX88U Pro is currently on a very good sale - $220CDN. I figure my options are that, the BE86U at $370, or the BE88U at $500.

Five hundred bucks is out of my justifiable price range. Spending less (a lot!) on the AX router would be nice, but the longevity (and support lifespan) of the BE86 has some appeal too.

I'm also not married to Asus, although they've been consistently excellent for me.

What do y'all think? Any educated guesses on when Asus is going to EOL the AX lineup?

19
submitted 1 month ago* (last edited 1 month ago) by root@lemmy.world to c/homelab@lemmy.ml
 
 

My Jellyfin VM has been failing its nightly backups for some time now (maybe a week or so).

I'm currently backing up to a NAS that has plenty of available space and my other 10 VMs are backing up without issues (though they are a bit smaller than this one).

I am backing up with the ZSTD compression option and the Snapshot mode.

The error is as follows:

INFO: include disk 'scsi0' 'Proxbox-Local:vm-110-disk-0' 128G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating vzdump archive '/mnt/pve/Proxbox-NAS/dump/vzdump-qemu-110-2025_01_04-03_29_45.vma.zst'
INFO: started backup task '4be73187-d25c-49cf-aed2-1217fba27f77'
INFO: resuming VM again
INFO:   0% (866.4 MiB of 128.0 GiB) in 3s, read: 288.8 MiB/s, write: 268.0 MiB/s
INFO:   1% (1.5 GiB of 128.0 GiB) in 6s, read: 221.1 MiB/s, write: 216.0 MiB/s
INFO:   2% (2.6 GiB of 128.0 GiB) in 15s, read: 130.5 MiB/s, write: 126.4 MiB/s
INFO:   3% (3.9 GiB of 128.0 GiB) in 25s, read: 128.9 MiB/s, write: 127.5 MiB/s
ERROR: job failed with err -5 - Input/output error
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 110 failed - job failed with err -5 - Input/output error
INFO: Failed at 2025-01-04 03:30:17

Anyone experienced this or have any suggestions as to resolving it?
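In case it's relevant, the only generic checks I know of for narrowing down an err -5 on the node would be something like the following - happy to hear better ideas:

  pvesm status                          # is the Proxbox-NAS storage still listed as active?
  dmesg -T | grep -iE 'i/o error|nfs'   # any kernel-level I/O or NFS errors on the node?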

Update: After rebooting the Proxmox node (not just the VM) my backups are now working again. Thanks all for the input!

20
 
 

I am mainly hosting Jellyfin, Nextcloud, and Audiobookshelf. The files for these services are currently stored on a 2 TB HDD, and I don't want to lose them in case of a drive failure. I bought two 12 TB HDDs because 2 TB got tight and I thought I could add redundancy to my system to prevent data loss from a drive failure. I thought I would go with a RAID 1 mirror (or another form of RAID?), but everyone on the internet says that RAID is not a backup. I am not sure if I need a backup; I just want to avoid losing my files when a disk fails.

How should I proceed? Should I use RAID 1, or rsync the files every, let's say, week? I don't want another machine, so I would hook up the rsync target drive to the same machine as the rsync source drive! Rsyncing the files seems very cumbersome (also when using a cron job).
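For the rsync route, what I have in mind is nothing more than a weekly cron entry along these lines (paths are placeholders for wherever the two drives end up mounted):

  # /etc/cron.d/weekly-backup - every Sunday at 03:00
  0 3 * * 0  root  rsync -a --delete /mnt/data/ /mnt/backup/data/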

21
 
 

Found these guides after having to reprogram my H310 Mini's EEPROM after bricking it with another guide. Can't speak for the other guides, but the PERC H310 MINI guide worked like a charm.

22
 
 

I have a couple rules in place to allow traffic in from specific IPs. Right after these rules I have rules to block everything else, as this firewall is an "allow by default" type.

The problem I'm facing is that when I change these two rules to match "Any" port instead, those machines (a Matrix server and a game server) are unable to perform apt-gets.

I had thought that this should still be allowed, because the egress rules for those two permit outbound traffic to HTTP/S, and once that's established it's a "stateful" connection, which should allow the traffic to flow back the other way.

What am I doing wrong here, and what is the best way to ensure that traffic only hits these servers from the minimal number of ports?
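For context, the behaviour I expected matches the usual stateful pattern - here's an nftables-style sketch of what I assumed was happening (not my actual ruleset; the address and port are made up):

  # allow replies to connections the servers opened themselves (apt-get, etc.)
  ct state established,related accept
  # allow new inbound connections only from specific peers/ports
  ip saddr 203.0.113.10 tcp dport 8448 accept
  # everything else inbound gets dropped
  drop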

23
 
 

I'm currently running a Xeon E3-1231v3. It's getting long in the tooth, supports only 32GB RAM, and has only 16 PCIe lanes. I've been butting up against the platform limitations for a couple of years now, and I'm ready to upgrade. I've been running this system for ~10yrs now.

I'm hoping to future proof the next system to also last 8-10 years (where reasonable, considering advancements in tech and improvements in efficiency), but I'm hitting a wall finding CPU candidates.

In a perfect world, I'd like an Intel with iGPU for QuickSync (HWaccel for Frigate/Immich/Jellyfin), AND I would like the 40+ PCIe lanes that the Intel Xeon Scalable CPUs offer.

With only my minimum required PCIe devices I've surpassed the 20 lanes available on desktop CPUs with an iGPU:

  • Dual m.2 for Proxmox ZFS mirror (guest storage) - in addition to boot drive (8 lanes)
  • LSI HBA (8 lanes)
  • Dual SFP+ NIC (8 lanes)

Future proofing:

High priority

  • Dedicated GPU (16 lanes)

Low priority

  • Additional dual m.2 expansion (8 lanes)
  • USB expansions for simplified device passthrough (Coral TPU, Zigbee/Z-Wave for Home Assistant, etc.) (4 lanes per card) - this assumes the motherboard comes with at least 4 ports
  • Coral TPU PCIe (4 lanes?)

Is there anything that fulfills both requirements? Am I being unreasonable or overthinking it? Is there a solution that adds GPU hardware acceleration to the Xeon Silver line without significantly increasing power draw?

Thanks!

24
 
 

I currently have an HP MicroServer Gen8 running Xpenology with hybrid RAID, which works fairly well, but I'm 2 major versions behind. I'm quite happy with it, but I'd like an easier upgrade process and more options. My main use is NAS plus a couple of apps. I'd like more flexibility, e.g. to easily run an arr suite, etc.

Considering the hassle of safely upgrading Xpenology because of the hybrid RAID (4+4+2+2 TB HDDs), I'd like a setup that I can easily upgrade and modify.

What are my options here? What RAID options are there that can easily and efficiently use these mixed-size disks?

I don't have the spare money right now to replace the 2 TB disks; that's planned for the future.

25
submitted 2 months ago* (last edited 2 months ago) by jet@hackertalks.com to c/homelab@lemmy.ml
 
 

Hyper-V has GPU paravirtualization.

But for QEMU/KVM/Xen it seems like the best option is to pass through a GPU to a single VM, unless the GPU supports SR-IOV, which almost none of the retail cards do.
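(By single-VM passthrough I mean the usual VFIO route, e.g. on Proxmox something like the following - the VM ID and PCI address are placeholders, and pcie=1 assumes a q35 machine type:)

  qm set 100 -hostpci0 01:00,pcie=1,x-vga=1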

I heard about the Wolf and Games on Whales projects, and they seem to get around this by using only containers for the subdivision.

What methods or options have you used to share a GPU with your VMs?
