Selfhosted

42670 readers
629 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues in the community? Report them using the report flag.

Questions? DM the mods!

founded 2 years ago
MODERATORS
1

First, a hardware question. I'm looking for a computer to use as a... router? Louis calls it a router, but it's a computer that sits upstream of my whole network and has two Ethernet ports. Any suggestions on this? Ideal amount of RAM? Ideal processor/speed? I have fiber internet, 10 Gbps up and 10 Gbps down, so I'm willing to spend a little more on higher-bandwidth components. I'm assuming I won't need a GPU.

Anyways, has anyone had a chance to look at his guide? It's accompanied by two YouTube videos that are about 7 hours each.

I don't expect to do everything in his guide. I'd like to be able to VPN into my home network and SSH into some of my projects, use Immich, check out Plex or similar, and set up a NAS. Maybe other stuff after that but those are my main interests.
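On the VPN point, in case it helps to see the shape of it: a minimal WireGuard server config sketch. Every key, address, and the port below is a placeholder; the router forwards UDP 51820 to this box.

```ini
# /etc/wireguard/wg0.conf on the home server (placeholders throughout)
[Interface]
Address    = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# one [Peer] block per device that dials in
PublicKey  = <laptop-public-key>
AllowedIPs = 10.8.0.2/32
```

Brought up with `wg-quick up wg0`; each client sets Endpoint to your public address and AllowedIPs = 10.8.0.0/24 (or 0.0.0.0/0 to tunnel everything through home).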

Any advice/links for a beginner are more than welcome.

Edit: thanks for all the info, lots of good stuff here. OpenWrt seems to be the most frequently recommended option, so I'm looking into that now. Unfortunately my current router/AP (Asus AX6600) isn't supported. I was hoping not to have to replace it; it was kinda pricey, and I got it when I upgraded to fiber since it can do 6.6 Gbps. I'm currently looking into devices I can put upstream of my current hardware, but I might have to bite the bullet and replace it.

Edit 2: This is looking pretty good right now.

2

Hello everyone! Mods here 😊

Tell us, what services do you selfhost? Extra points for selfhosted hardware infrastructure.

Feel free to take it as a chance to present yourself to the community!

🦎

3
6
submitted 1 hour ago* (last edited 1 hour ago) by Lem453@lemmy.ca to c/selfhosted@lemmy.world

I'm trying to set up ownCloud with single sign-on using Authentik. I have it working for normal users. There is a feature that allows automatic role assignment, so that admin users in Authentik become admin users in ownCloud.

This is described here: https://doc.owncloud.com/ocis/next/deployment/services/s-list/proxy.html#automatic-role-assignments.

In this document, they describe having attributes like

- role_name: admin
  claim_value: ocisAdmin

The problem I have is that I don't know how to enter this information on an Authentik user. As a result, ownCloud gives me this error:

ERR Error mapping role names to role ids error="no roles in user claims" line=github.com/owncloud/ocis/v2/services/proxy/pkg/userroles/oidcroles.go:84 request-id=5a6d0e69-ad1b-4479-b2d9-30d4b4afb8f2 service=proxy userid=05b283cd-606c-424f-ae67-5d0016f2152c

Any authentik experts out there?

I tried putting this under the attributes section of the user profile in Authentik:

role_name: admin
claim_value: ocisAdmin

It doesn't work, and Authentik won't let me format the YAML like the documentation, where the claim_value is a child of the role_name.
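For what it's worth, the "no roles in user claims" error suggests ocis is looking for the roles as values of an OIDC claim in the token, not as raw user attributes. The usual route in Authentik for that is a custom scope/property mapping that emits the claim. A sketch of such a mapping expression; the claim name "roles" and the superuser check are assumptions to verify against your setup:

```python
# Authentik scope-mapping expression (a fragment typed into Authentik's UI,
# not a standalone script; Authentik injects `request` at evaluation time).
# Emits a "roles" claim whose values ocis can map to its internal roles.
return {
    "roles": ["ocisAdmin"] if request.user.is_superuser else ["ocisUser"],
}
```

The mapping then has to be added to the OAuth2/OIDC provider's scopes in Authentik, otherwise the claim never makes it into the token ocis sees.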

4

AFAIK every NAS just uses unauthenticated connections to pull containers; I'm not sure how many even let you log in (which raises the limit to a whopping 40 pulls per hour).

So hopefully systems like Unraid handle the throttling gracefully when you click "update all".

Anyone have ideas on how to set up a local Docker Hub proxy to keep the most common containers on-site instead of hitting Docker Hub every time?
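One common answer is the official registry image running in pull-through-cache mode, with every Docker daemon on the LAN pointed at it. A sketch; the hostname, port, and paths are assumptions:

```shell
# 1) On one always-on machine, run Docker's registry image as a Hub mirror
#    (shown as a comment since it needs a running Docker daemon):
#
#    docker run -d --name hub-mirror -p 5000:5000 \
#      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
#      -v /srv/registry:/var/lib/registry \
#      registry:2

# 2) On every client, tell dockerd to try the mirror first; it falls back
#    to Docker Hub automatically if the mirror is down.
cat > daemon.json <<'EOF'
{
  "registry-mirrors": ["http://hub-mirror.lan:5000"]
}
EOF
# (this content goes in /etc/docker/daemon.json, then: systemctl restart docker)
```

Note this only mirrors Docker Hub; images from ghcr.io, quay.io, etc. would each need their own proxy instance.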

5
6
15
Intel GVT-g - ArchWiki (wiki.archlinux.org)
submitted 1 day ago* (last edited 1 day ago) by possiblylinux127@lemmy.zip to c/selfhosted@lemmy.world

This can be used to create virtual GPUs that you pass through to guests. It's applicable to pretty much any Linux system, including Proxmox. I do wish it supported newer hardware.

7

I've added the only actions I took in the comments.

8
12
submitted 1 day ago* (last edited 1 day ago) by null_dot@lemmy.dbzer0.com to c/selfhosted@lemmy.world

Edit: never mind. Turns out my email host is already running SpamAssassin and I can configure it how I wish.

My email is hosted at MXroute. I'm happy with their pricing and service and don't want to self-host my email. However, their spam management isn't great.

I just realised that it might be possible to run SpamAssassin myself, which would set spam headers on the emails that my email client (Thunderbird) can then use to decide what to do.

There seem to be a bunch of poorly maintained / abandoned ways to do this. I thought I'd ask here just in case anyone else is doing this and can help me skip to the end.

I was hoping for a Docker container (or compose stack) that provides an IMAP proxy and runs SpamAssassin.

Any ideas and insights welcome. My email juggling could use some improvement.

9

I just spent 2 hours trying to figure out why fail2ban didn't increment the ban count.

--- a/fail2ban/etc/fail2ban/jail.local
+++ b/fail2ban/etc/fail2ban/jail.local
@@ -1,6 +1,6 @@
 [DEFAULT]

-bantime.incremet     = true
+bantime.increment    = true
 bantime.rndtime      =
 bantime.maxtime      =
 bantime.factor       = 1

After I found that I seriously considered becoming a goose farmer.

10

I've tried GetHomepage, and while I've configured most of it, I've had a few troubles because the instructions are incomplete and confusing.

The one problem that eluded me was setting up the paperlessngx widget. Worth noting that, unlike the other services, paperlessngx is running via docker-compose on my server. While the widget detects the service, it never gets any information.

Eventually it just gives an API error.

# services.yaml (just the relevant part)

- Paperless-ngx:
    href: http://<myserverhost:port>
    description: Document Management System
    icon: https://static-00.iconduck.com/assets.00/paperless-icon-426x512-eoik3emb.png
    server: paperless
    widget:
      type: paperlessngx
      url: http://<local-ip:port>
      token: <token-configured-inside-paperless>

# docker.yaml

paperless:
  host: <local-ip>
  port: <port>

I'm out of ideas. Unfortunately the only instructions are on the site, and they aren't easy to follow if you're not already familiar with Docker.
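One thing worth double-checking against the gethomepage widget docs: as I recall, the paperlessngx widget reads the API token from a key field rather than token. A hedged variant of the widget block to try:

```yaml
- Paperless-ngx:
    href: http://<myserverhost:port>
    widget:
      type: paperlessngx
      url: http://<local-ip:port>
      key: <token-configured-inside-paperless>
```

Also make sure the url is reachable from inside the Homepage container itself (not just from the host), since that's where the widget's API calls originate.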

11

Nextcloud, qBittorrent, TrueNAS and loads of other services take optional email credentials for sending alerts and other features (e.g. password recovery for Nextcloud).

What email providers do people usually use to make this simple to set up? For example, Microsoft doesn't allow basic auth anymore, so it's supposedly not usable with most of these setups, and some other services seem to have a low inbox size (does this matter?).

12

So way back when, I used Mint.com to help manage my finances. It worked great until Intuit bought them, killed the app, and redirected their customers to Credit Karma. I hated getting spam messages and haven't used a personal finance app for years. I finally set up ActualBudget and it's great for budgeting, but I want to keep track of investments, retirement holdings, property, and things outside the monthly budget. I don't think ActualBudget does that. Are there any self-hosted projects that help me keep track of stocks, property, and other assets?

13

I run a small server with Proxmox, and I'm wondering what your opinions are on running Docker in separate LXC containers vs. running a single VM for all Docker containers?

I started with LXC containers because I was more familiar with installing services the classic Linux way. I later added a VM specifically for running Docker containers. I'm wondering whether I should continue this strategy and just add some more resources to the Docker VM.

On one hand, backups seem to be easier with individual LXCs (I've had situations where I tried to update a Docker container, the new container broke the existing configuration, and I found it easiest just to restore the entire VM from backup). On the other hand, it seems like more overhead to install Docker in each individual LXC.

14
34
submitted 2 days ago* (last edited 2 days ago) by Nighed@feddit.uk to c/selfhosted@lemmy.world

Hi, I'm looking for some recommendations, mostly pointers on where to go to research stuff, as I have no idea what is good and what is just well advertised.

Intro: I have finally entered the world of (almost) gigabit internet, which is opening up options for what I can host.

I currently have:

  • Pi-hole on an actual Raspberry Pi (will probably remain there because it's easy)
  • Inbound WireGuard VPN on my old router (will stop working when my old ISP stops service) EDIT: my new ISP gave me a router, but it doesn't have VPN functionality
  • Foundry VTT that I spin up on my gaming machine when needed

I will probably also be upgrading my gaming PC in the next few months, so my current rig will probably be put behind the TV to use as a server and for couch gaming.

Info/recommendations I would like:

  • VPN software (I want to VPN INTO my network). My go-to would be WireGuard; is that still a good option? (I assume I just port-forward the VPN ports to the server?)
  • Private cloud/file server: I want to be able to occasionally (but permanently) host files publicly, while keeping the main store available on the local network only. Is that going to be two pieces of software, or just one?
  • Is a local video streaming app actually useful for a rare watcher of movies etc., or can they be streamed directly from the file server? It's something that I see a lot of people talk about, but I don't really understand why...
  • Is Docker the way to go for everything, or should I just install on the machine directly?
  • ~Piracy~ VM: enabling the virtualisation stuff for Docker mostly breaks VirtualBox (at least on Windows). Any recommendations for how to nicely run a VM alongside Docker (if that's the recommendation)?
  • Should/could I be hosting anything else? Foundry will probably be on there. I don't feel like I have a use for smart home stuff, so Home Assistant wouldn't be much use, etc.
15
16

cross-posted from: https://lemmy.blackeco.com/post/1434522

To re-enable them, you have to set misc.etc_dnsmasq_d to true, either by editing /etc/pihole/pihole.toml or by using the pihole-FTL command:

sudo pihole-FTL --config misc.etc_dnsmasq_d true

Source
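For reference, the same setting as it would look inside /etc/pihole/pihole.toml (section layout per Pi-hole v6's config file; double-check against your own file):

```toml
[misc]
  # load legacy dnsmasq config snippets from /etc/dnsmasq.d again
  etc_dnsmasq_d = true
```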

17
18
19

Hey,

I want to be able to access my projects from my laptop and my desktop, without syncing build folders (patterns are okay for this) or large data folders (manual selection is preferable for those). A bonus would be the ability to selectively keep files remote to use less storage space.

I also want to sync some regular documents and class notes, but everything can do that at least.

Syncthing "works" for this, but it doesn't have a web file browser or a "main" host, so I don't think it's quite the right tool.

I recently installed ownCloud, and its desktop sync can almost do this, but it can't keep files local without uploading them (otherwise it seems pretty good!). Seafile hasn't worked at all for me, and in my experience Nextcloud is decently painful and has way too many features I don't need at all.

Am I using the wrong tool for the job? Is there a way to accomplish what I want to accomplish?

20

I'm trying to plan a better backup solution for my home server. Right now I'm using Duplicati to back up my 3 external drives, but the backup is staying on-site and on the same kind of media as the original. So, what does your backup setup and workflow look like? Discs at a friend's house? Cloud backup at a commercial provider? Magnetic tape in an underground bunker?

21

Basically title. I'm in the process of setting up a proper backup for my configured containers on Unraid, and I'm wondering how often I should run my backup script. Right now, I have a cron job set to run on Monday and Friday nights. Is this too frequent? What's your schedule, and do you strictly back up your appdata (container configs), or is there other data you include in your backups?
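For comparison, a twice-weekly night-time schedule like that is a single crontab line; the time and script path here are placeholders:

```
# min  hour  dom  month  dow   command
30     3     *    *      1,5   /boot/config/scripts/backup-appdata.sh
```

Fields 1-5 are minute, hour, day-of-month, month, and day-of-week (1 = Monday, 5 = Friday), so this fires at 03:30 on Monday and Friday.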

22

I didn't like Kodi due to the unpleasant controls, especially on Android, so I decided to try out Jellyfin. It was really easy to get working, and I like it a lot more than Kodi, but I started having problems after the first time I restarted my computer.

I store my media on an external LUKS-encrypted hard drive. Because of that, for some reason, Jellyfin's permissions to access the drive go away after a reboot. That means something like chgrp -R jellyfin /media/username works at first, but stops working after I restart my computer and unlock the disk.

I tried modifying the /etc/fstab file without really knowing what I was doing, and almost bricked the system. Thank goodness I'm running an atomic distro (Fedora Silverblue), so I was able to recover pretty quickly.

How do I give Jellyfin permanent access to my hard drive?

Solution:

  1. Install GNOME Disks
  2. Open GNOME Disks
  3. On the left, click on the drive storing your media
  4. Click "Unlock selected encrypted partition" (the padlock icon)
  5. Enter your password
  6. Click "Unlock"
  7. Select the LUKS partition
  8. Click "Additional partition options" (the gear icon)
  9. Click "Edit Encryption Options..."
  10. Enter your admin password
  11. Click "Authenticate"
  12. Disable "User Session Defaults"
  13. Select "Unlock at system startup"
  14. Enter the encryption password for your drive in the "Passphrase" field
  15. Click "Ok"
  16. Select the decrypted Ext4 partition
  17. Click "Additional partition options" (the gear icon)
  18. Click "Edit Mount Options..."
  19. Disable "User Session Defaults"
  20. Select "Mount at system startup"
  21. Click "Ok"
  22. Navigate to your Jellyfin Dashboard
  23. Go to "Libraries"
  24. Select "Add Media Library"
  25. When configuring the folder, navigate to /mnt and then select the UUID that points to your mounted hard drive
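For anyone who prefers config files to GNOME Disks, the steps above boil down to roughly one line each in /etc/crypttab and /etc/fstab. A sketch with placeholder UUID, mapper name, and mount point (crypttab unlocks the LUKS container at boot, fstab mounts the result):

```
# /etc/crypttab -- map the LUKS partition at boot ("none" prompts for the passphrase)
media-crypt  UUID=<uuid-of-luks-partition>  none  luks

# /etc/fstab -- mount the unlocked device; nofail lets the system boot if the disk is absent
/dev/mapper/media-crypt  /mnt/media  ext4  defaults,nofail  0  2
```

Pointing the third crypttab field at a stored key file instead of "none" would avoid the boot-time passphrase prompt, which is roughly what the GNOME Disks flow sets up.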
23
113
submitted 4 days ago* (last edited 4 days ago) by lena@gregtech.eu to c/selfhosted@lemmy.world

I have been self-hosting for a while now with Traefik. It works, but I'd like to give Nginx Proxy Manager a try; it seems easier for managing stuff that isn't in Docker.

Edit: btw, I'm going to try this out on my RPi, not my Hetzner VPS, so there's no risk of breaking anything.
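In case it's useful for the trial run: Nginx Proxy Manager's stock docker-compose is tiny. A minimal sketch along the lines of its docs; the ports and volume paths are the usual defaults, adjust as needed:

```yaml
services:
  app:
    image: jc21/nginx-proxy-manager:latest
    restart: unless-stopped
    ports:
      - "80:80"     # proxied HTTP
      - "443:443"   # proxied HTTPS
      - "81:81"     # admin web UI
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
```

The admin UI on port 81 is where proxy hosts get defined, which is what makes it handy for services running outside Docker.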

24

I've been kind of piece-mealing my way towards cleaning up my media server, and could use a little advice on the next steps.

Currently I have a little under 10 TB of torrented media that I have been downloading to / seeding from the media library folders that Plex and Jellyfin monitor, using my desktop PC as the torrenting client. This requires a bit of manual maintenance, i.e., manually selecting the destination folder for the torrents in a way that Plex/Jellyfin can see.

I recently fired up qBittorrent on my media server (Unraid if that matters), and would like to try out some of the *arrs, but I'm not quite sure how to proceed without creating some kind of unholy mess.

I guess option A is just to import all of my current torrented content from desktop to media server client, and keep manually specifying the torrent destination. It's not a huge deal, since I am typically only adding a few torrents per week, so it's literal seconds or minutes of work to find the content I want.

Option B is to start "clean" and follow one of the many how-tos for starting up an *arr stack. But never having used the software, I don't have a good sense for how it works, and whether there are any pitfalls to watch out for when trying to spin it up with an existing media library that includes both torrented and ripped content.

From a bit of reading, I think radarr for example will only care about new content. So I should be able to migrate all my existing torrents to the new client on my media server, including their existing locations amongst my media library, and then just let radarr locate and manage new content. Is that correct?

Any other advice or suggestions I should be considering?

25

Hi all, I'm trying to use Collabora Online, but I'm stuck. I set it up via the Docker instructions here (https://sdk.collaboraonline.com/docs/installation/CODE_Docker_image.html).

But when I go to https://127.0.0.1:9980/, all I get is "OK". The reverse proxy works, but shows the same "OK". How do I actually use Collabora?

I was expecting a web-based document editor in the browser.
