Self-hosting

3107 readers

Hosting your own services. Preferably at home and on low-power or shared hardware.


founded 2 years ago

Beneath the dark and uncertain clouds of big tech, hidden among the declassed byte workers and the false technological prophets who with siren songs offer their digital services to "facilitate" digital life, rises an anarchic and countercultural community that seeks to reclaim the Internet and fight against those who squeeze our identity in the form of data to generate wealth and advertising for mass social manipulation and coercion. Navigating the network of networks, with a small fleet of self-managed servers, geographically distributed yet cohesively united by cyberspace, the self-hosting community emerges as a way of life, a logic of inhabiting the digital, a way of fighting for an open, human network, free from the oligarchy of data.

To the naturalization of the already crystallized phrase "the cloud is someone else's computer" we add that this "someone else" is nothing more than a conglomerate of corporations that, like a hungry kraken, devours and controls the oceans of cyberspace. Against this we arm ourselves with community action, direct and self-managed, by and for those of us who inhabit and fight for a more sovereign and just Internet. Our objectives are clear, and our principles are precise. We seek to break the mirage and charm that these beasts impose through ISPs and blacklists, and we promote the ideal of a community organized around its own computing needs, without the intermediation of outlaws and byte smugglers.

The big tech companies disembarked on the net with a myriad of free services that came to replace standards established over years of work among users, developers, communities, technocrats and other enthusiasts of the sidereal tide of cyberspace. By commoditizing basic Internet services and transforming them into objects of consumption, they led us to their islands of stylized products, built entirely with the aim of commercializing every aspect of our lives in an attempt to digitize and direct our consumption. Sending an email, chatting with family and friends, saving files on the network or simply sharing a link: everything becomes duly indexed, tagged and processed by someone else's computer. An "other" that is not a friend, nor a family member, nor anyone we know, but a megacorporation that, on the basis of coldly calculated decisions, tries to manipulate and modify our habits and consumption. Anyone who has inhabited these digital spaces has seen how these services have changed our social behaviors and perceptions of reality. Or will we continue to turn a blind eye to the tremendous disruption that social networks generate among young people, and to the absurd waste of resources involved in sustaining the applications of technological mega-companies? Perhaps those who so praise the Silicon Valley technogurus do not see the disaster of having to replace your phone or computer because you can no longer surf the web or send an email.

If this is the technosolutionism that crypto-enthusiasts, evangelists of the web of the future or false shamans of programming offer us, we reject it out of hand. We are hacktivists and grassroots free software activists: we appropriate technology in pursuit of a collective construction shaped by our communities and not by the spurious designs of a hypercommercialized IT market. If today the byte worker plays the same role as the charcoal burner or workshop worker at the end of the 19th century, it is imperative that they politicize themselves and appropriate the means of production to build an alternative to this data violence. Only when this huge mass of computer workers awakens from its lethargy will we be able to take the next step towards the refoundation of cyberspace.

But we do not have to build on the empty ocean, as if we were lost overseas far from any coast; there is already a small but solid fleet of nomadic islands, which dodge and cut off the tentacles of the big tech kraken. Those islands are the computers of others, but real others, self-managed and organized in pursuit of personal, community and social needs. Self-hosting consists of materializing what is known as "the cloud", but stripped of the tyranny of data and the waste of energy to which the big tech companies have accustomed us. These islands are not organized to commoditize our identities, but to provide email, chat, file hosting, voice chat or any other existing digital need. Our small server-islands demonstrate that it is possible to stay active on the network without the violent tracking and theft, and without the imposed need to constantly replace our computer equipment: self-hosted services, being designed by and for the community, are built for the highest possible efficiency rather than the immoral waste that directly feeds the climate crisis.

For this reason we say to you, declassed byte workers: train yourselves, question yourselves, and appropriate the tools you use in order to form a commonwealth of hacktivists! Only through the union of computer workers with the self-hosting and hacktivist communities will we be able to build alternatives for the refoundation of a cyberspace at the service of the people and not of the byte oligarchy.

But we need not only the working masses but also ordinary digital citizens: let's wake up from the generalized apathy to which we have grown accustomed! No one can say anymore that technology is not their thing or that computing does not matter to them when all our lives are mediated through digital systems. That Android phone that is still alive but no longer allows you to check your email or chat with your family is simply the technological reality hitting you in the face, as much as the anxiety and dispersion that have inhabited you for the last 15 years. Imagine the brain of a 14-year-old teenager, totally moth-eaten by the violent algorithms of big tech!

Community digital needs are settled on the shores of our server-islands, not on the flagships of the data refineries. Let's unite by building small servers in our homes, workplaces or cultural spaces; let's unite by building data networks that provide federated public instant messaging services that truly respect our freedoms and privacy. Let's publish robust, low-latency voice services; let's encourage the use of low-computational-consumption services to democratize voices, whether you sail a rowboat or a state-of-the-art racing yacht. Let's create specialized forums and interconnect communities to unite us all; let's set our sails with the protocols and standards that already exist, which allow us to dive into the network using the device we want and not the one imposed on us. Let's lose the fear that prevents us from taking the first step and start down this great learning path, which as an extra benefit will restore not only our technological sovereignty but also control of our digital essence. It is not a matter of cutting off the private data networks of big tech but rather of building self-managed, self-hosted and self-administered spaces from the hacktivist base, together with the workers of the byte and the digital citizenry: an Internet of the community, for the community.


cross-posted from: https://lemmy.ml/post/26074446

I'm trying to see if I can host my own Fediverse instance for friends and family and I want to know what kind of device would be required. I'm an absolute beginner to self-hosting so I was wondering if I can start cheap (Raspberry Pi 2GB RAM is something I can definitely consider).

Also, can one device host multiple services? For example, if I wanted both a WordPress instance and a Hubzilla instance, or a Matrix/XMPP instance.


cross-posted from: https://lemmy.selfhostcat.com/post/108211

  • my methods have been:

  • use trilium for any detailed notes and documentation

  • memos for random thoughts especially if shorter

  • pen and paper when offline or on mobile because mobile trilium and moememos both suck

  • zotero for citation and bibliography manager

  • backed up to nextcloud

  • i have paperless-ngx but found it randomly errors on a ton of things and zotero is fine.

  • considering if it’s worth it to have so many different spread out methods

  • they're fun to use but it creates more chaos than needed


I set up an instance of the ArchiveTeam Warrior on my home server with Docker in under 10 minutes. Feels like I'm doing my part to combat removal of information from the internet.
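For anyone who wants to try the same, the whole setup can be a single compose file. A minimal sketch, assuming the image name listed on the ArchiveTeam wiki at the time of writing (verify it there before use):

```yaml
# docker-compose.yml for an ArchiveTeam Warrior
# (image name is an assumption; check the ArchiveTeam wiki for the current one)
services:
  warrior:
    image: atdr.meo.ws/archiveteam/warrior-dockerfile
    restart: unless-stopped
    ports:
      - "8001:8001"  # web UI: pick a project and nickname at http://localhost:8001
```

After `docker compose up -d`, the Warrior's web interface should be reachable on port 8001.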


I've recently been able to set up Lemmy and PieFed instances on a Raspberry Pi 5 and wanted to share the process for anyone else interested in self hosting an instance.

The following instructions are based on a used Raspberry Pi 5 (ARM64) plus a USB external hard drive for the hardware. I used the Raspberry Pi OS image, which is based on Debian 12. The instructions should be similar enough for other Debian 12 distributions and should hopefully get the same results.

The only other purchase I've made was a domain name which was super cheap ($15 a year which includes hiding WHOIS information). Everything else is free.

My residential ISP blocks incoming data on "business" ports such as 80 and 443. Users won't be able to access your site if these ports block incoming data. To work around this I used Cloudflare Tunnel, which allows users to access your site normally: Cloudflare sends incoming traffic to a port of your choosing (between 1024 and 65535), and through it users can reach your self-hosted instance.

Cloudflare also provides Transport Layer Security (TLS), which encrypts traffic and protects connections. This is also what changes your website from http:// to https:// in the address bar. Federation requires TLS, so this will be useful. Cloudflare Tunnel also introduces some complications, which I'll address later.


Requirements

  • A purchased Domain Name
  • Set Cloudflare as your Domain Name's primary Domain Name Servers (DNS). See here
    • Do this at least a day in advance, as changes may take up to a day to take effect.
  • Raspberry Pi 5 with Raspberry Pi OS (64) image installed
    • You can use other hardware with Debian 12 Operating Systems but I can't guarantee these instructions will be the exact same
  • A USB external hard drive
    • Something with good read/write speeds will help
  • Access to any routers on your private network
    • You will need access to Port Forwarding options. You will need to read any router manuals or ask your Internet Service Provider since this is different for every device.
  • SSH remote access to Raspberry Pi 5. See here

Setup & Software Installation (LOCAL HOST)

The required software to host Lemmy or PieFed will include

  • Docker
  • Cloudflared
  • Lemmy or PieFed

Additional software I will also cover but aren't necessary are:

  • Nginx Proxy Manager (NPM)
  • UFW/GUFW - Simple Firewall
  • RSync - For making backups
  • Termux - Android terminal app that will be used for remote SSH access

Docker (LOCAL HOST)

The official Docker instructions are clear, quick and simple. The process will also add their repository information for quick and easy updates. This will be installed as a service on your operating system.

Port Forwarding/Reverse Proxy

Port Forwarding

Pick a port number between 1024 and 65535. This is how Cloudflare will send data and remote connections to your instance without worrying about blocked ports. I like to use 5050 because it's simple, easy to remember and not used by any of my other self-hosted services. To be consistent, for the rest of this guide I will use port 5050 as an example. Feel free to replace it with any port number you feel like using.

Router settings are different for each device, refer to a manual or call your ISP for support depending on your situation.

  1. SSH login to your Raspberry Pi and enter the command hostname -I
    • This will print the IP addresses used by the host machine. The first IP address printed will be your local IP address. The rest of the addresses are NOT needed and can be ignored.
  2. Access your Port Forwarding settings in your private network router.
  3. Find your Raspberry Pi device by the Local IP address
  4. Add a rule to allow TCP connections on port 5050.
    • If your router port forwarding settings show Internal and External fields, simply add 5050 to both fields.
  5. Save
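The address lookup from step 1 can also be scripted. A small sketch that picks the first (LAN) address out of `hostname -I` style output:

```shell
# Example output of `hostname -I` (your addresses will differ)
ADDRS="192.168.0.100 172.17.0.1 2001:db8::1"

# The first field is the LAN address to enter in the router's
# port-forwarding rule; the remaining addresses can be ignored.
LOCAL_IP="$(echo "$ADDRS" | awk '{print $1}')"
echo "$LOCAL_IP"   # prints 192.168.0.100
```

On the Pi itself you would replace the example string with the real command: `LOCAL_IP="$(hostname -I | awk '{print $1}')"`.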

If you are only hosting a Lemmy or PieFed instance, you can do that without the reverse proxy described below. In this case you can simply use the default port for Lemmy or PieFed. Replace my example port 5050 with the following depending on your needs:

  • Lemmy default port: 10633
  • PieFed default port: 8030

Reverse Proxy

A reverse proxy allows the local host machine to distribute incoming user connections to different services hosted on the local machine. For example, all data from Cloudflare comes in on port 5050 when accessing the DOMAINNAME.COM address. I can use Subdomains to redirect incoming connections on port 5050 to open ports on my local host machine.

For example, both Lemmy and PieFed can be hosted at the same time. We can use the subdomains lemmy. and piefed. to redirect traffic. When a user types lemmy.DOMAINNAME.COM into the address bar, Cloudflare will send the connection through port 5050 to your home router, which then passes it on to the reverse proxy. The reverse proxy running on the local host machine will catch the subdomain request and immediately switch to port 10633, where a connection to Lemmy will be completed. Typing in piefed.DOMAINNAME.COM will guide all requests to port 8030, where PieFed is running, and complete that connection.
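Under the hood, this subdomain routing is equivalent to a couple of reverse-proxy server blocks. A hand-written nginx sketch for illustration only (NPM generates its own configuration; ports and domains are the examples from this guide):

```nginx
# lemmy.DOMAINNAME.COM arriving on port 5050 -> Lemmy on 10633
server {
    listen 5050;
    server_name lemmy.DOMAINNAME.COM;
    location / {
        proxy_pass http://127.0.0.1:10633;
        proxy_set_header Host $host;
        # websocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

# piefed.DOMAINNAME.COM arriving on port 5050 -> PieFed on 8030
server {
    listen 5050;
    server_name piefed.DOMAINNAME.COM;
    location / {
        proxy_pass http://127.0.0.1:8030;
        proxy_set_header Host $host;
    }
}
```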

For simplicity, Nginx Proxy Manager is Docker-based with an easy-to-use web interface that's accessible through any browser on your local network. It has its limitations but works fine for the current needs.

Nginx Proxy Manager (LOCAL HOST)

NPM is extremely simple to set up. Simply create a new folder, create a docker-compose.yml file filled with the necessary information and then run the container.

  1. mkdir ~/npm
  2. cd ~/npm
  3. nano docker-compose.yml
  4. Paste the following into the docker-compose.yml file and save:
  • docker-compose.yml
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: always
    ports:
      - '5050:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt

Note that external port 5050 maps to NPM's internal port 80. Make sure 5050 matches the Cloudflare Tunnel port you have decided on using.

  1. docker compose up -d and wait for the services to start running
  2. In your Web Browser on any device connected to your private network, type your Raspberry Pi's local address followed by :81 into the address bar. For example 192.168.0.100:81. See Port Forwarding for help finding your local IP address.
  3. The login page will ask for account details. Enter the following:
    • Account = admin@example.com
    • Password = changeme
    • You will now be asked to create a new admin account
  4. Reverse Proxy Setup:
    1. After Login, click Hosts -> Proxy Hosts -> Add New Proxy
    2. Domain Names field: Your DOMAINNAME.COM
      • Press Enter to store that domain name, NPM won't store your domain name if you don't hit enter
    3. Forward Hostname/IP field: Your local host machine IP address (example 192.168.0.100). See Port Forwarding for help finding your local IP address.
    4. Forward Port field: I like to use port 81 to test Cloudflare Tunnels before installing Lemmy or PieFed. This is the login page for NPM. This can be quickly changed to the ports listed below after confirming a secure connection from Cloudflare Tunnels.
      • Lemmy: 10633
      • PieFed: 8030
    5. Block Common Exploits: Enabled
    6. Websockets Support: Enabled
    7. Save

Cloudflared (LOCAL HOST)

!!Only proceed with these instructions after setting Cloudflare as your Primary DNS provider. This process may take up to a day after changing nameservers!!

The following instructions do a few things. First you will install Cloudflared (with a 'd'). Then you will be asked to log in, create a tunnel, run the tunnel, and then create a service (while the current tunnel is running) so your tunnel runs automatically from startup.

As noted, this will be installed on the local host (where you are hosting an instance); we will be installing Cloudflared on multiple devices for reasons I will cover later. Hopefully this reduces confusion later on.

  1. Service Install & Create Tunnel & Run Tunnel
    1. Select option -> Linux
    2. Step 4: Change -> credentials-file: /root/.cloudflared/<Tunnel-UUID>.json -> credentials-file: /home/USERNAME/.cloudflared/<Tunnel-UUID>.json
    3. Step 5: SKIP step 2, you will get an error and it's not important anyways.
    4. Step 6: Keep this window open after running the new tunnel
      • ONLY AFTER completing step 2.i.d. below (Run as a service), press CTRL + C to exit this tunnel
  • Example config.yml file (See above about Step 4)
tunnel: TUNNEL_ID
credentials-file: /home/USERNAME/.cloudflared/TUNNEL_ID.json
ingress:
  - hostname: DOMAINNAME.COM
    service: http://localhost:5050/
  - service: http_status:404
  1. Run as a service

    1. Open and SSH login to the Raspberry Pi in a new terminal window
      1. You will get an error if you do not copy your config.yml from your Home folder to /etc/cloudflared. You will need to copy this file again if you make any changes to the config.yml such as adding more tunnels. This will be covered later when setting up Remote SSH access.
        • sudo cp ~/.cloudflared/config.yml /etc/cloudflared/config.yml
      2. cloudflared service install
      3. systemctl start cloudflared
      4. systemctl status cloudflared
        • Check to see if things are green and working, then press CTRL + C when done to exit
        • You can now stop the running tunnel from the first step as previously stated (See Step 1.iv.)
      5. You can close this terminal window now
  2. Enable SSL connections on Cloudflare site

    • Log in to your account on Cloudflare and simply click on the following links
    1. From the main page -> Your DOMAINNAME.COM -> SSL/TLS -> Configure -> Full -> Save
    2. SSL/TLS -> Edge Certificates -> Change the following settings on Cloudflare to match what's listed below:
      • Always Use HTTPS: On
      • Opportunistic Encryption: On
      • Automatic HTTPS Rewrites: On
      • Universal SSL: Enabled

If you used NPM as a reverse proxy and it's set to port 81, go to any Web Browser and type in your DOMAINNAME.COM. You should be directed to NPM's login page. Check the address bar and your domain name should have a padlock symbol followed by https://domainname.com/. Note that it should read HTTPS:// (with an s) and not HTTP:// (without an s). HTTPS along with the padlock symbol means your connections are properly encrypted.

This is the most complicated step for self-hosting. If you can confirm your connection is encrypted, setting up other services and web apps is fairly straightforward.

Lemmy (LOCAL HOST)

The Lemmy instructions are simple and straightforward. When changing the fields the instructions ask for, it's helpful to search and replace them. In nano, when editing a file, press CTRL + \ and follow the instructions at the bottom of the window. This will find and replace text.

The Lemmy instructions show text for editing with {{ Example }}. To avoid confusion, those curly braces must be removed and replaced with the expected data.

  • If you used NPM's login page to test Cloudflare Tunnels, you will need to login to NPM and change the Port Forward from 81 to 10633
    • Click Hosts -> Proxy Hosts -> Click the 3-Dots for your DOMAINNAME.COM proxy rule -> Edit & Save
  1. Follow Lemmy Install Instructions
    • IGNORE steps Reverse Proxy/Webserver & Let's Encrypt since we have addressed those steps earlier with NPM and Cloudflare Tunnels/Security.
  2. Through a Web Browser, type in your DOMAINNAME.COM and you should see an admin creation page. Complete that and the initial instance setup afterwards.
  3. Test federation, replace capitals with the required information
    • curl -H 'Accept: application/activity+json' https://domainname.com/u/LEMMY_USERNAME

      • If you see .json information, Lemmy is federated
      • If you see .html information, Lemmy is NOT federated
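To make that check repeatable, here is a tiny hypothetical helper that classifies the response body by its first character (JSON objects start with `{`, HTML pages don't):

```shell
# Hypothetical helper: classify a response body as ActivityPub JSON or HTML
check_federation() {
  case "$1" in
    '{'*) echo "federated (JSON)" ;;
    *)    echo "NOT federated (HTML or other)" ;;
  esac
}

# Typical usage against a live instance (requires network access):
# BODY="$(curl -s -H 'Accept: application/activity+json' https://domainname.com/u/LEMMY_USERNAME)"
# check_federation "$BODY"

check_federation '{"type":"Person"}'   # prints: federated (JSON)
```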

Updating Lemmy Docker Container

See here for more information.

  1. docker compose down
  2. docker compose pull
  3. docker compose up -d

PieFed (LOCAL HOST)

The PieFed installation instructions will provide more detailed information about each step. This guide does NOT cover any email setup for PieFed.

  • If you used NPM's login page to test Cloudflare Tunnels, you will need to login to NPM and change the Port Forward from 81 to 8030

    • Click Hosts -> Proxy Hosts -> Click the 3-Dots for your DOMAINNAME.COM proxy rule -> Edit & Save
  • PieFed Install Instructions

  1. Download & Prepare files
    1. git clone https://codeberg.org/rimu/pyfedi.git
    2. cd pyfedi
    3. cp env.docker.sample .env.docker
  2. Edit & Save files
    1. nano .env.docker
      1. Change value for SECRET_KEY with random numbers and letters
      2. Change value for SERVER_NAME with your DOMAINNAME.COM
    2. nano compose.yaml
      • Note ports 8030:5000. You can change the external container port: 8030: if you are using a custom port. Do NOT touch the internal container port :5000.
        • ports:
        • - '8030:5000'
  3. Build
    1. export DOCKER_BUILDKIT=1
    2. sudo docker compose up --build
      • Wait until text stops scrolling
  4. Access your DOMAINNAME.COM from a Web Browser
    1. You may see a message that says database system is ready to accept connections in your terminal window after PieFed is done installing and loading. This means you are ready to attempt a connection through your Web Browser now.
      • If you see constant permission errors, Open and SSH login to the Raspberry Pi in a new terminal window and do the following to allow PieFed to access the required folders:
        1. cd ~/pyfedi
        2. sudo chown -R USERNAME:USERNAME ./pgdata
          • You can leave this window open, it can be used for the step 5.
    2. You may see an "Internal Server Error" after your first connection attempt. This is normal. You will see movement in your terminal window on each attempt to connect to PieFed. Now you can proceed to initialize the database.
  5. Initialize Database
    1. Open and SSH login to the Raspberry Pi in a new terminal window
      1. sudo docker exec -it piefed_app1 sh
      2. export FLASK_APP=pyfedi.py
      3. flask init-db
        • Enter username/email/password. Email is optional.
      4. Access PieFed from your Web Browser again. PieFed should now display. You can log in as admin with the same username and password.
      5. exit
      6. You can close this terminal window now
  6. Return to the terminal with the running docker build and press CTRL + C to stop PieFed.
  7. Run PieFed in the background
    • docker compose up -d
  8. Setup Cron (Automated) Tasks
    • This will set up automated tasks for daily maintenance, weekly maintenance and email notifications.
    • Change USERNAME to your username.
    1. Setup automated tasks
      1. sudo nano /etc/cron.d/piefed
        1. Paste and Save
  • /etc/cron.d/piefed file
5 2 * * * USERNAME docker exec piefed_app1 bash -c "cd /app && ./daily.sh"
5 4 * * 1 USERNAME docker exec piefed_app1 bash -c "cd /app && ./remove_orphan_files.sh"
1 */6 * * * USERNAME docker exec piefed_app1 bash -c "cd /app && ./email_notifs.sh"
  1. OPTIONAL: Environment Variables
    • Some functions such as email or captchas won't work unless you add the necessary variables to the ~/pyfedi/.env.docker file. Look at ~/pyfedi/env.sample and add the other variables to ~/pyfedi/.env.docker according to your needs.
    1. View the sample file
      • nano ~/pyfedi/env.sample
    2. Edit & Save .env.docker file
      • nano ~/pyfedi/.env.docker
    3. Restart PieFed Docker container
      • docker compose down && docker compose up -d
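For the SECRET_KEY value in the Edit & Save step above, any long random string works. One common way to generate one (assuming openssl is installed, as it is on Raspberry Pi OS):

```shell
# Generate 32 random bytes as 64 hex characters, suitable for SECRET_KEY
SECRET="$(openssl rand -hex 32)"
echo "$SECRET"
```

Paste the printed value into `.env.docker` as the SECRET_KEY.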

Updating PieFed Docker Container

  1. docker compose down
  2. git pull
  3. docker compose up --build
  4. docker compose down && docker compose up -d

Cloudflare Website Settings

These settings are suggested to help manage traffic. See here for more detailed information.

  1. Exclude Settings
    1. From the main page -> Your DOMAINNAME.COM -> Security -> WAF -> Custom Rules -> Click Create Rule -> Change the following settings and values on Cloudflare to match what's listed below:
      • Rule Name: Allow Inbox
      • Field: URI Path
      • Operator: contains
      • Value: /inbox
      • Log matching requests: On
      • Then take action...: Skip
      • WAF components to skip: All remaining custom rules
    2. Click `Deploy' to complete
  2. Caching Settings
    1. From the main page -> Your DOMAINNAME.COM -> Caching -> Cache Rules -> Click Create rule -> Change the following settings on Cloudflare to match what's listed below:
      • Rule name: ActivityPub
      1. Custom filter expressions: On
        1. Field: URI Path
        2. Operator: Starts with
        3. Value: /activities/
      2. Click Or
      3. Repeat until you have values for 4 rules total containing the values:
        • /activities/
        • /api/
        • /nodeinfo/
        • /.well-known/webfinger
      • Cache Eligibility: On
      • Edge TTL -> Click + add setting
        • Click Ignore cache-control header and use this TTL
        • Input time-to-live (TTL): 2 hours
      • Click Deploy to complete
    2. Click Create rule again
      • Rule name: ActivityPub2
      1. Custom filter expressions: On
        1. Field: Request Header
        2. Name: accept
        3. Operator: contains
        4. Value: application/activity+json
      2. Click Or
      3. Repeat until you have 2 rules total containing the values:
        • application/activity+json
        • application/ld+json
      • Cache Eligibility: On
      • Edge TTL -> Click + add setting
        • Click Ignore cache-control header and use this TTL
        • Input time-to-live (TTL): Type 10 seconds
      • Click Deploy to complete
  3. Optimization Settings
    1. Speed -> Optimization -> Content Optimization -> Change the following settings on Cloudflare to match what's listed below:
      • Speed Brain: Off
      • Cloudflare Fonts: Off
      • Early Hints: Off
      • Rocket Loader: Off
  4. Cloudflare Tokens for .env.docker File
    1. Create an API "Zone.Cache Purge" token
      1. After logging in to Cloudflare, go to this page
      2. Click Create Token -> Click Get Started under Create Custom Token
      3. Token Name -> PieFed
      4. Under Permissions -> Change the following drop-down menus to match what's listed below
        • First drop-down menu: Zone
        • Second drop-down menu: Cache Purge
        • Third drop-down menu: Purge
      5. Click Continue to summary -> Click Create Token
      6. Copy the generated API Token. This will be used for CLOUDFLARE_API_TOKEN in the .env.docker file. Note, once you leave this screen, the API token will remain but the generated code that can be copied will disappear forever.
    2. Copy API Zone ID
      1. From the main page -> Your DOMAINNAME.COM -> Scroll down and look for API Zone ID in the far right column
      2. Copy API Zone ID Token. This will be used for CLOUDFLARE_ZONE_ID in the .env.docker File.
    3. The following step must be completed on the Raspberry Pi (LOCAL HOST) where PieFed is running:
      1. nano ~/pyfedi/.env.docker
        1. Add the following lines with your copied API Tokens & Save
          • CLOUDFLARE_API_TOKEN='ZONE.CACHE_PURGE_TOKEN'
          • CLOUDFLARE_ZONE_ID='API_ZONE_ID_TOKEN'
      2. Restart PieFed Docker container
        • docker compose down && docker compose up -d

Troubleshooting

  • If you receive an error while posting images, the folder permissions will need to change. Change USERNAME with your username.
    1. cd ~/pyfedi
    2. sudo chown -R USERNAME:USERNAME ./media

Support and Services

Remote SSH Access Setup

With how Cloudflare works, SSH is not as simple and requires a bit more effort. I'm going to explain how to prepare Termux, an android terminal app, so you can access the Raspberry Pi remotely. The steps should be quite similar if you are using a Debian 12 distribution.

For remote SSH to work, you must provide a config file with some information. Fortunately, cloudflared will give you all the information you need for the config file.

A subdomain that will be used for the SSH connection will also need to be created. In this example I will simply use the subdomain ssh. which will look like this ssh.DOMAINNAME.COM.

The subdomain must be set up first before setting up the remote clients. You will use the Cloudflare Tunnel name (CLOUDFLARE_TUNNEL_NAME) that you previously created. Also note, the config file edited on the Raspberry Pi Local Host must be copied again to /etc/cloudflared before cloudflared is restarted.

The Cloudflare Tunnel name is the name you chose. Not to be confused with TUNNEL_ID which is a bunch of random letters and numbers that was generated for your tunnel.

For this example I'll use port 22 for SSH connections. This is the default port for SSH connections.

Raspberry Pi Setup (LOCAL HOST)

  1. SSH login to the Raspberry Pi
  2. cloudflared tunnel route dns CLOUDFLARE_TUNNEL_NAME ssh.DOMAINNAME.COM
  3. nano ~/.cloudflared/config.yml
    1. Paste the following and change TUNNEL_ID and DOMAINNAME.COM & Save
  • Example config.yml file
tunnel: TUNNEL_ID
credentials-file: /home/USERNAME/.cloudflared/TUNNEL_ID.json
ingress:
  - hostname: DOMAINNAME.COM
    service: http://localhost:5050/
  - hostname: ssh.DOMAINNAME.COM
    service: ssh://localhost:22
  - service: http_status:404
  1. sudo cp ~/.cloudflared/config.yml /etc/cloudflared/config.yml
  2. sudo systemctl restart cloudflared

Desktop: Install (REMOTE CLIENT)

Android Termux: Install (REMOTE CLIENT)

Termux does not have SSH installed by default. The following will install both openssh and cloudflared:

  • apt install openssh cloudflared -y

Login and Setup (REMOTE CLIENT)

!!Continue here after preparing the LOCAL HOST and REMOTE CLIENTs first!!

!!The following steps are to be completed on the REMOTE CLIENTs!!

  1. Login
    1. cloudflared tunnel login
    2. Complete login
  2. cloudflared access ssh-config --hostname ssh.DOMAINNAME.COM
    1. COPY suggested text
    2. nano ~/.ssh/config
      1. PASTE suggested text & Save
  3. ssh USERNAME@ssh.DOMAINNAME.COM
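For reference, the text printed by `cloudflared access ssh-config` in step 2 looks roughly like this (the cloudflared path varies by install; paste it into `~/.ssh/config` as shown in the steps above):

```
Host ssh.DOMAINNAME.COM
  ProxyCommand /usr/local/bin/cloudflared access ssh-hostname %h
```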

Backup/Restore Setup

I decided to keep it simple and use the rsync command which comes already installed on Raspberry Pi OS. The guide linked below does a good job of explaining rsync in a step by step process.

Below the linked guide I'll provide an example of the commands I use to back up and restore my Raspberry Pi. This creates a copy of the /rootfs folders that make up your Raspberry Pi operating system and user folders. The commands will exclude some folders that may cause issues when restoring a backup. The guide linked below has more details.

Since I am going to power down the Pi and physically connect its hard drive to my computer, I don't have to worry about making backups on live, running storage.

The below commands assume I also have an additional EXTERNAL_STORAGE hard drive connected to my computer. This means the backup command will copy the contents from the Raspberry Pi drive (/rootfs folder) to the EXTERNAL_STORAGE drive (/EXTERNAL_STORAGE/backup folder). The restore command will copy the contents from the EXTERNAL_STORAGE drive (/EXTERNAL_STORAGE/backup/rootfs folder) to the Raspberry Pi drive (/rootfs folder)

rsync WILL delete data on the target location to sync all files and folders from the source location. Be mindful of which direction you are going to avoid any losses. I suggest testing it out on some other folders before commiting to backing up and restoring the entire Raspberry Pi. The guide linked below also covers exclusions to minimize backup sizes.

The backup storage MUST be formatted in EXT4 to make sure file permissions and attributes remain the same.

  1. nano ~/.bash_aliases
    1. Add comments & Save
      • alias rsyncBACKUP="sudo rsync -avxhP --delete --exclude={'proc/','sys/','dev/','tmp/','run/','mnt/','media/','home/USERNAME/.cache','lost+found'} /media/USERNAME/rootfs /media/USERNAME/EXTERNAL_STORAGE/backup/"
      • rsyncRESTORE="sudo rsync -avxhP --delete --exclude={'proc/','sys/','dev/','tmp/','run/','mnt/','media/','home/USERNAME/.cache','lost+found'} /media/USERNAME/EXTERNAL_STORAGE/backup/rootfs/ /media/USERNAME/rootfs"
  2. Reset bash in terminal
    • . ~/.bashrc
  3. Backup system TO EXTERNAL_STORAGE
    • !!EXT4 file system only!!
    • rsBACKUP
  4. Restore system FROM EXTERNAL_STORAGE
    • rsRESTORE

Firewall (LOCAL HOST)

  1. Install: Choose ONE
    • Command line only
      • sudo apt install -y ufw
    • Graphical Interface with command line access
      • sudo apt install -y gufw

I haven't figured out how to properly set this up for myself yet, but I figure it's probably worth having for an additional layer of protection.

I just learned how to do a reverse proxy using Caddy and a Tailscale tunnel, exposing Immich secured by OAuth, all in a few hours. Now I'm no longer scared of exposing certain services to the Internet!


Consider using FreeTube, an open-source program for YouTube, because your privacy is important.


Main changes are support for the Nextcloud Notes API and file thumbnails.

The general compatibility with the Nextcloud Android app was also improved.


I've recently been able to set up Lemmy and PieFed instances on a Raspberry Pi 5 and wanted to share the process for anyone else interested in self hosting an instance.

The following instructions are based on a used Raspberry Pi 5 (ARM64) plus a USB external hard drive. I used the Raspberry Pi OS image, which is based on Debian 12. The instructions should be similar enough for other Debian 12 distributions and should hopefully get the same results.

The only other purchase I've made was a domain name which was super cheap ($15 a year which includes hiding WHOIS information). Everything else is free.

My residential ISP blocks incoming traffic on "business" ports such as 80 and 443, and users won't be able to access your site securely if those ports are blocked. To work around this I used Cloudflare Tunnel, which allows users to access your site normally: Cloudflare Tunnel delivers incoming traffic to a port of your choosing (between 1024 and 65535), and users can access your self-hosted instance.

Cloudflare also provides Transport Layer Security (TLS), which encrypts traffic and protects connections. This is what changes your website from http:// to https:// in the address bar. Federation requires TLS, so this will be useful. Cloudflare Tunnel also introduces some complications which I'll address later.

Requirements

  • A purchased Domain Name
  • Set Cloudflare as your Domain Name's primary Domain Name Servers (DNS). See here
    • Do this at least a day in advance, as changes may take up to a day to take effect.
  • Raspberry Pi 5 with Raspberry Pi OS (64) image installed
    • You can use other hardware with Debian 12 operating systems, but I can't guarantee these instructions will be exactly the same
  • A USB external hard drive
    • Something with good read/write speeds will help
  • Access to any routers on your private network
    • You will need access to Port Forwarding options. You will need to read any router manuals or ask your Internet Service Provider since this is different for every device.
  • SSH remote access to Raspberry Pi 5. See here

Setup & Software Installation (LOCAL HOST)

The required software to host Lemmy or PieFed will include

  • Docker
  • Cloudflared
  • Lemmy or PieFed

Additional software I will also cover but aren't necessary are:

  • Nginx Proxy Manager (NPM)
  • UFW/GUFW - Simple Firewall
  • RSync - For making backups
  • Termux - Android terminal app that will be used for remote SSH access

Docker (LOCAL HOST)

The official Docker instructions are clear, quick and simple. The process will also add their repository information for quick and easy updates. This will be installed as a service on your operating system.

Port Forwarding/Reverse Proxy

Port Forwarding

Pick a port number between 1024-65,535. This is how Cloudflare will send data and remote connections to your instance without worrying about blocked ports. I like to use 5050 because it's simple, easy to remember and not used by any of my other self-hosted services. To be consistent, for the rest of this guide I will use port 5050 as an example. Feel free to replace it with any port number you feel like using.
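
The port-range constraint above can be sanity-checked in the shell. This is a minimal sketch; `valid_port` is a hypothetical helper, not part of any tool used in this guide:

```shell
# Returns success (0) only if the argument is a usable custom port
# (1024-65535), i.e. above the privileged range and within 16-bit port space.
valid_port() {
  [ "$1" -ge 1024 ] 2>/dev/null && [ "$1" -le 65535 ]
}
```

For example, `valid_port 5050` succeeds, while `valid_port 80` fails because 80 is a privileged port.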

Router settings are different for each device, refer to a manual or call your ISP for support depending on your situation.

  1. SSH login to your Raspberry Pi and enter the command hostname -I
    • This will print the IP addresses used by the host machine. The first IP address printed will be your local IP address. The rest of the addresses are NOT needed and can be ignored.
  2. Access your Port Forwarding settings in your private network router.
  3. Find your Raspberry Pi device by the Local IP address
  4. Add a rule to allow TCP connections on port 5050.
    • If your router port forwarding settings show Internal and External fields, simply add 5050 to both fields.
  5. Save
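
A small sketch of step 1: `hostname -I` can print several addresses (IPv6, Docker bridges, and so on), and only the first one matters here. `first_ip` is a hypothetical helper name:

```shell
# Keep only the first whitespace-separated address from a line of addresses.
first_ip() {
  awk '{print $1}'
}

# On the Pi itself you would run:  hostname -I | first_ip
echo "192.168.0.100 172.17.0.1 fe80::abcd" | first_ip   # → 192.168.0.100
```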

If you are only hosting a Lemmy or PieFed instance, you can do so without the reverse proxy described below. In this case, simply use the default ports for Lemmy or PieFed; replace my example port 5050 with the following depending on your needs:

  • Lemmy default port: 10633
  • PieFed default port: 8030

Reverse Proxy

A reverse proxy allows the local host machine to distribute incoming user connections to different services hosted on the local machine. For example, all data from Cloudflare comes in on port 5050 when accessing the DOMAINNAME.COM address. I can use Subdomains to redirect incoming connections on port 5050 to open ports on my local host machine.

For example, both Lemmy and PieFed can be hosted at the same time. We can use the subdomains lemmy. and piefed. to redirect traffic. When a user types lemmy.DOMAINNAME.COM into the address bar, Cloudflare will send the connection through 5050 to your home and private router which then continues to the Reverse Proxy. The Reverse Proxy running on the local host machine will catch the subdomain request and immediately switch to port 10633 where a connection to Lemmy will be completed. Typing in piefed.DOMAINNAME.COM will guide all requests to port 8030 where PieFed is running and complete that connection.
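
The routing logic described above can be sketched as a toy shell function. This is purely illustrative (NPM does this for real HTTP traffic, and `backend_port` is a made-up name); the default of 404 mirrors the tunnel's `http_status:404` catch-all:

```shell
# Map a requested hostname to the backend port a reverse proxy would use.
backend_port() {
  case "$1" in
    lemmy.*)  echo 10633 ;;  # Lemmy's default port
    piefed.*) echo 8030  ;;  # PieFed's default port
    *)        echo 404   ;;  # no matching proxy rule
  esac
}

backend_port lemmy.DOMAINNAME.COM    # → 10633
backend_port piefed.DOMAINNAME.COM   # → 8030
```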

For simplicity I use Nginx Proxy Manager, which is Docker-based with an easy-to-use web interface accessible through any browser on your local network. It has its limitations but works fine for the current needs.

Nginx Proxy Manager (LOCAL HOST)

NPM is extremely simple to set up. Simply create a new folder, create a docker-compose.yml file filled with the necessary information and then run the container.

  1. mkdir ~/npm
  2. cd ~/npm
  3. nano docker-compose.yml
  4. Paste the following into the docker-compose.yml file and save:
  • docker-compose.yml
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: always
    ports:
      - '5050:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt

Note that external port 5050: maps to NPM's internal port :80. Make sure 5050 matches the Cloudflare Tunnel port you have decided to use.

  1. docker compose up -d and wait for the services to start running
  2. In your Web Browser on any device connected to your private network, type your Raspberry Pi's local address followed by :81 into the address bar. For example 192.168.0.100:81. See Port Forwarding for help finding your local IP address.
  3. The login page will ask for account details. Enter the following:
    • Account = admin@example.com
    • Password = changeme
    • You will now be asked to create a new admin account
  4. Reverse Proxy Setup:
    1. After Login, click Hosts -> Proxy Hosts -> Add New Proxy
    2. Domain Names field: Your DOMAINNAME.COM
      • Press Enter to store that domain name, NPM won't store your domain name if you don't hit enter
    3. Forward Hostname/IP field: Your local host machine IP address (example 192.168.0.100). See Port Forwarding for help finding your local IP address.
    4. Forward Port field: I like to use port 81 to test Cloudflare Tunnels before installing Lemmy or PieFed. This is the login page for NPM. This can be quickly changed to the ports listed below after confirming a secure connection from Cloudflare Tunnels.
      • Lemmy: 10633
      • PieFed: 8030
    5. Block Common Exploits: Enabled
    6. Websockets Support: Enabled
    7. Save

Cloudflared (LOCAL HOST)

!!Only proceed with these instructions after setting Cloudflare as your Primary DNS provider. This process may take up to a day after changing nameservers!!

The following instructions do a few things. First you will install Cloudflared (with a 'd'). Then you will be asked to log in, create a tunnel, run the tunnel, and then create a service (while the current tunnel is running) so your tunnel runs automatically from startup.

I've noted that this will be installed on the local host (where you are hosting an instance); we will be installing Cloudflared on multiple devices for reasons I will cover later. Hopefully this reduces confusion later on.

  1. Service Install & Create Tunnel & Run Tunnel
    1. Select option -> Linux
    2. Step 4: Change -> credentials-file: /root/.cloudflared/<Tunnel-UUID>.json -> credentials-file: /home/USERNAME/.cloudflared/<Tunnel-UUID>.json
    3. Step 5: SKIP step 2; you will get an error and it's not important anyway.
    4. Step 6: Keep this window open after running the new tunnel
      • ONLY AFTER completing step 2.i.d. below (Run as a service), press CTRL + C to exit this tunnel
  • Example config.yml file (See above about Step 4)
tunnel: TUNNEL_ID
credentials-file: /home/USERNAME/.cloudflared/TUNNEL_ID.json
ingress:
  - hostname: DOMAINNAME.COM
    service: http://localhost:5050/
  - service: http_status:404
  1. Run as a service

    1. Open and SSH login to the Raspberry Pi in a new terminal window
      1. You will get an error if you do not copy your config.yml from your Home folder to /etc/cloudflared. You will need to copy this file again if you make any changes to the config.yml such as adding more tunnels. This will be covered later when setting up Remote SSH access.
        • sudo cp ~/.cloudflared/config.yml /etc/cloudflared/config.yml
      2. cloudflared service install
      3. systemctl start cloudflared
      4. systemctl status cloudflared
        • Check to see if things are green and working, then press CTRL + C when done to exit
        • You can now stop the running tunnel from the first step as previously stated (See Step 1.iv.)
      5. You can close this terminal window now
  2. Enable SSL connections on Cloudflare site

    • Log in to your account on Cloudflare and simply click on the following links
    1. From the main page -> Your DOMAINNAME.COM -> SSL/TLS -> Configure -> Full -> Save
    2. SSL/TLS -> Edge Certificates -> Change the following settings on Cloudflare to match what's listed below:
      • Always Use HTTPS: On
      • Opportunistic Encryption: On
      • Automatic HTTPS Rewrites: On
      • Universal SSL: Enabled

If you used NPM as a reverse proxy and it's set to port 81, go to any Web Browser and type in your DOMAINNAME.COM. You should be directed to NPM's login page. Check the address bar and your domain name should have a padlock symbol followed by https://domainname.com/. Note that it should read HTTPS:// (with an s) and not HTTP:// (without an s). HTTPS along with the padlock symbol means your connections are properly encrypted.
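
The with-an-s check above can be scripted trivially (hypothetical helper name; the real test is just looking at the address bar):

```shell
# Succeeds only when a URL uses the encrypted https:// scheme.
is_https() {
  case "$1" in
    https://*) return 0 ;;
    *)         return 1 ;;
  esac
}

is_https "https://domainname.com/" && echo "encrypted"   # → encrypted
```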

This is the most complicated step of self-hosting. Once you can confirm your connection is encrypted, setting up other services and webapps is fairly straightforward.

Lemmy (LOCAL HOST)

The Lemmy instructions are simple and straightforward. When changing the fields asked of you in the instructions, it's helpful to search and replace the required fields. In nano, when editing a file, press CTRL + \ and follow the instructions at the bottom of the window to find and replace text.

The Lemmy instructions show text for editing with {{ Example }}. To avoid confusion, those curly braces must be removed and replaced with the expected data.

  • If you used NPM's login page to test Cloudflare Tunnels, you will need to login to NPM and change the Port Forward from 81 to 10633
    • Click Hosts -> Proxy Hosts -> Click the 3-Dots for your DOMAINNAME.COM proxy rule -> Edit & Save
  1. Follow Lemmy Install Instructions
    • IGNORE steps Reverse Proxy/Webserver & Let's Encrypt since we have addressed those steps earlier with NPM and Cloudflare Tunnels/Security.
  2. Through a Web Browser, type in your DOMAINNAME.COM and you should see an admin creation page. Complete that and the initial instance setup afterwards.
  3. Test federation, replace capitals with the required information
    • curl -H 'Accept: application/activity+json' https://domainname.com/u/LEMMY_USERNAME

      • If you see .json information, Lemmy is federated
      • If you see .html information, Lemmy is NOT federated
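
The same distinction can be scripted by classifying the response's Content-Type header the way the curl test does. A hedged sketch; `federation_check` is a made-up helper:

```shell
# Decide whether a Content-Type header indicates a federated (ActivityPub)
# response or a plain HTML page.
federation_check() {
  case "$1" in
    *application/activity+json*|*application/ld+json*) echo "federated" ;;
    *text/html*) echo "not federated" ;;
    *) echo "unknown" ;;
  esac
}

# Real usage against your instance would look like:
#   federation_check "$(curl -sI -H 'Accept: application/activity+json' \
#     https://domainname.com/u/LEMMY_USERNAME | grep -i '^content-type:')"
```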

Updating Lemmy Docker Container

See here for more information.

  1. docker compose down
  2. docker compose pull
  3. docker compose up -d

PieFed (LOCAL HOST)

The PieFed installation instructions will provide more detailed information about each step. This guide does NOT cover any email setup for PieFed.

  • If you used NPM's login page to test Cloudflare Tunnels, you will need to login to NPM and change the Port Forward from 81 to 8030

    • Click Hosts -> Proxy Hosts -> Click the 3-Dots for your DOMAINNAME.COM proxy rule -> Edit & Save
  • PieFed Install Instructions

  1. Download & Prepare files
    1. git clone https://codeberg.org/rimu/pyfedi.git
    2. cd pyfedi
    3. cp env.docker.sample .env.docker
  2. Edit & Save files
    1. nano .env.docker
      1. Change value for SECRET_KEY with random numbers and letters
      2. Change value for SERVER_NAME with your DOMAINNAME.COM
    2. nano compose.yaml
      • Note ports 8030:5000. You can change the external container port: 8030: if you are using a custom port. Do NOT touch the internal container port :5000.
        • ports:
        • - '8030:5000'
  3. Build
    1. export DOCKER_BUILDKIT=1
    2. sudo docker compose up --build
      • Wait until text stops scrolling
  4. Access your DOMAINNAME.COM from a Web Browser
    1. You may see a message that says database system is ready to accept connections in your terminal window after PieFed is done installing and loading. This means you are ready to attempt a connection through your Web Browser now.
      • If you see constant permission errors, Open and SSH login to the Raspberry Pi in a new terminal window and do the following to allow PieFed to access the required folders:
        1. cd ~/pyfedi
        2. sudo chown -R USERNAME:USERNAME ./pgdata
          • You can leave this window open; it can be used for step 5.
    2. You may see an "Internal Server Error" after your first connection attempt. This is normal. You will see movement in your terminal window on each attempt to connect to PieFed. Now you can proceed to initialize the database.
  5. Initialize Database
    1. Open and SSH login to the Raspberry Pi in a new terminal window
      1. sudo docker exec -it piefed_app1 sh
      2. export FLASK_APP=pyfedi.py
      3. flask init-db
        • Enter username/email/password. Email is optional.
      4. Access PieFed from your Web Browser again. PieFed should now display. You can log in as admin with the same username and password.
      5. exit
      6. You can close this terminal window now
  6. Return to the terminal with the running docker build and press CTRL + C to stop PieFed.
  7. Run PieFed in the background
    • docker compose up -d
  8. Setup Cron (Automated) Tasks
    • This will set up automated tasks for daily maintenance, weekly maintenance and email notifications.
    • Change USERNAME to your username.
    1. Setup automated tasks
      1. sudo nano /etc/cron.d/piefed
        1. Paste and Save
  • /etc/cron.d/piefed file
5 2 * * * USERNAME docker exec piefed_app1 bash -c "cd /app && ./daily.sh"
5 4 * * 1 USERNAME docker exec piefed_app1 bash -c "cd /app && ./remove_orphan_files.sh"
1 */6 * * * USERNAME docker exec piefed_app1 bash -c "cd /app && ./email_notifs.sh"
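
For reference, the three schedules above decode as minute / hour / day-of-month / month / day-of-week:

```
# 5 2 * * *    → 02:05 every day            (daily.sh)
# 5 4 * * 1    → 04:05 every Monday         (remove_orphan_files.sh)
# 1 */6 * * *  → minute 1 of every 6th hour (email_notifs.sh)
```
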
  1. OPTIONAL: Environment Variables
    • Some functions such as email or captchas won't work unless you add the necessary variables to the ~/pyfedi/.env.docker file. Look at ~/pyfedi/env.sample and add the other variables to ~/pyfedi/.env.docker according to your needs.
    1. View the sample file
      • nano ~/pyfedi/env.sample
    2. Edit & Save .env.docker file
      • nano ~/pyfedi/.env.docker
    3. Restart PieFed Docker container
      • docker compose down && docker compose up -d

Updating PieFed Docker Container

  1. docker compose down
  2. git pull
  3. docker compose up --build
  4. docker compose down && docker compose up -d

Cloudflare Website Settings

These settings are suggested to help manage traffic. See here for more detailed information.

  1. Exclude Settings
    1. From the main page -> Your DOMAINNAME.COM -> Security -> WAF -> Custom Rules -> Click Create Rule -> Change the following settings and values on Cloudflare to match what's listed below:
      • Rule Name: Allow Inbox
      • Field: URI Path
      • Operator: contains
      • Value: /inbox
      • Log matching requests: On
      • Then take action...: Skip
      • WAF components to skip: All remaining custom rules
    2. Click Deploy to complete
  2. Caching Settings
    1. From the main page -> Your DOMAINNAME.COM -> Caching -> Cache Rules -> Click Create rule -> Change the following settings on Cloudflare to match what's listed below:
      • Rule name: ActivityPub
      1. Custom filter expressions: On
        1. Field: URI Path
        2. Operator: Starts with
        3. Value: /activities/
      2. Click Or
      3. Repeat until the rule has 4 expressions total containing the values:
        • /activities/
        • /api/
        • /nodeinfo/
        • /.well-known/webfinger
      • Cache Eligibility: On
      • Edge TTL -> Click + add setting
        • Click Ignore cache-control header and use this TTL
        • Input time-to-live (TTL): 2 hours
      • Click Deploy to complete
    2. Click Create rule again
      • Rule name: ActivityPub2
      1. Custom filter expressions: On
        1. Field: Request Header
        2. Name: accept
        3. Operator: contains
        4. Value: application/activity+json
      2. Click Or
      3. Repeat until the rule has 2 expressions total containing the values:
        • application/activity+json
        • application/ld+json
      • Cache Eligibility: On
      • Edge TTL -> Click + add setting
        • Click Ignore cache-control header and use this TTL
        • Input time-to-live (TTL): 10 seconds
      • Click Deploy to complete
  3. Optimization Settings
    1. Speed -> Optimization -> Content Optimization -> Change the following settings on Cloudflare to match what's listed below:
      • Speed Brain: Off
      • Cloudflare Fonts: Off
      • Early Hints: Off
      • Rocket Loader: Off
  4. Cloudflare Tokens for .env.docker File
    1. Create an API "Zone.Cache Purge" token
      1. After logging in to Cloudflare, go to this page
      2. Click Create Token -> Click Get Started under Create Custom Token
      3. Token Name -> PieFed
      4. Under Permissions -> Change the following drop-down menus to match what's listed below
        • First drop down menu: Zone
        • Second drop down menu: Cache Purge
        • Third drop down menu: Purge
      5. Click Continue to summary -> Click Create Token
      6. Copy the generated API Token. This will be used for CLOUDFLARE_API_TOKEN in the .env.docker file. Note: once you leave this screen the API token will remain, but the generated code that can be copied will disappear forever.
    2. Copy API Zone ID
      1. From the main page -> Your DOMAINNAME.COM -> Scroll down and look for API Zone ID in the far right column
      2. Copy API Zone ID Token. This will be used for CLOUDFLARE_ZONE_ID in the .env.docker File.
    3. The following step must be completed on the Raspberry Pi (LOCAL HOST) where PieFed is running:
      1. nano ~/pyfedi/.env.docker
        1. Add the following lines with your copied API Tokens & Save
          • CLOUDFLARE_API_TOKEN='ZONE.CACHE_PURGE_TOKEN'
          • CLOUDFLARE_ZONE_ID='API_ZONE_ID_TOKEN'
      2. Restart PieFed Docker container
        • docker compose down && docker compose up -d

Troubleshooting

  • If you receive an error while posting images, the folder permissions will need to change. Change USERNAME with your username.
    1. cd ~/pyfedi
    2. sudo chown -R USERNAME:USERNAME ./media

Support and Services

Remote SSH Access Setup

With how Cloudflare works, SSH is not as simple and requires a bit more effort. I'm going to explain how to prepare Termux, an Android terminal app, so you can access the Raspberry Pi remotely. The steps should be quite similar if you are using a Debian 12 distribution.

For remote SSH to work, you must provide a config file with some information. Fortunately, cloudflared will give you all the information you need for the config file.

A subdomain that will be used for the SSH connection will also need to be created. In this example I will simply use the subdomain ssh. which will look like this ssh.DOMAINNAME.COM.

The subdomain must be set up first before setting up the remote clients. You will use the Cloudflare Tunnel name (CLOUDFLARE_TUNNEL_NAME) that you previously created. Also note, the config file edited on the Raspberry Pi Local Host must be copied again to /etc/cloudflared before cloudflared is restarted.

The Cloudflare Tunnel name is the name you chose. Not to be confused with TUNNEL_ID which is a bunch of random letters and numbers that was generated for your tunnel.

For this example I'll use port 22 for SSH connections. This is the default port for SSH connections.

Raspberry Pi Setup (LOCAL HOST)

  1. SSH login to the Raspberry Pi
  2. cloudflared tunnel route dns CLOUDFLARE_TUNNEL_NAME ssh.DOMAINNAME.COM
  3. nano ~/.cloudflared/config.yml
    1. Paste the following and change TUNNEL_ID and DOMAINNAME.COM & Save
  • Example config.yml file
tunnel: TUNNEL_ID
credentials-file: /home/USERNAME/.cloudflared/TUNNEL_ID.json
ingress:
  - hostname: DOMAINNAME.COM
    service: http://localhost:5050/
  - hostname: ssh.DOMAINNAME.COM
    service: ssh://localhost:22
  - service: http_status:404
  1. sudo cp ~/.cloudflared/config.yml /etc/cloudflared/config.yml
  2. sudo systemctl restart cloudflared

Desktop: Install (REMOTE CLIENT)

Android Termux: Install (REMOTE CLIENT)

Termux does not have SSH installed by default. The following will install both ssh and cloudflared:

  • apt install openssh cloudflared -y

Login and Setup (REMOTE CLIENT)

!!Continue here after preparing the LOCAL HOST and REMOTE CLIENTs first!!

!!The following steps are to be completed on the REMOTE CLIENTs!!

  1. Login
    1. cloudflared tunnel login
    2. Complete login
  2. cloudflared access ssh-config --hostname ssh.DOMAINNAME.COM
    1. COPY suggested text
    2. nano ~/.ssh/config
      1. PASTE suggested text & Save
  3. ssh USERNAME@ssh.DOMAINNAME.COM
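
For reference, the suggested text from step 2 is roughly of this shape. Don't hand-write it — copy what cloudflared prints, since the path to the cloudflared binary varies by platform (the /usr/bin path below is an assumption):

```
Host ssh.DOMAINNAME.COM
  ProxyCommand /usr/bin/cloudflared access ssh --hostname %h
```
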

Backup/Restore Setup

I decided to keep it simple and use the rsync command, which comes preinstalled on Raspberry Pi OS. The guide linked below does a good job of explaining rsync in a step-by-step process.

Below the linked guide I'll provide an example of the commands I use to back up and restore my Raspberry Pi. This creates a copy of the /rootfs folders that make up your Raspberry Pi operating system and user folders. The commands will exclude some folders that may cause issues when restoring a backup. The guide linked below has more details.

Since I am going to power down the Pi and physically connect its hard drive to my computer, I don't have to worry about making backups of live, running storage.

The below commands assume I also have an additional EXTERNAL_STORAGE hard drive connected to my computer. The backup command will copy the contents of the Raspberry Pi drive (/rootfs folder) to the EXTERNAL_STORAGE drive (/EXTERNAL_STORAGE/backup folder). The restore command will copy the contents of the EXTERNAL_STORAGE drive (/EXTERNAL_STORAGE/backup/rootfs folder) back to the Raspberry Pi drive (/rootfs folder).

rsync WILL delete data in the target location to sync all files and folders from the source location. Be mindful of which direction you are going to avoid any losses; adding the -n (--dry-run) flag first will show what rsync would change without touching anything. I suggest testing it out on some other folders before committing to backing up and restoring the entire Raspberry Pi. The guide linked below also covers exclusions to minimize backup sizes.

The backup storage MUST be formatted in EXT4 to make sure file permissions and attributes remain the same.

  1. nano ~/.bash_aliases
    1. Add the following aliases & Save
      • alias rsyncBACKUP="sudo rsync -avxhP --delete --exclude={'proc/','sys/','dev/','tmp/','run/','mnt/','media/','home/USERNAME/.cache','lost+found'} /media/USERNAME/rootfs /media/USERNAME/EXTERNAL_STORAGE/backup/"
      • alias rsyncRESTORE="sudo rsync -avxhP --delete --exclude={'proc/','sys/','dev/','tmp/','run/','mnt/','media/','home/USERNAME/.cache','lost+found'} /media/USERNAME/EXTERNAL_STORAGE/backup/rootfs/ /media/USERNAME/rootfs"
  2. Reset bash in terminal
    • . ~/.bashrc
  3. Backup system TO EXTERNAL_STORAGE
    • !!EXT4 file system only!!
    • rsyncBACKUP
  4. Restore system FROM EXTERNAL_STORAGE
    • rsyncRESTORE

Firewall (LOCAL HOST)

  1. Install: Choose ONE
    • Command line only
      • sudo apt install -y ufw
    • Graphical Interface with command line access
      • sudo apt install -y gufw

I haven't figured out how to properly set this up for myself yet, but I figure it's probably worth having for an additional layer of protection.
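
As a starting point, a minimal ruleset might look like the following. This is an assumption-laden sketch, not something tested in this guide: it allows local SSH and the example tunnel port 5050 from the port-forwarding section, then turns the firewall on. Be careful enabling ufw over a remote SSH session before the SSH rule is in place, or you will lock yourself out:

```
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp     # local SSH access
sudo ufw allow 5050/tcp   # the Cloudflare Tunnel ingress port from this guide
sudo ufw enable
```
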


I recently got my hands on a lightly used Raspberry Pi 5 and have been playing around with it and breaking things while trying to learn my way around self-hosting. I have a couple of questions now that I've hit a bit of a roadblock in learning.

  1. Is it possible to set up Lemmy as a local host on a local network only? I'm not worried about federated data from other instances. At this point I just want to experiment and break things before I commit to buying a domain name.

  2. How exactly does a domain name work? I've tried searching for how to redirect traffic from a domain to my Raspberry Pi. Since I don't know much about hosting or networking, I don't know what to search for to find the answer I'm looking for.

  3. How do I protect myself while self hosting? I know the Lemmy documentation suggests using Let's Encrypt, is that all I need to do in order to protect any private data being used?

My goal in the future is to have a local, text-only instance that may connect with a small number of whitelisted instances.

submitted 2 months ago* (last edited 2 months ago) by ptz@dubvee.org to c/selfhosting@slrpnk.net

I had to replace my UPSs a few weeks ago on short notice due to hardware failure, and I ended up getting a few LiFePO4 ones as a stopgap since they were on sale and I'd always wanted to try them. So far, so good. Curious if anyone else has switched to lithium UPSs from lead-acid and how that's going for you.

I have a big 20Ah, 48v e-bike battery that I've used with a sine-wave inverter for standby power, and it's a bit over 11 years old and going strong. So, as far as the batteries in these are concerned, I am cautiously optimistic that they'll last close to the 10 years they're advertised as. The electronics and inverter...we'll see, I guess.

Bonus question: While we're on the subject, has anybody tried those drop-in replacement 12V LiFePO4 batteries for regular UPSs? Supposedly the BMS in them can work with the lead-acid chargers in UPSs and charge safely, but I'm not sure I trust that.

submitted 2 months ago* (last edited 2 months ago) by muntedcrocodile@lemm.ee to c/selfhosting@slrpnk.net

I live in a rural Aussie area (with no fibre options) with the world's shittiest internet and especially bad upload. I've been self-hosting a bunch of things and simply struggling through the shit connection.

Will be getting Starlink to remedy the internet issue, but it seems I need a business (priority) plan to get a public IP so I can access my services from the greater internet. This is more expensive, and I would like to avoid the additional cost if possible.

I was thinking I could WireGuard proxy from my server at home to a cheap/free VPS to bypass the restrictions, but I suspect that would mess with how nginx on my home server manages ports etc. Plus I use my own hardware not just for security but also to avoid recurring costs other than power, so paying for a VPS just to proxy seems like a waste.

Also been having DNS issues with DuckDNS and my dynamic IP, and Starlink doesn't seem to support static IPs, so how should I resolve this?

Any advice or recommendations?


First let me make sure it's clear that I am NOT trying to extend runtime by connecting two UPSs in series. That's been asked a million times on various forums, and is not what I'm trying to accomplish.

I've had 3 UPS units fail on me in the last 12-18 months, and I'm starting to wonder if it's the power flickers that are doing them in. My power rarely goes out for more than a minute or five, but before it does, it always violently flickers for a few seconds. Those flickers are hell on my unprotected equipment, and I'm wondering if that's what has caused my UPSs to die prematurely (the newest one barely lasted 5 months).

The old ones still function and still seem to do automatic voltage regulation, but none of them last more than 1-2 seconds once they switch to battery. I've tested the batteries, and they're fine; they were also all replaced about 9 months ago.

So, what I'm hoping is that the old ones can sit upstream of the new UPSs to take the brunt of any rapid brownouts/surges and keep my new UPSs healthy. They're all pure sine wave and similarly rated.

Thoughts? Warnings/cautions?


cross-posted from: https://lemmy.pierre-couy.fr/post/805239

Happy birthday to Let's Encrypt!

Huge thanks to everyone involved in making HTTPS available to everyone for free!


Consider watching this video with FreeTube, a nifty open-source program that lets you watch YouTube videos without Google spying on your viewing habits!

Combined with LibRedirect, which automatically opens YouTube links in FreeTube, it becomes really slick and effortless to use.
