confusedpuppy

joined 1 year ago
[–] confusedpuppy@lemmy.dbzer0.com 1 points 11 hours ago

A couple weeks ago I stumbled onto the fact that Docker largely bypasses your firewall and manipulates iptables in the background. Because it makes those changes itself, the firewall has no idea they exist and they won't show up when you look at all the firewall policies. You can check iptables directly to see what Docker is doing, but iptables isn't easy or simple to work with.
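
If you want to see what Docker has added behind the firewall's back, you can list its iptables chains directly. A quick sketch (run on the Docker host):

sudo iptables -L DOCKER -n -v --line-numbers          # filter rules Docker created for containers
sudo iptables -t nat -L DOCKER -n -v --line-numbers   # NAT rules that publish container ports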

I noticed your list included firewalld but I have some concerns about that. The first is that the default firewall backend has changed from iptables to nftables, which means the guide you linked is missing a step to change backends. Also, when changing backends by editing /etc/firewalld/firewalld.conf, there is a note saying the iptables backend is deprecated and will be removed in the future:

# FirewallBackend
# Selects the firewall backend implementation.
# Choices are:
#	- nftables (default)
#	- iptables (iptables, ip6tables, ebtables and ipset)
# Note: The iptables backend is deprecated. It will be removed in a future
# release.
FirewallBackend=nftables

If following that guide works for other people, it may be okay for now, although I think looking for an alternative firewall for the future is worth strongly considering.

I did stumble across some ways to help deal with Docker's opened ports. I currently have 3 docker services that all sit behind a docker reverse proxy, in this case Caddy. The first thing to do is create a docker network; for example, I created one called "reverse_proxy" with the command:

docker network create reverse_proxy

After that I add the following lines to each docker-compose.yml file for all three services plus Caddy.

services:
  your_service:    # replace with the actual service name in each compose file
    networks:
      - reverse_proxy

networks:
  reverse_proxy:
    external: true

This will allow the three services plus Caddy to communicate with each other. Running the following command lists all of your containers. The name of each container will be used in the Caddyfile to set up the reverse proxy.

docker container ls --format "table {{.ID}}\t{{.Names}}\t{{.Ports}}" -a

Then you can add the following to the Caddyfile. Replace any capitalized parts with your own domain name and docker container name, and change #### to the internal port number of your docker container. If the ports entry in your docker-compose.yml looks like "5000:8000", 5000 is the external port and 8000 is the internal port.

SUBDOMAIN.DOMAINNAME.COM:80 {
        reverse_proxy DOCKER_CONTAINER_NAME:####
}

After starting the Caddy docker container, things should be working as normal; however, the three services behind the reverse proxy are still reachable from outside it by accessing their ports directly, for example subdomain.domainname.com:5000 in your browser.

You can prefix the service's external port with 127.0.0.1: in docker-compose.yml to force those container ports to be accessible only from the localhost machine.

Before:

    ports:
      - 5000:8000

After:

    ports:
      - 127.0.0.1:5000:8000

After restarting the services, the only port accessible from outside should be Caddy's. You can check which ports are listening with the command

netstat -tunpl

Below I'll leave a working example for Caddy and Kiwix (offline Wikipedia).

Caddy: docker-compose.yml

services:
  caddy:
    container_name: caddy
    image: caddy:latest
    restart: unless-stopped
    ports:
      - 80:80
      - 443:443
    networks:
      - reverse_proxy
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config

volumes:
  caddy_data:
  caddy_config:

networks:
  reverse_proxy:
    external: true

Caddy: Caddyfile

wiki.Domainname.com:80 {
        reverse_proxy kiwix:8080
}

Kiwix: docker-compose.yml (if you plan to use this setup, you MUST download a .zim file and place it in the /data/ folder, in this case /srv/kiwix/data). See: Kiwix Library .zim Files

services:
  kiwix:
    image: ghcr.io/kiwix/kiwix-serve
    container_name: kiwix
    ports:
      - 127.0.0.1:8080:8080
    volumes:
      - /srv/kiwix/data:/data
    command: "*.zim"
    restart: unless-stopped
    networks:
      - reverse_proxy

networks:
  reverse_proxy:
    external: true

What I'm interested in from a firewall is some sort of rate limiting feature. I would like to set it up as a simple last line of defense against DDoS situations. Even with my current setup with Docker and Caddy, I still have no control over the Caddy exposed port, so anything done by the firewall will still be completely ignored.

I may try out podman and see if I can get UFW or Awall to work as I would like. Hopefully that's not too deep of a rabbit hole.

Some people think I play by my rules but I don't even know what I'm doing

Apparently the hearts of 100+ people

 

When it finally came to the firewall, after realizing I was working with docker containers, my brain said "no more rabbit holes, friend." Thanks for the information.

Also, gufw is just a simple graphical interface built on top of ufw. I was using VNC when I began learning all this and planned on using gufw. By the time I finished the guide, I had become comfortable handling everything from the terminal alone. It was just kinda there in the guide at that point.

That's good to know about docker. I ran into issues modifying docker-compose.yml files while a container was up so I just made it a habit to shut containers down before making changes. I can see using pull while a container is up being more important for places concerned about unnecessary downtime though.

I'll be using whitelists to manage federation in order to keep things small. Also I am only interested in allowing people in my local community to join since that's the goal I am working towards.

I am also interested in seeing how it does hold up in the future but it's not a permanent solution. It's why I went through the process of learning RSync so I can hopefully have a simpler data migration process and setup whenever that time comes.

I wanted to share the process for everyone since a lot of what's in the guide could be useful for anyone with more appropriate server solutions, especially regarding Cloudflare's services.

The Pi itself was convenient for learning since wiping everything to start over is simple and quick.

 

I've recently been able to set up Lemmy and PieFed instances on a Raspberry Pi 5 and wanted to share the process for anyone else interested in self hosting an instance.

The following instructions are based on a used Raspberry Pi 5 (ARM64) plus a USB external hard drive. I used the Raspberry Pi OS image, which is based on Debian 12, so the following instructions should be similar enough for other Debian 12 distributions and should hopefully get the same results.

The only other purchase I've made was a domain name which was super cheap ($15 a year which includes hiding WHOIS information). Everything else is free.

My residential ISP service blocks incoming data on "business" ports such as Port 80 and 443. Users won't be able to access your site securely if these ports block incoming data. To work around this I used Cloudflare Tunnels. This allows users to access your site normally. Cloudflare Tunnel will send incoming data to a port of your choosing (between 1024-65,535) and users can access your self-hosted instance.

Cloudflare also provides Transport Layer Security (TLS), which encrypts traffic and protects connections. This also means your website goes from HTTP:// to HTTPS:// in the address bar. Federation requires TLS, so this will be useful. Cloudflare Tunnel also introduces some complications which I'll address later.

Quick Links

Requirements

  • A purchased Domain Name
  • Set Cloudflare as your Domain Name's primary Domain Name Servers (DNS). See here
    • Do this up to a day in advance, as changes may take up to a day to take effect.
  • Raspberry Pi 5 with Raspberry Pi OS (64) image installed
    • You can use other hardware with Debian 12 Operating Systems but I can't guarantee these instructions will be the exact same
  • A USB external hard drive
    • Something with good read/write speeds will help
  • Access to any routers on your private network
    • You will need access to Port Forwarding options. You will need to read any router manuals or ask your Internet Service Provider since this is different for every device.
  • SSH remote access to Raspberry Pi 5. See here

Setup & Software Installation (LOCAL HOST)

The required software to host Lemmy or PieFed will include

  • Docker
  • Cloudflared
  • Lemmy or PieFed

Additional software that I will also cover but isn't strictly necessary:

  • Nginx Proxy Manager (NPM)
  • UFW/GUFW - Simple Firewall
  • RSync - For making backups
  • Termux - Android terminal app that will be used for remote SSH access

Docker (LOCAL HOST)

The official Docker instructions are clear, quick and simple. The process will also add their repository information for quick and easy updates. This will be installed as a service on your operating system.
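
For reference, the apt-repository method from Docker's documentation looks roughly like this on Debian 12; treat it as a sketch and check docs.docker.com for the current steps before copying:

# Add Docker's official GPG key and apt repository, then install the engine and the compose plugin
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin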

Port Forwarding/Reverse Proxy

Port Forwarding

Pick a port number between 1024-65,535. This is how Cloudflare will send data and remote connections to your instance without worrying about blocked ports. I like to use 5050 because it's simple, easy to remember and not used by any of my other self-hosted services. To be consistent, for the rest of this guide I will use port 5050 as an example. Feel free to replace it with any port number you feel like using.

Router settings are different for each device, refer to a manual or call your ISP for support depending on your situation.

  1. SSH login to your Raspberry Pi and enter the command hostname -I
    • This will print the IP addresses used by the host machine. The first IP address printed will be your local IP address. The rest of the addresses are NOT needed and can be ignored.
  2. Access your Port Forwarding settings in your private network router.
  3. Find your Raspberry Pi device by the Local IP address
  4. Add a rule to allow TCP connections on port 5050.
    • If your router port forwarding settings show Internal and External fields, simply add 5050 to both fields.
  5. Save

If you are only hosting a Lemmy or PieFed instance, you will be able to do that without the need of a Reverse Proxy which is described below. In this case you can simply use the default ports for Lemmy or PieFed. Replace my example port 5050 with the following depending on your needs:

  • Lemmy default port: 10633
  • PieFed default port: 8030

Reverse Proxy

A reverse proxy allows the local host machine to distribute incoming user connections to different services hosted on the local machine. For example, all data from Cloudflare comes in on port 5050 when accessing the DOMAINNAME.COM address. I can use Subdomains to redirect incoming connections on port 5050 to open ports on my local host machine.

For example, both Lemmy and PieFed can be hosted at the same time. We can use the subdomains lemmy. and piefed. to redirect traffic. When a user types lemmy.DOMAINNAME.COM into the address bar, Cloudflare will send the connection through 5050 to your home and private router which then continues to the Reverse Proxy. The Reverse Proxy running on the local host machine will catch the subdomain request and immediately switch to port 10633 where a connection to Lemmy will be completed. Typing in piefed.DOMAINNAME.COM will guide all requests to port 8030 where PieFed is running and complete that connection.
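
If you ever want the same subdomain routing without NPM's web interface, the idea can be sketched in a Caddyfile instead (an illustration only, not part of this guide; TLS is terminated by Cloudflare, so the sites listen for plain HTTP on the tunnel port):

http://lemmy.DOMAINNAME.COM:5050 {
        reverse_proxy localhost:10633
}
http://piefed.DOMAINNAME.COM:5050 {
        reverse_proxy localhost:8030
}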

For simplicity I use Nginx Proxy Manager: it's docker based, with an easy to use web interface that you can reach from any Web Browser on your local network. It has its limitations but works fine for the current needs.

Nginx Proxy Manager (LOCAL HOST)

NPM is extremely simple to set up. Simply create a new folder, create a docker-compose.yml file filled with the necessary information and then run the container.

  1. mkdir ~/npm
  2. cd ~/npm
  3. nano docker-compose.yml
  4. Paste the following into the docker-compose.yml file and save:
  • docker-compose.yml
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: always
    ports:
      - '5050:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt

Note that the external port 5050: maps to NPM's internal port :80. Make sure 5050 matches the Cloudflare Tunnel port you have decided on using.

  1. docker compose up -d and wait for the services to start running
  2. In your Web Browser on any device connected to your private network, type your Raspberry Pi's local address followed by :81 into the address bar. For example 192.168.0.100:81. See Port Fowarding for help finding your local IP address.
  3. The login page will ask for account details. Enter the following:
    • Account = admin@example.com
    • Password = changeme
    • You will now be asked to create a new admin account
  4. Reverse Proxy Setup:
    1. After Login, click Hosts -> Proxy Hosts -> Add New Proxy
    2. Domain Names field: Your DOMAINNAME.COM
      • Press Enter to store that domain name, NPM won't store your domain name if you don't hit enter
    3. Forward Hostname/IP field: Your local host machine ip address (example 192.168.0.100). See Port Fowarding for help finding your local IP address.
    4. Forward Port field: I like to use port 81 to test Cloudflare Tunnels before installing Lemmy or PieFed. This is the login page for NPM. This can be quickly changed to the ports listed below after confirming a secure connection from Cloudflare Tunnels.
      • Lemmy: 10633
      • PieFed: 8030
    5. Block Common Exploits: Enabled
    6. Websockets Support: Enabled
    7. Save

Cloudflared (LOCAL HOST)

!!Only proceed with these instructions after setting Cloudflare as your Primary DNS provider. This process may take up to a day after changing nameservers!!

The following instructions do a few things. First you will install Cloudflared (with a 'd'). Then you will be asked to log in, create a tunnel, run the tunnel, and then create a service (while the current tunnel is running) so your tunnel runs automatically from startup. A rough sketch of the tunnel commands follows after the list below.

I've noted that this will be installed on the local host (where you are hosting an instance); we will be installing Cloudflared on multiple devices for reasons I will cover later. Hopefully this reduces confusion later on.

  1. Service Install & Create Tunnel & Run Tunnel
    1. Select option -> Linux
    2. Step 4: Change -> credentials-file: /root/.cloudflared/<Tunnel-UUID>.json -> credentials-file: /home/USERNAME/.cloudflared/<Tunnel-UUID>.json
    3. Step 5: SKIP step 2, you will get an error and it's not important anyways.
    4. Step 6: Keep this window open after running the new tunnel
      • ONLY AFTER completing step 2.i.d. below (Run as a service), press CTRL + C to exit this tunnel
  • Example config.yml file (See above about Step 4)
tunnel: TUNNEL_ID
credentials-file: /home/USERNAME/.cloudflared/TUNNEL_ID.json
ingress:
  - hostname: DOMAINNAME.COM
    service: http://localhost:5050/
  - service: http_status:404
  2. Run as a service

    1. Open and SSH login to the Raspberry Pi in a new terminal window
      1. You will get an error if you do not copy your config.yml from your Home folder to /etc/cloudflared. You will need to copy this file again if you make any changes to the config.yml such as adding more tunnels. This will be covered later when setting up Remote SSH access.
        • sudo cp ~/.cloudflared/config.yml /etc/cloudflared/config.yml
      2. cloudflared service install
      3. systemctl start cloudflared
      4. systemctl status cloudflared
        • Check to see if things are green and working, then press CTRL + C when done to exit
        • You can now stop the running tunnel from the first step as previously stated (See Step 1.iv.)
      5. You can close this terminal window now
  3. Enable SSL connections on Cloudflare site

    • Log in to your account on Cloudflare and simply click on the following links
    1. From the main page -> Your DOMAINNAME.COM -> SSL/TLS -> Configure -> Full -> Save
    2. SSL/TLS -> Edge Certificates -> Change the following settings on Cloudflare to match what's listed below:
      • Always Use HTTPS: On
      • Opportunistic Encryption: On
      • Automatic HTTPS Rewrites: On
      • Universal SSL: Enabled
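
For reference, the command sequence behind the tunnel setup in step 1 looks roughly like this. This is a sketch assuming a tunnel named MY_TUNNEL; follow Cloudflare's linked guide for the authoritative steps:

cloudflared tunnel login                                # authorize cloudflared for your domain in the browser
cloudflared tunnel create MY_TUNNEL                     # creates the tunnel and writes ~/.cloudflared/<Tunnel-UUID>.json
cloudflared tunnel route dns MY_TUNNEL DOMAINNAME.COM   # point DOMAINNAME.COM at the tunnel
nano ~/.cloudflared/config.yml                          # fill in the example config shown above
cloudflared tunnel run MY_TUNNEL                        # keep this running until the service is installed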

If you used NPM as a reverse proxy and it's set to port 81, go to any Web Browser and type in your DOMAINNAME.COM. You should be directed to NPM's login page. Check the address bar and your domain name should have a padlock symbol followed by https://domainname.com/. Note that it should read HTTPS:// (with an s) and not HTTP:// (without an s). HTTPS along with the padlock symbol means your connections are properly encrypted.
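
You can also check from a terminal instead of a browser; a one-liner like this (assuming curl is installed) should report a response over HTTPS:

curl -sI https://DOMAINNAME.COM | head -n 1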

This is the most complicated step for self-hosting. If you can confirm your connection is encrypted, setting up other services and webapps is fairly straightforward.

Lemmy (LOCAL HOST)

The lemmy instructions are simple and straightforward. When changing the fields asked of you in the instructions, it's helpful to search and replace the required fields. In nano, while editing a file, press CTRL + \ and follow the instructions at the bottom of the window to find and replace text.

The Lemmy instructions show text for editing with {{ Example }}. To avoid confusion, those curly braces must be removed and replaced with the expected data.
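
If you'd rather do the replacement from the shell, a sed one-liner also works. This is a sketch only; {{ domain }} is just an example placeholder here, so use whatever placeholder the Lemmy files actually contain:

sed -i 's/{{ domain }}/DOMAINNAME.COM/g' docker-compose.yml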

  • If you used NPM's login page to test Cloudflare Tunnels, you will need to login to NPM and change the Port Forward from 81 to 10633
    • Click Hosts -> Proxy Hosts -> Click the 3-Dots for your DOMAINNAME.COM proxy rule -> Edit & Save
  1. Follow Lemmy Install Instructions
    • IGNORE steps Reverse Proxy/Webserver & Let's Encrypt since we have addressed those steps earlier with NPM and Cloudflare Tunnels/Security.
  2. Through a Web Browser, type in your DOMAINNAME.COM and you should see an admin creation page. Complete that and the initial instance setup afterwards.
  3. Test federation, replace capitals with the required information
    • curl -H 'Accept: application/activity+json' https://domainname.com/u/LEMMY_USERNAME

      • If you see .json information, Lemmy is federated
      • If you see .html information, lemmy is NOT federated

Updating Lemmy Docker Container

See here for more information.

  1. docker compose down
  2. docker compose pull
  3. docker compose up -d

PieFed (LOCAL HOST)

The PieFed installation instructions will provide more detailed information about each step. This guide does NOT cover any email setup for PieFed.

  • If you used NPM's login page to test Cloudflare Tunnels, you will need to login to NPM and change the Port Forward from 81 to 8030

    • Click Hosts -> Proxy Hosts -> Click the 3-Dots for your DOMAINNAME.COM proxy rule -> Edit & Save
  • PieFed Install Instructions

  1. Download & Prepare files
    1. git clone https://codeberg.org/rimu/pyfedi.git
    2. cd pyfedi
    3. cp env.docker.sample .env.docker
  2. Edit & Save files
    1. nano .env.docker
      1. Change value for SECRET_KEY with random numbers and letters
      2. Change value for SERVER_NAME with your DOMAINNAME.COM
    2. nano compose.yaml
      • Note ports 8030:5000. You can change the external container port: 8030: if you are using a custom port. Do NOT touch the internal container port :5000.
        • ports:
        • - '8030:5000'
  3. Build
    1. export DOCKER_BUILDKIT=1
    2. sudo docker compose up --build
      • Wait until text stops scrolling
  4. Access your DOMAINNAME.COM from a Web Browser
    1. You may see a message that says database system is ready to accept connections in your terminal window after PieFed is done installing and loading. This means you are ready to attempt a connection through your Web Browser now.
      • If you see constant permission errors, Open and SSH login to the Raspberry Pi in a new terminal window and do the following to allow PieFed to access the required folders:
        1. cd ~/pyfedi
        2. sudo chown -R USERNAME:USERNAME ./pgdata
          • You can leave this window open; it can be used for step 5.
    2. You may see an "Internal Server Error" after your first connection attempt. This is normal. You will see movement in your terminal window on each attempt to connect to PieFed. Now you can proceed to initialize the database.
  5. Initialize Database
    1. Open and SSH login to the Raspberry Pi in a new terminal window
      1. sudo docker exec -it piefed_app1 sh
      2. export FLASK_APP=pyfedi.py
      3. flask init-db
        • Enter username/email/password. Email is optional.
      4. Access PieFed from your Web Browser again. PieFed should now display. You can log in as admin with the same username and password.
      5. exit
      6. You can close this terminal window now
  6. Return to the terminal with the running docker build and press CTRL + C to stop PieFed.
  7. Run PieFed in the background
    • docker compose up -d
  8. Setup Cron (Automated) Tasks
    • This will set up automated tasks for daily maintenance, weekly maintenance and email notifications.
    • Change USERNAME to your username.
    1. Setup automated tasks
      1. sudo nano /etc/cron.d/piefed
        1. Paste and Save
  • /etc/cron.d/piefed file
5 2 * * * USERNAME docker exec piefed_app1 bash -c "cd /app && ./daily.sh"
5 4 * * 1 USERNAME docker exec piefed_app1 bash -c "cd /app && ./remove_orphan_files.sh"
1 */6 * * * USERNAME docker exec piefed_app1 bash -c "cd /app && ./email_notifs.sh"
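# Schedule fields are minute, hour, day-of-month, month, day-of-week:
#   5 2 * * *    runs daily at 02:05
#   5 4 * * 1    runs Mondays at 04:05
#   1 */6 * * *  runs every 6 hours at minute 1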
  9. OPTIONAL: Environment Variables
    • Some functions such as email or captchas won't work unless you add the necessary variables to the ~/pyfedi/.env.docker file. Look at ~/pyfedi/env.sample and add the other variables to ~/pyfedi/.env.docker according to your needs.
    1. View the sample file
      • nano ~/pyfedi/env.sample
    2. Edit & Save .env.docker file
      • nano ~/pyfedi/.env.docker
    3. Restart PieFed Docker container
      • docker compose down && docker compose up -d

Updating PieFed Docker Container

  1. docker compose down
  2. git pull
  3. docker compose up --build
  4. docker compose down && docker compose up -d

Cloudflare Website Settings

These settings are suggested to help manage traffic. See here for more detailed information.

  1. Exclude Settings
    1. From the main page -> Your DOMAINNAME.COM -> Security -> WAF -> Custom Rules -> Click Create Rule -> Change the following settings and values on Cloudflare to match what's listed below:
      • Rule Name: Allow Inbox
      • Field: URI Path
      • Operator: contains
      • Value: /inbox
      • Log matching requests: On
      • Then take action...: Skip
      • WAF components to skip: All remaining custom rules
    2. Click Deploy to complete
  2. Caching Settings
    1. From the main page -> Your DOMAINNAME.COM -> Caching -> Cache Rules -> Click Create rule -> Change the following settings on Cloudflare to match what's listed below:
      • Rule name: ActivityPub
      1. Custom filter expressions: On
        1. Field: URI Path
        2. Operator: Starts with
        3. Value: /activities/
      2. Click Or
      3. Repeat until you have values for 4 rules total containing the values:
        • /activities/
        • /api/
        • /nodeinfo/
        • /.well-known/webfinger
      • Cache Eligibility: On
      • Edge TTL -> Click + add setting
        • Click Ignore cache-control header and use this TTL
        • Input time-to-live (TTL): 2 hours
      • Click Deploy to complete
    2. Click Create rule again
      • Rule name: ActivityPub2
      1. Custom filter expressions: On
        1. Field: Request Header
        2. Name: accept
        3. Operator: contains
        4. Value: application/activity+json
      2. Click Or
      3. Repeat until you have 2 rules total containing the values:
        • application/activity+json
        • application/ld+json
      • Cache Eligibility: On
      • Edge TTL -> Click + add setting
        • Click Ignore cache-control header and use this TTL
        • Input time-to-live (TTL): Type 10 seconds
      • Click Deploy to complete
  3. Optimization Settings
    1. Speed -> Optimization -> Content Optimization -> Change the following settings on Cloudflare to match what's listed below:
      • Speed Brain: Off
      • Cloudflare Fonts: Off
      • Early Hints: Off
      • Rocket Loader: Off
  4. Cloudflare Tokens for .env.docker File
    1. Create an API "Zone.Cache Purge" token
      1. After logging in to Cloudflare, go to this page
      2. Click Create Token -> Click Get Started under Create Custom Token
      3. Token Name -> PieFed
      4. Under Permissions -> Change the following drop down menus to match what's listed below
        • First drop down menu: Zone
        • Second drop down menu: Cache Purge
        • Third drop down menu: Purge
      5. Click Continue to summary -> Click Create Token
      6. Copy the generated API Token. This will be used for CLOUDFLARE_API_TOKEN in the .env.docker file. Note, once you leave this screen, the API token will remain but the generated code that can be copied will disappear forever.
    2. Copy API Zone ID
      1. From the main page -> Your DOMAINNAME.COM -> Scroll down and look for API Zone ID in the far right column
      2. Copy API Zone ID Token. This will be used for CLOUDFLARE_ZONE_ID in the .env.docker File.
    3. The following step must be completed on the Raspberry Pi (LOCAL HOST) where PieFed is running:
      1. nano ~/pyfedi/.env.docker
        1. Add the following lines with your copied API Tokens & Save
          • CLOUDFLARE_API_TOKEN = 'ZONE.CACHE_PURGE_TOKEN'
          • CLOUDFLARE_ZONE_ID = 'API_ZONE_ID_TOKEN'
      2. Restart PieFed Docker container
        • docker compose down && docker compose up -d

Troubleshooting

  • If you receive an error while posting images, the folder permissions will need to change. Change USERNAME with your username.
    1. cd ~/pyfedi
    2. sudo chown -R USERNAME:USERNAME ./media

Support and Services

Remote SSH Access Setup

With how Cloudflare works, SSH is not as simple and requires a bit more effort. I'm going to explain how to prepare Termux, an android terminal app, so you can access the Raspberry Pi remotely. The steps should be quite similar if you are using a Debian 12 distribution.

For remote SSH to work, you must provide a config file with some information. Fortunately, cloudflared will give you all the information you need for the config file.
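
For reference, the entry cloudflared suggests for ~/.ssh/config on the remote client usually looks something like this (a sketch; the cloudflared path on your client may differ):

Host ssh.DOMAINNAME.COM
  ProxyCommand /usr/bin/cloudflared access ssh --hostname %h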

A subdomain that will be used for the SSH connection will also need to be created. In this example I will simply use the subdomain ssh. which will look like this ssh.DOMAINNAME.COM.

The subdomain must be set up first before setting up the remote clients. You will use the Cloudflare Tunnel name (CLOUDFLARE_TUNNEL_NAME) that you previously created. Also note, the config file edited on the Raspberry Pi Local Host must be copied again to /etc/cloudflared before cloudflared is restarted.

The Cloudflare Tunnel name is the name you chose. Not to be confused with TUNNEL_ID which is a bunch of random letters and numbers that was generated for your tunnel.

For this example I'll use port 22 for SSH connections. This is the default port for SSH connections.

Raspberry Pi Setup (LOCAL HOST)

  1. SSH login to the Raspberry Pi
  2. cloudflared tunnel route dns CLOUDFLARE_TUNNEL_NAME ssh.DOMAINNAME.COM
  3. nano ~/.cloudflared/config.yml
    1. Paste the following and change TUNNEL_ID and DOMAINNAME.COM & Save
  • Example config.yml file
tunnel: TUNNEL_ID
credentials-file: /home/USERNAME/.cloudflared/TUNNEL_ID.json
ingress:
  - hostname: DOMAINNAME.COM
    service: http://localhost:5050/
  - hostname: ssh.DOMAINNAME.COM
    service: ssh://localhost:22
  - service: http_status:404
  4. sudo cp ~/.cloudflared/config.yml /etc/cloudflared/config.yml
  5. sudo systemctl restart cloudflared

Desktop: Install (REMOTE CLIENT)

Android Termux: Install (REMOTE CLIENT)

Termux does not have SSH installed by default. This will install both ssh and cloudflared:

  • apt install openssh cloudflared -y

Login and Setup (REMOTE CLIENT)

!!Continue here after preparing the LOCAL HOST and REMOTE CLIENTs first!!

!!The following steps are to be completed on the REMOTE CLIENTs!!

  1. Login
    1. cloudflared tunnel login
    2. Complete login
  2. cloudflared access ssh-config --hostname ssh.DOMAINNAME.COM
    1. COPY suggested text
    2. nano ~/.ssh/config
      1. PASTE suggested text & Save
  3. ssh USERNAME@ssh.DOMAINNAME.COM

Backup/Restore Setup

I decided to keep it simple and use the rsync command which comes already installed on Raspberry Pi OS. The guide linked below does a good job of explaining rsync in a step by step process.

Below the linked guide I'll provide an example of the commands I use to back up and restore my Raspberry Pi. This creates a copy of the /rootfs folders that make up your Raspberry Pi operating system and user folders. The commands exclude some folders that may cause issues when restoring a backup. The guide linked below has more details.

Since I am going to power down the Pi and physically connect its hard drive to my computer, I don't have to worry about making backups of live, running storage.

The below commands assume I also have an additional EXTERNAL_STORAGE hard drive connected to my computer. This means the backup command will copy the contents from the Raspberry Pi drive (/rootfs folder) to the EXTERNAL_STORAGE drive (/EXTERNAL_STORAGE/backup folder). The restore command will copy the contents from the EXTERNAL_STORAGE drive (/EXTERNAL_STORAGE/backup/rootfs folder) to the Raspberry Pi drive (/rootfs folder)

rsync WILL delete data on the target location to sync all files and folders from the source location. Be mindful of which direction you are going to avoid any losses. I suggest testing it out on some other folders before committing to backing up and restoring the entire Raspberry Pi. The guide linked below also covers exclusions to minimize backup sizes.
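
To preview what a sync would change without touching anything, rsync's dry-run flag is handy. A sketch using made-up test folders:

sudo rsync -avxhP --delete --dry-run /path/to/test_source/ /path/to/test_destination/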

The backup storage MUST be formatted in EXT4 to make sure file permissions and attributes remain the same.

  1. nano ~/.bash_aliases
    1. Add the following aliases & Save
      • alias rsyncBACKUP="sudo rsync -avxhP --delete --exclude={'proc/','sys/','dev/','tmp/','run/','mnt/','media/','home/USERNAME/.cache','lost+found'} /media/USERNAME/rootfs /media/USERNAME/EXTERNAL_STORAGE/backup/"
      • alias rsyncRESTORE="sudo rsync -avxhP --delete --exclude={'proc/','sys/','dev/','tmp/','run/','mnt/','media/','home/USERNAME/.cache','lost+found'} /media/USERNAME/EXTERNAL_STORAGE/backup/rootfs/ /media/USERNAME/rootfs"
  2. Reset bash in terminal
    • . ~/.bashrc
  3. Backup system TO EXTERNAL_STORAGE
    • !!EXT4 file system only!!
    • rsyncBACKUP
  4. Restore system FROM EXTERNAL_STORAGE
    • rsyncRESTORE

Firewall (LOCAL HOST)

  1. Install: Choose ONE
    • Command line only
      • sudo apt install -y ufw
    • Graphical Interface with command line access
      • sudo apt install -y gufw

I haven't figured out how to properly set this up for myself yet, but I figure it's probably worth having for an additional layer of protection.
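
If you want a minimal starting point once UFW is installed, a common baseline looks like this. A sketch only; keep in mind that ports published directly by Docker can bypass UFW, so this mostly protects non-Docker services:

sudo ufw default deny incoming     # drop unsolicited inbound traffic
sudo ufw default allow outgoing
sudo ufw allow 22/tcp              # keep SSH reachable
sudo ufw allow 5050/tcp            # the port your Cloudflare Tunnel forwards to
sudo ufw enable
sudo ufw status verbose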

I haven't had a chance to really test how Lemmy and PieFed work long term on the Pi 5 yet. So far it's been quick and responsive and I'm still using wifi instead of a direct ethernet connection to the main modem. Ethernet is for the future. I still have more work to finish on the Pi 5.

The Pi 5 is also running Kiwix, Dufs for file sharing and a static page, all through their own docker containers. With only me using it, everything seems to run quite smoothly.

My goals with the Pi 5 aren't long term. I'm using it more as a working example until I can get better equipment for hosting but that involves other plans for a local project I want to put my energy into now.

You'll definitely want to use a reliable type of USB media storage with good read and write speeds. An SD card won't do well considering these webapps are database heavy and will be constantly writing stuff.

Lemmy easy deploy seems interesting. If you can get Caddy in that script to handle TLS certificates, it should do nicely. I struggled with Let's Encrypt and went a different route for now.

 


I had a nice weekend which was needed. Met up with a friend to go to a techno party. One guy who came and danced with us for a while called us cute. I'm guessing he saw us having a good time enjoying the music and talking to people and it seemed like he enjoyed our vibes. It was a super nice compliment for both of us though.

After the party my friend and I went back to her friend's apartment to chill until the morning when I could catch a train back home. We talked and shared music while she sketched away. It was so chill and a nice way to unwind.

When she dropped me off at the station, she gave me a hug that felt a little extra, like there was a little appreciation behind it. I think she was happy to have someone who was able to talk and laugh about some small mistakes, which she was able to learn from throughout the night.

I treat her like a person just as I would with anyone else. It makes me feel good to have that effect on people. It also makes me a little sad that this type of treatment towards other people seems to be rare... It really takes far less energy to be accepting than it does to wake up angry and bitter at innocent people.

Other than that, I'm growing really tired and frustrated with the technology dependence we are being cornered into. Technology is a constant source of frustration, and yet it feels like the majority have normalized its use and the headaches that come with it. It feels absurd and it's exhausting.

I'm trying hard to enjoy the moments and people that bring me happiness, but there are times where my mind wanders towards the future. It gets so hard to breathe in those moments...

When I get the motivation again I will give this a try. A while ago I was wondering if a tool like this existed so it's nice to see it pop up now. Thank you for this.

[–] confusedpuppy@lemmy.dbzer0.com 1 points 1 month ago (1 children)

For verification I used the built-in certificate manager in Nginx Proxy Manager. I generated an API token from Cloudflare with Zone:DNS:Edit permission for the domain I am using. Then I chose DNS verification in Proxy Manager and put the API token in the edit box. This has been successful every time.

Do you use Cloudflare Tunnel, or are you using Cloudflare as a Dynamic DNS? I've had issues with certbot, but I think I just wasn't using it properly. What process did you use for DNS verification?
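
For anyone comparing approaches: I haven't checked what NPM runs internally, but a DNS challenge done by hand with certbot and its Cloudflare plugin looks roughly like this (a sketch only, assuming the certbot-dns-cloudflare plugin is installed; the credentials file name is just an example):

# store the Cloudflare API token somewhere only you can read
mkdir -p ~/.secrets
printf 'dns_cloudflare_api_token = YOUR_CLOUDFLARE_TOKEN\n' > ~/.secrets/cloudflare.ini
chmod 600 ~/.secrets/cloudflare.ini

# request a certificate using the DNS-01 challenge; no inbound ports are needed
sudo certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d DOMAINNAME.COM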

[–] confusedpuppy@lemmy.dbzer0.com 2 points 1 month ago* (last edited 1 month ago)

I'll give your suggestions a try when I get the motivation to try again. Sort of burnt myself out at the moment and would like to continue with other stuff.

I am actually using the Cloudflare Tunnel with SSL enabled which is how I was able to achieve that in the first place.

For the curious here are the steps I took to get that to work:

This is on a Raspberry Pi 5 (arm64, Raspberry Pi OS/Debian 12)

# Cloudflared -> Install & Create Tunnel & Run Tunnel
                 -> https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/get-started/create-local-tunnel/
                    -> Select option -> Linux
                    -> Step 4: Change -> credentials-file: /root/.cloudflared/<Tunnel-UUID>.json -> credentials-file: /home/USERNAME/.cloudflared/<Tunnel-UUID>.json
              -> Run as a service
                 -> Open new terminal
                 -> sudo cp ~/.cloudflared/config.yml /etc/cloudflared/config.yml
                 -> https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/configure-tunnels/local-management/as-a-service/
              -> Configuration (Optional) -> https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/configure-tunnels/local-management/configuration-file/
                 -> sudo systemctl restart cloudflared
              -> Enable SSL connections on Cloudflare site
                 -> Main Page -> Websites -> DOMAINNAME.COM -> SSL/TLS -> Configure -> Full -> Save
                    -> SSL/TLS -> Edge Certificates -> Always Use HTTPS: On -> Opportunistic Encryption: On -> Automatic HTTPS Rewrites: On -> Universal SSL: Enabled

Cloudflared complains about ~/.cloudflared/config.yml and /etc/cloudflared/config.yml not matching. I just edit ~/.cloudflared/config.yml and run sudo cp ~/.cloudflared/config.yml /etc/cloudflared/config.yml again followed by sudo systemctl restart cloudflared whenever I make any changes.
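
The two commands can also be chained into a single line so neither gets forgotten (this is just the same copy and restart from above):

sudo cp ~/.cloudflared/config.yml /etc/cloudflared/config.yml && sudo systemctl restart cloudflared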

The configuration step is just there as a reference for myself; it's not necessary for a simple setup.

The tunnel is nice and convenient. It does the job well. I just have a strong personal preference to not depend on large organizations. I've installed Timeshift as a backup management for myself so I can easily revisit this topic later when my brain is ready.

Nginx Proxy Manager has been handling certs for me, I'm not sure how it handles certs since it's packaged in a docker container. I can only assume it does something similar to Caddy which also automatically handles certificate registration and renewals. So probably certbot.

All I know is that NPM has an option for DNS challenges which is how I got my certs in the first place.

That's what I thought. NPM is handling the certs just fine.

Could it be that I'm setting up the reverse proxy wrong? Whenever I enable SSL on that reverse proxy, the connection just hangs and drops after a minute. I'm not understanding why it's doing that.

 

At the moment I am using Cloudflare as a way to provide SSL to my self-hosted site. The site sits behind a residential connection that blocks incoming data on commonly used ports, including 80 and 443. It's a perfectly fine and reasonable solution which does what I want, but I'm looking to try something different.

What I would like to try is using Let's Encrypt on a non-standard port. I understand there are plenty of good reasons not to do this, mainly that some places such as workplaces may block higher-numbered ports for security reasons. That's fair, but I am still interested in learning how to encrypt uncommon ports with Let's Encrypt.

I am using Nginx Proxy Manager to handle Let's Encrypt certificates. It's able to complete the DNS challenge required to prove I own the domain name and handles automated certificate renewals as well. I have NPM acting as a reverse proxy, guiding outside connections from Cloudflare on port 5050 to port 80 on NPM. The connection then gets sent out locally to port 81, which is the admin web page for NPM (I'm just using it as a page to test whether the connection is secured).

Whenever I enable Let's Encrypt SSL and try to connect to my site, the connection times out and nothing happens. I'm not sure if Let's Encrypt is expecting to reach ports 80/443 or if there is something wrong with my reverse proxy settings that breaks the encryption along the way. Most discussions just assume ports 80/443 are open, which is fair since that's the most common situation. The few sites discussing the use of uncommon ports are either many years out of date or people talking about success without sharing any details. I'm sort of at the end of what I can search at this point.

What I'm hoping to learn out of all this is how encryption and reverse proxies work together, because those two things have been a struggle for me to understand throughout this whole learning process. I would appreciate it a lot if anyone had any resources or experiences to share about this.
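
One detail worth noting: Let's Encrypt's HTTP-01 challenge specifically needs the domain reachable on port 80, while the DNS-01 challenge (the one NPM performs here) needs no inbound ports at all, so the certificate itself shouldn't care which port the site is served on. To see whether a TLS handshake actually completes on the non-standard port, a direct test from outside the network can help (a sketch; replace the host and port with your own):

openssl s_client -connect DOMAINNAME.COM:5050 -servername DOMAINNAME.COM < /dev/null
# a working setup prints a certificate chain; a hang or reset points at the proxy rather than the certificate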

 

I've recently been able to set up Lemmy and PieFed instances on a Raspberry Pi 5 and wanted to share the process for anyone else interested in self hosting an instance.

The following instructions are based on a used Raspberry Pi 5 (ARM64) plus a USB external hard drive for the hardware. I used the Raspberry Pi OS image, which is based on Debian 12. The instructions should be similar enough for other Debian 12 distributions and should hopefully get the same results.

The only other purchase I've made was a domain name which was super cheap ($15 a year which includes hiding WHOIS information). Everything else is free.

My residential ISP service blocks incoming data on "business" ports such as Port 80 and 443. Users won't be able to access your site securely if these ports block incoming data. To work around this I used Cloudflare Tunnels. This allows users to access your site normally. Cloudflare Tunnel will send incoming data to a port of your choosing (between 1024-65,535) and users can access your self-hosted instance.

Cloudflare also provides Transport Layer Security (TLS), which encrypts traffic and protects connections. This also means your website goes from HTTP:// to HTTPS:// in the address bar. Federation will require TLS so this will be useful. Cloudflare Tunnel also introduces some complications which I'll address later.


Requirements

  • A purchased Domain Name
  • Set Cloudflare as your Domain Name's primary Domain Name Servers (DNS). See here
    • Do this a day in advance, as changes may take up to a day to take effect.
  • Raspberry Pi 5 with the Raspberry Pi OS (64-bit) image installed
    • You can use other hardware with a Debian 12 operating system, but I can't guarantee these instructions will be exactly the same
  • A USB external hard drive
    • Something with good read/write speeds will help
  • Access to any routers on your private network
    • You will need access to Port Forwarding options. You will need to read any router manuals or ask your Internet Service Provider since this is different for every device.
  • SSH remote access to Raspberry Pi 5. See here

Setup & Software Installation (LOCAL HOST)

The required software to host Lemmy or PieFed will include

  • Docker
  • Cloudflared
  • Lemmy or PieFed

Additional software that I will also cover but isn't strictly necessary:

  • Nginx Proxy Manager (NPM)
  • UFW/GUFW - Simple Firewall
  • RSync - For making backups
  • Termux - Android terminal app that will be used for remote SSH access

Docker (LOCAL HOST)

The official Docker instructions are clear, quick and simple. The process will also add their repository information for quick and easy updates. This will be installed as a service on your operating system.

Port Forwarding/Reverse Proxy

Port Forwarding

Pick a port number between 1024-65,535. This is how Cloudflare will send data and remote connections to your instance without worrying about blocked ports. I like to use 5050 because it's simple, easy to remember and not used by any of my other self-hosted services. To be consistent, for the rest of this guide I will use port 5050 as an example. Feel free to replace it with any port number you feel like using.

Router settings are different for each device, refer to a manual or call your ISP for support depending on your situation.

  1. SSH login to your Raspberry Pi and enter the command hostname -I
    • This will print the IP addresses used by the host machine. The first IP address printed will be your local IP address (see the example after this list). The rest of the addresses are NOT needed and can be ignored.
  2. Access your Port Forwarding settings in your private network router.
  3. Find your Raspberry Pi device by the Local IP address
  4. Add a rule to allow TCP connections on port 5050.
    • If your router port forwarding settings show Internal and External fields, simply add 5050 to both fields.
  5. Save
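
As a quick sanity check, the output of hostname -I looks something like the line below (these addresses are only examples; yours will differ). The first address is the one to use for the port forwarding rule:

hostname -I
# 192.168.0.100 2001:db8::1234 ...   <- use the first (IPv4) address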

If you are only hosting a Lemmy or PieFed instance, you can do that without the Reverse Proxy described below. In this case you can simply use the default ports for Lemmy or PieFed. Replace my example port 5050 with the following, depending on your needs:

  • Lemmy default port: 10633
  • PieFed default port: 8030

Reverse Proxy

A reverse proxy allows the local host machine to distribute incoming user connections to different services hosted on the local machine. For example, all data from Cloudflare comes in on port 5050 when accessing the DOMAINNAME.COM address. I can use Subdomains to redirect incoming connections on port 5050 to open ports on my local host machine.

For example, both Lemmy and PieFed can be hosted at the same time. We can use the subdomains lemmy. and piefed. to redirect traffic. When a user types lemmy.DOMAINNAME.COM into the address bar, Cloudflare will send the connection through 5050 to your home and private router which then continues to the Reverse Proxy. The Reverse Proxy running on the local host machine will catch the subdomain request and immediately switch to port 10633 where a connection to Lemmy will be completed. Typing in piefed.DOMAINNAME.COM will guide all requests to port 8030 where PieFed is running and complete that connection.

For simplicity I went with Nginx Proxy Manager: it's Docker based, with an easy-to-use web interface that's accessible through a Web Browser on your local network. It has its limitations but works fine for the current needs.
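
To make the idea concrete: the proxy picks a service based on the hostname in the request, not the port. Assuming NPM is listening on port 5050 as in this guide and both subdomains have Proxy Host rules, these two requests from a machine on your local network would land on different services (the IP address is only an example):

curl -H "Host: lemmy.DOMAINNAME.COM" http://192.168.0.100:5050/
curl -H "Host: piefed.DOMAINNAME.COM" http://192.168.0.100:5050/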

Nginx Proxy Manager (LOCAL HOST)

NPM is extremely simple to set up. Simply create a new folder, create a docker-compose.yml file filled with the necessary information and then run the container.

  1. mkdir ~/npm
  2. cd ~/npm
  3. nano docker-compose.yml
  4. Paste the following into the docker-compose.yml file and save:
  • docker-compose.yml
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: always
    ports:
      - '5050:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt

Note that port 5050: externally connects to NPM internally through port :80. Make sure 5050 matches the Cloudflare Tunnel port you have decided on using.

  1. docker compose up -d and wait for the services to start running
  2. In your Web Browser on any device connected to your private network, type your Raspberry Pi's local address followed by :81 into the address bar. For example 192.168.0.100:81. See Port Forwarding for help finding your local IP address.
  3. The login page will ask for account details. Enter the following:
    • Account = admin@example.com
    • Password = changeme
    • You will now be asked to create a new admin account
  4. Reverse Proxy Setup:
    1. After Login, click Hosts -> Proxy Hosts -> Add New Proxy
    2. Domain Names field: Your DOMAINNAME.COM
      • Press Enter to store that domain name, NPM won't store your domain name if you don't hit enter
    3. Forward Hostname/IP field: Your local host machine IP address (example 192.168.0.100). See Port Forwarding for help finding your local IP address.
    4. Forward Port field: I like to use port 81 to test Cloudflare Tunnels before installing Lemmy or PieFed. This is the login page for NPM. This can be quickly changed to the ports listed below after confirming a secure connection from Cloudflare Tunnels.
      • Lemmy: 10633
      • PieFed: 8030
    5. Block Common Exploits: Enabled
    6. Websockets Support: Enabled
    7. Save

Cloudflared (LOCAL HOST)

!!Only proceed with these instructions after setting Cloudflare as your Primary DNS provider. This process may take up to a day after changing nameservers!!

The following instructions do a few things. First you will install Cloudflared (with a 'd'). Then you will be asked to log in, create a tunnel, run the tunnel and then create a service (while the current tunnel is running) so your tunnel runs automatically from startup.

I've noted that this will be installed on the local host (where you are hosting an instance). We will be installing Cloudflared on multiple devices for reasons I will cover later. Hopefully this reduces confusion later on.

  1. Service Install & Create Tunnel & Run Tunnel
    1. Select option -> Linux
    2. Step 4: Change -> credentials-file: /root/.cloudflared/<Tunnel-UUID>.json -> credentials-file: /home/USERNAME/.cloudflared/<Tunnel-UUID>.json
    3. Step 5: SKIP step 2; you will get an error and it's not important anyway.
    4. Step 6: Keep this window open after running the new tunnel
      • ONLY AFTER completing step 2.i.d. below (Run as a service), press CTRL + C to exit this tunnel
  • Example config.yml file (See above about Step 4)
tunnel: TUNNEL_ID
credentials-file: /home/USERNAME/.cloudflared/TUNNEL_ID.json
ingress:
  - hostname: DOMAINNAME.COM
    service: http://localhost:5050/
  - service: http_status:404
  1. Run as a service

    1. Open and SSH login to the Raspberry Pi in a new terminal window
      1. You will get an error if you do not copy your config.yml from your Home folder to /etc/cloudflared. You will need to copy this file again if you make any changes to the config.yml such as adding more tunnels. This will be covered later when setting up Remote SSH access.
        • sudo cp ~/.cloudflared/config.yml /etc/cloudflared/config.yml
      2. sudo cloudflared service install
      3. sudo systemctl start cloudflared
      4. systemctl status cloudflared
        • Check to see if things are green and working, then press CTRL + C when done to exit
        • You can now stop the running tunnel from the first step as previously stated (See Step 1.iv.)
      5. You can close this terminal window now
  2. Enable SSL connections on Cloudflare site

    • Log in to your account on Cloudflare and simply click on the following links
    1. From the main page -> Your DOMAINNAME.COM -> SSL/TLS -> Configure -> Full -> Save
    2. SSL/TLS -> Edge Certificates -> Change the following settings on Cloudflare to match what's listed below:
      • Always Use HTTPS: On
      • Opportunistic Encryption: On
      • Automatic HTTPS Rewrites: On
      • Universal SSL: Enabled

If you used NPM as a reverse proxy and it's set to port 81, go to any Web Browser and type in your DOMAINNAME.COM. You should be directed to NPM's login page. Check the address bar and your domain name should have a padlock symbol followed by https://domainname.com/. Note that it should read HTTPS:// (with an s) and not HTTP:// (without an s). HTTPS along with the padlock symbol means your connections are properly encrypted.

This is the most complicated step for self-hosting. If you can confirm your connection is encrypted, setting up other services and webapps is fairly straightforward.
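
In addition to looking for the padlock, a quick check from any machine is to request the site over HTTPS and look at the response (replace the domain with your own):

curl -I https://DOMAINNAME.COM
# a healthy setup returns something like HTTP/2 200 (or a redirect to the NPM login page) with no certificate errors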

Lemmy (LOCAL HOST)

The Lemmy instructions are simple and straightforward. When changing the fields asked of you in the instructions, it's helpful to search and replace the required fields. When editing a file in nano, press CTRL + \ and follow the instructions at the bottom of the window. This will find and replace text.

The Lemmy instructions show text for editing with {{ Example }}. To avoid confusion, those curly braces must be removed and replaced with the expected data.

  • If you used NPM's login page to test Cloudflare Tunnels, you will need to login to NPM and change the Port Forward from 81 to 10633
    • Click Hosts -> Proxy Hosts -> Click the 3-Dots for your DOMAINNAME.COM proxy rule -> Edit & Save
  1. Follow Lemmy Install Instructions
    • IGNORE steps Reverse Proxy/Webserver & Let's Encrypt since we have addressed those steps earlier with NPM and Cloudflare Tunnels/Security.
  2. Through a Web Browser, type in your DOMAINNAME.COM and you should see an admin creation page. Complete that and the initial instance setup afterwards.
  3. Test federation, replace capitals with the required information
    • curl -H 'Accept: application/activity+json' https://domainname.com/u/LEMMY_USERNAME

      • If you see .json information, Lemmy is federated
      • If you see .html information, Lemmy is NOT federated
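
If the raw output is hard to read, piping it through a JSON formatter makes the check clearer. python3 ships with Raspberry Pi OS; if the response is HTML, the formatter simply errors out, which is itself the answer:

curl -sH 'Accept: application/activity+json' https://domainname.com/u/LEMMY_USERNAME | python3 -m json.tool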

Updating Lemmy Docker Container

See here for more information.

  1. docker compose down
  2. docker compose pull
  3. docker compose up -d

PieFed (LOCAL HOST)

The PieFed installation instructions will provide more detailed information about each step. This guide does NOT cover any email setup for PieFed.

  • If you used NPM's login page to test Cloudflare Tunnels, you will need to login to NPM and change the Port Forward from 81 to 8030

    • Click Hosts -> Proxy Hosts -> Click the 3-Dots for your DOMAINNAME.COM proxy rule -> Edit & Save
  • PieFed Install Instructions

  1. Download & Prepare files
    1. git clone https://codeberg.org/rimu/pyfedi.git
    2. cd pyfedi
    3. cp env.docker.sample .env.docker
  2. Edit & Save files
    1. nano .env.docker
      1. Change the value of SECRET_KEY to a long random string of numbers and letters (see the note after this list for a quick way to generate one)
      2. Change value for SERVER_NAME with your DOMAINNAME.COM
    2. nano compose.yaml
      • Note ports 8030:5000. You can change the external container port: 8030: if you are using a custom port. Do NOT touch the internal container port :5000.
        • ports:
        • - '8030:5000'
  3. Build
    1. export DOCKER_BUILDKIT=1
    2. sudo docker compose up --build
      • Wait until text stops scrolling
  4. Access your DOMAINNAME.COM from a Web Browser
    1. You may see a message that says database system is ready to accept connections in your terminal window after PieFed is done installing and loading. This means you are ready to attempt a connection through your Web Browser now.
      • If you see constant permission errors, Open and SSH login to the Raspberry Pi in a new terminal window and do the following to allow PieFed to access the required folders:
        1. cd ~/pyfedi
        2. sudo chown -R USERNAME:USERNAME ./pgdata
          • You can leave this window open; it can be used for step 5.
    2. You may see an "Internal Server Error" after your first connection attempt. This is normal. You will see movement in your terminal window on each attempt to connect to PieFed. Now you can proceed to initialize the database.
  5. Initialize Database
    1. Open and SSH login to the Raspberry Pi in a new terminal window
      1. sudo docker exec -it piefed_app1 sh
      2. export FLASK_APP=pyfedi.py
      3. flask init-db
        • Enter username/email/password. Email is optional.
      4. Access PieFed from your Web Browser again. PieFed should now display. You can log in as admin with the same username and password.
      5. exit
      6. You can close this terminal window now
  6. Return to the terminal with the running docker build and press CTRL + C to stop PieFed.
  7. Run PieFed in the background
    • docker compose up -d
  8. Setup Cron (Automated) Tasks
    • This will set up automated tasks for daily maintenance, weekly maintenance and email notifications.
    • Change USERNAME to your username.
    1. Setup automated tasks
      1. sudo nano /etc/cron.d/piefed
        1. Paste and Save
  • /etc/cron.d/piefed file
5 2 * * * USERNAME docker exec piefed_app1 bash -c "cd /app && ./daily.sh"
5 4 * * 1 USERNAME docker exec piefed_app1 bash -c "cd /app && ./remove_orphan_files.sh"
1 */6 * * * USERNAME docker exec piefed_app1 bash -c "cd /app && ./email_notifs.sh"
  1. OPTIONAL: Environment Variables
    • Some functions such as email or captchas won't work unless you add the necessary variables into the ~/pyfedi/.env.docker file. Look at ~/pyfedi/env.sample and add the other variables to ~/pyfedi/.env.docker according to your needs.
    1. View the sample file
      • nano ~/pyfedi/env.sample
    2. Edit & Save .env.docker file
      • nano ~/pyfedi/.env.docker
    3. Restart PieFed Docker container
      • docker compose down && docker compose up -d
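
About the SECRET_KEY value mentioned in the editing step above: one quick way to generate a suitably random string is with openssl, which should already be available on Raspberry Pi OS (any long random string works; this is just a convenience):

openssl rand -hex 32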

Updating PieFed Docker Container

  1. docker compose down
  2. git pull
  3. docker compose up --build
  4. docker compose down && docker compose up -d

Cloudflare Website Settings

These settings are suggested to help manage traffic. See here for more detailed information.

  1. Exclude Settings
    1. From the main page -> Your DOMAINNAME.COM -> Security -> WAF -> Custom Rules -> Click Create Rule -> Change the following settings and values on Cloudflare to match what's listed below:
      • Rule Name: Allow Inbox
      • Field: URI Path
      • Operator: contains
      • Value: /inbox
      • Log matching requests: On
      • Then take action...: Skip
      • WAF components to skip: All remaining custom rules
    2. Click Deploy to complete
  2. Caching Settings
    1. From the main page -> Your DOMAINNAME.COM -> Caching -> Cache Rules -> Click Create rule -> Change the following settings on Cloudflare to match what's listed below:
      • Rule name: ActivityPub
      1. Custom filter expressions: On
        1. Field: URI Path
        2. Operator: Starts with
        3. Value: /activities/
      2. Click Or
      3. Repeat until you have values for 4 rules total containing the values:
        • /activities/
        • /api/
        • /nodeinfo/
        • /.well-known/webfinger
      • Cache Eligibility: On
      • Edge TTL -> Click + add setting
        • Click Ignore cache-control header and use this TTL
        • Input time-to-live (TTL): 2 hours
      • Click Deploy to complete
    2. Click Create rule again
      • Rule name: ActivityPub2
      1. Custom filter expressions: On
        1. Field: Request Header
        2. Name: accept
        3. Operator: contains
        4. Value: application/activity+json
      2. Click Or
      3. Repeat until you have 2 rules total containing the values:
        • application/activity+json
        • application/ld+json
      • Cache Eligibility: On
      • Edge TTL -> Click + add setting
        • Click Ignore cache-control header and use this TTL
        • Input time-to-live (TTL): Type 10 seconds
      • Click Deploy to complete
  3. Optimization Settings
    1. Speed -> Optimization -> Content Optimization -> Change the following settings on Cloudflare to match what's listed below:
      • Speed Brain: Off
      • Cloudflare Fonts: Off
      • Early Hints: Off
      • Rocket Loader: Off
  4. Cloudflare Tokens for .env.docker File
    1. Create an API "Zone.Cache Purge" token
      1. After logging in to Cloudflare, go to this page
      2. Click Create Token -> Click Get Started under Create Custom Token
      3. Token Name -> PieFed
      4. Under Permissions -> Change the following drop-down menus to match what's listed below
        • First drop down menu: Zone
        • Second drop down menu: Cache Purge
        • Third drop down menu: Purge
      5. Click Continue to summary -> Click Create Token
      6. Copy the generated API Token. This will be used for CLOUDFLARE_API_TOKEN in the .env.docker file. Note, once you leave this screen, the API token will remain but the generated code that can be copied will disappear forever.
    2. Copy API Zone ID
      1. From the main page -> Your DOMAINNAME.COM -> Scroll down and look for API Zone ID in the far right column
      2. Copy API Zone ID Token. This will be used for CLOUDFLARE_ZONE_ID in the .env.docker File.
    3. The following step must be completed on the Raspberry Pi (LOCAL HOST) where PieFed is running:
      1. nano ~/pyfedi/.env.docker
        1. Add the following lines with your copied API Tokens & Save
          • CLOUDFLARE_API_TOKEN = 'ZONE.CACHE_PURGE_TOKEN'
          • CLOUDFLARE_ZONE_ID = 'API_ZONE_ID_TOKEN'
      2. Restart PieFed Docker container
        • docker compose down && docker compose up -d
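
If you want to confirm the Zone.Cache Purge token was copied correctly, Cloudflare has a token verification endpoint you can call from the Raspberry Pi (replace the placeholder with your actual token; a valid token returns JSON containing "status": "active"):

curl -s -X GET "https://api.cloudflare.com/client/v4/user/tokens/verify" \
  -H "Authorization: Bearer ZONE.CACHE_PURGE_TOKEN" \
  -H "Content-Type: application/json"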

Troubleshooting

  • If you receive an error while posting images, the folder permissions will need to change. Change USERNAME with your username.
    1. cd ~/pyfedi
    2. sudo chown -R USERNAME:USERNAME ./media

Support and Services

Remote SSH Access Setup

With how Cloudflare works, SSH is not as simple and requires a bit more effort. I'm going to explain how to prepare Termux, an android terminal app, so you can access the Raspberry Pi remotely. The steps should be quite similar if you are using a Debian 12 distribution.

For remote SSH to work, you must provide a config file with some information. Fortunately, cloudflared will give you all the information you need for the config file.

You will also need to create a subdomain for the SSH connection. In this example I will simply use the subdomain ssh., which will look like this: ssh.DOMAINNAME.COM.

The subdomain must be set up first before setting up the remote clients. You will use the Cloudflare Tunnel name (CLOUDFLARE_TUNNEL_NAME) that you previously created. Also note, the config file edited on the Raspberry Pi Local Host must be copied again to /etc/cloudflared before cloudflared is restarted.

The Cloudflare Tunnel name is the name you chose. Not to be confused with TUNNEL_ID which is a bunch of random letters and numbers that was generated for your tunnel.

For this example I'll use port 22, the default port for SSH connections.

Raspberry Pi Setup (LOCAL HOST)

  1. SSH login to the Raspberry Pi
  2. cloudflared tunnel route dns CLOUDFLARE_TUNNEL_NAME ssh.DOMAINNAME.COM
  3. nano ~/.cloudflared/config.yml
    1. Paste the following and change TUNNEL_ID and DOMAINNAME.COM & Save
  • Example config.yml file
tunnel: TUNNEL_ID
credentials-file: /home/USERNAME/.cloudflared/TUNNEL_ID.json
ingress:
  - hostname: DOMAINNAME.COM
    service: http://localhost:5050/
  - hostname: ssh.DOMAINNAME.COM
    service: ssh://localhost:22
  - service: http_status:404
  1. sudo cp ~/.cloudflared/config.yml /etc/cloudflared/config.yml
  2. sudo systemctl restart cloudflared

Desktop: Install (REMOTE CLIENT)

Android Termux: Install (REMOTE CLIENT)

Termux does not have SSH installed by default. This will install both ssh and cloudflared:

  • apt install openssh cloudflared -y

Login and Setup (REMOTE CLIENT)

!!Continue here after preparing the LOCAL HOST and REMOTE CLIENTs first!!

!!The following steps are to be completed on the REMOTE CLIENTs!!

  1. Login
    1. cloudflared tunnel login
    2. Complete login
  2. cloudflared access ssh-config --hostname ssh.DOMAINNAME.COM
    1. COPY suggested text (an example of what it looks like is shown after this list)
    2. nano ~/.ssh/config
      1. PASTE suggested text & Save
  3. ssh USERNAME@ssh.DOMAINNAME.COM
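
For reference, the suggested text from step 2 typically looks something like the lines below. The path to the cloudflared binary differs between systems (Termux especially), so always prefer whatever the command itself prints:

cat >> ~/.ssh/config << 'EOF'
Host ssh.DOMAINNAME.COM
  ProxyCommand /usr/bin/cloudflared access ssh --hostname %h
EOF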

Backup/Restore Setup

I decided to keep it simple and use the rsync command which comes already installed on Raspberry Pi OS. The guide linked below does a good job of explaining rsync in a step by step process.

Below the linked guide I'll provide an example of the commands I use to Backup and Restore my Raspberry Pi. This creates a copy of the /rootfs folders that make up your Raspberry Pi Operating System and User folders. The commands will exclude some folders that may cause issues when restoring a backup. The guide linked below has more details.

Since I am going to power down the Pi and physically connect its hard drive to my computer, I don't have to worry about making backups of a live, running system.

The below commands assume I also have an additional EXTERNAL_STORAGE hard drive connected to my computer. This means the backup command will copy the contents from the Raspberry Pi drive (/rootfs folder) to the EXTERNAL_STORAGE drive (/EXTERNAL_STORAGE/backup folder). The restore command will copy the contents from the EXTERNAL_STORAGE drive (/EXTERNAL_STORAGE/backup/rootfs folder) to the Raspberry Pi drive (/rootfs folder).

rsync WILL delete data on the target location to sync all files and folders from the source location. Be mindful of which direction you are going to avoid any losses. I suggest testing it out on some other folders before committing to backing up and restoring the entire Raspberry Pi (see the dry-run example after the steps below). The guide linked below also covers exclusions to minimize backup sizes.

The backup storage MUST be formatted in EXT4 to make sure file permissions and attributes remain the same.

  1. nano ~/.bash_aliases
    1. Add the following aliases & Save
      • alias rsyncBACKUP="sudo rsync -avxhP --delete --exclude={'proc/','sys/','dev/','tmp/','run/','mnt/','media/','home/USERNAME/.cache','lost+found'} /media/USERNAME/rootfs /media/USERNAME/EXTERNAL_STORAGE/backup/"
      • alias rsyncRESTORE="sudo rsync -avxhP --delete --exclude={'proc/','sys/','dev/','tmp/','run/','mnt/','media/','home/USERNAME/.cache','lost+found'} /media/USERNAME/EXTERNAL_STORAGE/backup/rootfs/ /media/USERNAME/rootfs"
  2. Reset bash in terminal
    • . ~/.bashrc
  3. Backup system TO EXTERNAL_STORAGE
    • !!EXT4 file system only!!
    • rsyncBACKUP
  4. Restore system FROM EXTERNAL_STORAGE
    • rsyncRESTORE
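
As mentioned above, it's worth testing before the first real run. rsync's --dry-run flag lists what would be copied or deleted without touching anything; this is the backup alias's command with that flag added:

sudo rsync -avxhP --delete --dry-run --exclude={'proc/','sys/','dev/','tmp/','run/','mnt/','media/','home/USERNAME/.cache','lost+found'} /media/USERNAME/rootfs /media/USERNAME/EXTERNAL_STORAGE/backup/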

Firewall (LOCAL HOST)

  1. Install: Choose ONE
    • Command line only
      • sudo apt install -y ufw
    • Graphical Interface with command line access
      • sudo apt install -y gufw

I haven't figured out how to properly set this up for myself yet, but I figure it's probably worth having for an additional layer of protection.
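
For anyone who wants a starting point, a minimal UFW setup for the port choices used in this guide might look like the commands below (a sketch only; adjust the ports to your own setup, and keep in mind that ports published directly by Docker containers may not be restricted by UFW):

sudo ufw allow 22/tcp      # keep SSH reachable before enabling the firewall
sudo ufw allow 5050/tcp    # the port used for the Cloudflare Tunnel/reverse proxy in this guide
sudo ufw enable
sudo ufw status verbose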

 

I recently got my hands on a lightly used Raspberry Pi 5 and have been playing around with it and breaking things while trying to learn my way around self hosting. I have a couple of questions now that I've hit a bit of a roadblock in learning.

  1. Is it possible to set up Lemmy for local hosting on a local network only? I'm not worried about federated data from other instances. At this point I just want to experiment and break things before I commit to buying a Top Level Domain name.

  2. How exactly does a TLD work? I've tried searching for how to redirect traffic from a TLD to my Raspberry Pi. Since I don't know much about hosting or networking, I don't know what to search for to find the answer I'm looking for.

  3. How do I protect myself while self hosting? I know the Lemmy documentation suggests using Let's Encrypt, is that all I need to do in order to protect any private data being used?

My goal in the future is to have a local, text-only instance that may connect with a small number of whitelisted instances.

 

I hope this is the appropriate community for this question, if there is a better community for this, I can post it there.

It's been a bit over a week and I've had time to accept what has happened and what will happen as a result of America's recent decision. Even though I am from Canada, the news has many direct and indirect consequences which still have me concerned for the near future.

I feel that right now is the time for me to start and build a local community. I just don't know how to do that or where to begin.

I'm not the most social person so networking and leading will be a huge hurdle for me. I'm not creative enough with drawing or writing so creating flyers or propaganda would also be a challenge for me. I've always been more comfortable working and building things with my hands and have a pretty deep interest in land management and sustainability.

I also have the additional issue of being a person of colour in a mainly white town. Lifted trucks, SUVs, unwelcoming stares and plenty of entitled behaviours. The population in this town is mostly young families with younger children or old white folks who I doubt have any care for the future ahead of us. There's not much in between.

Over the past couple years, I have been buying various types of seeds and collecting seeds from my garden as plants mature. I've been trying to create a seed library for myself. Lately I've been thinking of trying to start a local seed library as a way to start some sort of community. Maybe even use that as a way to teach more local, sustainable habits.

I just don't know where to begin and starting feels quite overwhelming for one person.

I'm hoping to start a discussion or even brainstorm some ideas on what people can do, how to begin and how to follow through with building local communities.

Any idea outside of a seed library is welcome. It would help to have a nice, broad spread of ideas to draw from. I believe that would help keep the progress of the local communities adaptable as time goes on.

 

I wanted to share all the mushroom parties I came across in October on some local hiking trails I visit regularly. No idea what any of them are, but they are located in Southern Ontario.

 
 

Probably scouting out a veggie heist from my garden...

 

A nice little surprise :)

 

I'm thinking about adding a rain collector to use in my garden but I have some concerns about construction materials.

One concern is that I'm not a huge fan of using a plastic container to store water. The idea of water sitting in a plastic barrel that could be exposed to heat from direct sunlight doesn't fill me with excitement. I was wondering what other materials or containers I could use that might be better for storing rain water. One idea I had was to modify a metal keg to collect water. They would be smaller but I could use multiple if I wanted.

The other concern I have is about roofing materials. Is it safe to use water collected from a roof with shingles in a garden for vegetables? I'm wondering if there might be any runoff from the materials used for roofing.
