confusedpuppy

joined 2 years ago

I created my own script/tool using rsync to handle backups and transferring data.

My needs are quite small, with just a computer and two Raspberry Pi's, but I've found rsync to be really useful overall.

My backup strategy is to make a complete backup on each local device (Computer / RPi4 / RPi5), copy all those backups to a Storage partition on my computer, and then make a whole backup from that partition to an externally attached SSD.

The RPi's both use docker/podman containers so I make sure any persistent data is in mounted directories. I usually stop all containers before performing a backup, especially things with databases.

Restoring things in the docker containers is hit or miss. The simple docker images restore as if they were untouched and will launch like nothing happened. I have a PieFed instance that must be rebuilt after restoring a backup; since PieFed's persistent data is in mount points, everything works perfectly after a fresh build.
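In practice the stop/back up/restart dance is only a few commands. A rough sketch, with made-up paths and a compose project as the example (not copied from my actual setup):

#!/bin/sh
# Sketch: quiesce containers, copy the bind-mounted data, bring things back up.
# The project directory and backup target are illustrative.
cd /home/USERNAME/piefed || exit 1
docker compose down                 # stop containers so databases are quiescent
rsync --archive --delete ./ /backup/piefed/
docker compose up -d                # bring the services back up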

I can send a link to my rsync tool if that's of any interest to anyone. I've found it super useful for backups and it minimizes so much headache for me when it comes to transferring files between different network-connected devices.

[–] confusedpuppy@lemmy.dbzer0.com 2 points 3 days ago (1 children)

Aah, I just noticed that they were eating all the beans sprouting in my garden. I had some beans from last year that went moldy on the bottom of the container because I didn't let them dry properly. I just threw a bunch of good ones into the garden and lawn randomly.

Also, I don't remember buying 4 kilograms of clover seeds but I found them in a bin in my closet. I've randomly tossed those out into the lawn and garden to attract more pollinators in general.

I've done the same with some native chickweed seeds too. I'm secretly at war with everyone's silly, plain, green lawns.

I made a garden with a bunch of seeds I picked from a nearby hiking trail and the bunnies seem to really enjoy whatever is growing in there too. They at least have some variety.

Maybe it's something slightly outside no js/css/html but I am curious if there are any super minimal social media sites.

I want to do something locally within my town and it would be nice to host something simple and tiny with my raspberry pi as the server.

I'm assuming bulletin boards are quite minimal in comparison to other types of social media but I've never been a fan of how they handle previous replies with those boxed quotes.

I've also been nostalgic for IRC lately. Everything on the internet these days has become overwhelming. Over the past 1.5 years I've been turning to simplicity and it's a craving that's hard to ignore.

[–] confusedpuppy@lemmy.dbzer0.com 3 points 5 days ago (3 children)

I am able to walk around this one without it hopping away. I just have to give it a good 3 meters of space as well as not look in its general direction as I move around.

I've been secretly spreading clover seeds and beans in certain areas of the lawn to keep the bunnies happy and coming back.

127
Butt (lemmy.dbzer0.com)
submitted 5 days ago* (last edited 5 days ago) by confusedpuppy@lemmy.dbzer0.com to c/bun_alert_system@lemmy.sdf.org
 

Bonus claw

Am I numb or am I exhausted? Maybe I'll find out after another nap.

I have a computer and 3 devices I wanted to transfer files between, but every available solution was either too awkward, which made things annoying, or too bulky, with more than what I needed.

I ended up writing a long script (around 1000 lines but I'm generous with spacing so I can read my own code easily) using rsync to deal with transferring files and whole directories with a single command. I can even chain together multiple rsync commands back to back so that I can quickly transfer multiple files or directories in one command. Instead of trying to refer to a wall of text full of rsync commands, I can make something like this:

alias rtPHONEmedia="doas rtransfer /home/dell-pc/.sync/phone/.sync-phone_02_playlists /home/dell-pc/.sync/phone/.sync-phone_03_arbeit /home/dell-pc/.sync/phone/.sync-phone_04_albums /home/dell-pc/.sync/phone/.sync-phone_05_soulseek /home/dell-pc/.sync/phone/.sync-phone_06_youtube"

This will copy everything from specific folders on my phone and store it, neatly organized, on the storage partition of my computer's SSD. Each entry also includes all the necessary connection information, including the SSH username, address and identity keys.

I can then run a second alias, rtARCHIVEfull="doas rtransfer /home/dell-pc/.sync/computer/.sync-computer_01_archive-full", to quickly copy that storage partition on my computer to my external backup SSD.

I use it so often. It's especially nice because I can work on a file on my computer and quickly push the updated file to the remote location, putting it directly where I need it to be.
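For comparison, each of those entries stands in for a raw rsync invocation that would otherwise look something like this (host, key and paths invented for illustration; Termux's sshd listens on port 8022 by default):

rsync --archive --human-readable --partial --progress \
    -e "ssh -p 8022 -i ~/.ssh/id_ed25519" \
    u0_a123@192.168.0.50:/sdcard/Music/playlists/ \
    /mnt/storage/phone/playlists/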

 
[–] confusedpuppy@lemmy.dbzer0.com 60 points 1 week ago (5 children)

Politics is just a bunch of old men helicoptering at each other while the rest of us watch, suffer and die.

I started self-hosting as a hobby and while I enjoy it, I was getting frustrated with file transfers between my computer, phone and two Raspberry Pi's. Since I was already using rsync, I created a tool for myself that organizes rsync commands into sortable files.

I can now lump together those files into a single command and run several rsync commands in one go.

It's definitely saved me some sanity by not having to refer to a wall of text full of rsync aliases.

I posted it on codeberg.

It is random code on the internet and it involves file transfers, so anyone who uses it takes on those risks unless they care to read the code itself.

I took sudo out of the examples and disabled it by default within the script. I still have a personal use for it.

I keep a local backup on each device then transfer that backup to my desktop. Rsync requires root access to transfer files or directories with certain attributes over ssh. Otherwise the backup copy to my desktop is incomplete.

Fortunately I already coded in a toggle for requesting root since Termux on Android has no root by default. I just won't note that in the readme file. That can be left for anyone who cares to read the code itself.
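For anyone curious, a common pattern for this (not necessarily what my script does internally) is to elevate rsync on the remote end while logging in as a normal user:

# Assumes the remote user can run rsync via passwordless sudo;
# host and paths are placeholders.
rsync --archive --acls --xattrs --hard-links \
    --rsync-path="sudo rsync" \
    user@rpi4.local:/var/lib/ /backup/rpi4/var-lib/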

[–] confusedpuppy@lemmy.dbzer0.com 2 points 1 week ago* (last edited 1 week ago)

I use a lot of commands that either use the --delete option or require remote root access in order to preserve hard links and other attributes.

I didn't know that was an issue. I was going from my own limited experience with linux.

I already set an option to disable the root requirement at the beginning of the script. Simply changing the value to 0 will disable it and let rsync display its own errors.

What exactly makes it suspect, so that I know what I'm doing?

 

Ever since I started down the self-hosting rabbit hole, one issue I've been constantly annoyed with is transferring files between machines, especially files which I am currently working on and want to periodically transfer to a specific remote location.

Rsync has been my go-to tool for file transfers and backups but I absolutely hate making commands for it. I was getting lost in a wall of text with all my rsync aliases.

The script I created helps me organize those commands by placing all the information for an rsync command into an easier to read dot-file. The script will read the dot-file, update the rsync command and run it. It's also set up to preview file transfers with rsync's dry run option. I've also added a few other features to make the script more flexible including being able to quickly backup a directory that I'm currently working in.
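To give a rough idea of the shape (the format here is invented for illustration, not the script's actual one), a dot-file can be as little as a few shell variable assignments that a runner sources:

#!/bin/sh
# Hypothetical dot-file, e.g. ./.sync-phone_02_playlists:
#   SRC="user@192.168.0.50:/sdcard/Music/playlists/"
#   DEST="/mnt/storage/phone/playlists/"
#   OPTS="--archive --human-readable --partial --progress"

# Hypothetical runner: source each dot-file passed as a path (./file),
# preview with --dry-run, then ask before running for real.
for f in "$@"; do
    . "$f"
    rsync --dry-run $OPTS "$SRC" "$DEST"
    printf "Run for real? (y/n): "
    read -r ans
    [ "$ans" = "y" ] && rsync $OPTS "$SRC" "$DEST"
done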

For self-hosting, I find working with files a bit easier to organize than a text file full of aliases, although I can see it being easy to get overwhelmed with dot-files if you aren't organized.

I've also made it as POSIX-shell friendly as possible. I know a couple things I did aren't fully POSIX compliant currently, but that's something to work towards. This is also the first programming thing I've ever completed so I'm quite happy with how it turned out.

If at least one other person finds this script useful, that would be pretty neat :)

[–] confusedpuppy@lemmy.dbzer0.com 49 points 2 weeks ago (3 children)

I had an interaction once where I thought I used double quotes around a word to imply something obvious related to the posted article. A random person got mad at me and claimed I knew nothing about solidarity.

I felt insulted; they didn't know my life experiences up to that point. I chose to ignore my feelings and pressed them to teach me why I was so wrong. They eventually disappeared from the replies because they had nothing behind that image of righteousness. Rare win but I'll take it.

If someone put themselves in harm's way to punch an authoritarian follower in the face in my defence and also uses slurs I could find offensive, that's not my enemy. That's someone awesome who could use a little more education. Later. When the current situation isn't so wild.

Words are just words. That's not as effective as punching a fascist in the face.

[–] confusedpuppy@lemmy.dbzer0.com 11 points 2 weeks ago (7 children)

I think I've worked in automation long enough to feel super uncomfortable with the idea of a tattoo print machine being anywhere near my body.

Even if I had a kill switch in hand, it still makes me uncomfortable. In general machines don't care about fleshy bits at all. If something happens, for example a sensor ages and becomes defective, the printer has the potential to cause serious harm.

I probably also hold a bit of bias: I prefer the imperfections of human, handmade art over digitized perfection from machines.

 
 

For fun, convenience and a chance to learn shell scripting for myself, I've been working on a collection of scripts to semi-automate the backup of my computer, two network connected Raspberry Pi's and my Android phone with Termux.

The main script basically runs a bunch of remote backup scripts, then copies those remote backups to a dedicated partition on my computer and finally creates a copy of that partition on an external drive connected to my computer. I use rsync and its dry-run feature so I am able to examine any file changes, which has been super useful for catching mistakes and issues as I've been learning how to self-host over the past half year.

I have a simplified version of those backup scripts that makes a copy of my /home directory:

#!/bin/sh



# VARIABLES
## ENABLE FILE TRANSFER: Comment out the line below (change `DRY_RUN` to `#DRY_RUN`) to enable file transfers
DRY_RUN="--dry-run" # Disables all file transfers

## PATHS (USER DEFINED)
SOURCE_DIRECTORY_PATH="/home"
SOURCE_DIRECTORY_PATH_EXCLUSIONS="--exclude=lost+found --exclude=.cache/*"
BACKUP_NAME="home"
BACKUP_BASE_PATH="/backup"

## PATHS (SCRIPT DEFINED/DO NOT TOUCH)
SOURCE_DIR="${SOURCE_DIRECTORY_PATH}/"
DESTINATION_DIR="${BACKUP_BASE_PATH}/${BACKUP_NAME}/"

## EXCLUSIONS (SCRIPT DEFINED/DO NOT TOUCH)
EXCLUDE_DIR="${SOURCE_DIRECTORY_PATH_EXCLUSIONS}"

## OPTIONS (SCRIPT DEFINED/DO NOT TOUCH)
OPTIONS="--archive --acls --one-file-system --xattrs --hard-links --sparse --verbose --human-readable --partial --progress --compress"
OPTIONS_EXTRA="--delete --numeric-ids"



# FUNCTIONS
## SPACER
SPACER() {
    printf "\n\n\n\n\n"
}

## RSYNC ERROR WARNINGS
ERROR_WARNINGS() {
    if [ "$RSYNC_STATUS" -eq 0 ]; then

        # SUCCESSFUL
        printf "\nSync successful"
        printf "\nExit status(0): %s\n" "$RSYNC_STATUS"

    else
        # ERRORS OCCURRED
        printf "\nSome error occurred"
        printf "\nExit status: %s\n" "$RSYNC_STATUS"
    fi
}

## CONFIRMATION (YES/NO)
CONFIRM_YESNO() {
    while true; do
        prompt="${1}"
        printf "%s (Yes/No): " "${prompt}" >&2 # FUNCTION CALL REQUIRES TEXT PROMPT ARGUMENT
        read -r reply
        case $reply in
            [Yy]* ) return 0;; # YES
            [Nn]* ) return 1;; # NO
            * ) printf "Options: y / n\n";;
        esac
    done
}



##### START

# CHECK FOR ROOT
if ! [ "$(id -u)" = 0 ]; then

    # EXIT WITH NO ACTIONS TAKEN
    printf "\nRoot access required\n\n"
    exit 1

else
    printf "\nStarting backup process..."

    # ${SOURCE_DIR} TO ${DESTINATION_DIR} DRY RUN
    SPACER
    printf "\nStarting %s dry run\n" "${SOURCE_DIR}"
    rsync --dry-run ${OPTIONS} ${OPTIONS_EXTRA} ${EXCLUDE_DIR} "${SOURCE_DIR}" "${DESTINATION_DIR}"
    RSYNC_STATUS=$?
    ERROR_WARNINGS

    # CONFIRM ${SOURCE_DIR} TO ${DESTINATION_DIR} BACKUP
    SPACER
    if CONFIRM_YESNO "Proceed with ${SOURCE_DIR} backup?"; then

        # CONTINUE ${SOURCE_DIR} TO ${DESTINATION_DIR} BACKUP & EXIT
        printf "\nContinuing %s backup\n" "${SOURCE_DIR}"
        rsync ${DRY_RUN} ${OPTIONS} ${OPTIONS_EXTRA} ${EXCLUDE_DIR} "${SOURCE_DIR}" "${DESTINATION_DIR}"
        RSYNC_STATUS=$?
        ERROR_WARNINGS

        printf "\n%s backup completed\n\n" "${SOURCE_DIR}"
        exit 0

    else
        # SKIP ${SOURCE_DIR} TO ${DESTINATION_DIR} BACKUP & EXIT
        printf "\n%s backup skipped\n\n" "${SOURCE_DIR}"
        exit 0
    fi
fi

##### FINISH

I would like to adapt this script so that I can add multiple copies of the following variables:

## PATHS (USER DEFINED)
SOURCE_DIRECTORY_PATH="/home"
SOURCE_DIRECTORY_PATH_EXCLUSIONS="--exclude=lost+found --exclude=.cache/*"
BACKUP_NAME="home"
BACKUP_BASE_PATH="/backup"

without having to make multiple copies of the following commands within the running script:

    # ${SOURCE_DIR} TO ${DESTINATION_DIR} DRY RUN
    SPACER
    printf "\nStarting %s dry run\n" "${SOURCE_DIR}"
    rsync --dry-run ${OPTIONS} ${OPTIONS_EXTRA} ${EXCLUDE_DIR} "${SOURCE_DIR}" "${DESTINATION_DIR}"
    RSYNC_STATUS=$?
    ERROR_WARNINGS

I'm mainly just looking for a way to avoid touching the script commands themselves so I don't have to change the variable names for each additional directory I want to add. I'm not sure what that would be called or where to look. Any help would be greatly appreciated.
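In other words, I suspect I'm after something like a shell function with positional parameters, sketched below (the /etc and /var/www entries are made-up examples; it reuses the script's existing SPACER, ERROR_WARNINGS, CONFIRM_YESNO and OPTIONS definitions):

# Sketch: fold the repeated dry-run/confirm/backup block into one function
# that takes the user-defined values as arguments.
DO_BACKUP() {
    # $1 = source path, $2 = backup name, $3 = exclusions
    SOURCE_DIR="${1}/"
    DESTINATION_DIR="${BACKUP_BASE_PATH}/${2}/"
    EXCLUDE_DIR="${3}"

    SPACER
    printf "\nStarting %s dry run\n" "${SOURCE_DIR}"
    rsync --dry-run ${OPTIONS} ${OPTIONS_EXTRA} ${EXCLUDE_DIR} "${SOURCE_DIR}" "${DESTINATION_DIR}"
    RSYNC_STATUS=$?
    ERROR_WARNINGS

    SPACER
    if CONFIRM_YESNO "Proceed with ${SOURCE_DIR} backup?"; then
        rsync ${DRY_RUN} ${OPTIONS} ${OPTIONS_EXTRA} ${EXCLUDE_DIR} "${SOURCE_DIR}" "${DESTINATION_DIR}"
        RSYNC_STATUS=$?
        ERROR_WARNINGS
    else
        printf "\n%s backup skipped\n\n" "${SOURCE_DIR}"
    fi
}

# One line per directory instead of one block per directory:
DO_BACKUP /home    home     "--exclude=lost+found --exclude=.cache/*"
DO_BACKUP /etc     etc      ""
DO_BACKUP /var/www websites ""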

 

This is an insightful short video essay that talks about how we cope as people during these difficult times we are all facing.

I really enjoy the artistic style and editing of his videos as well, which alone I think makes it worth sharing.

 
 

At the moment I am using Cloudflare as a way to provide SSL to my self-hosted site. The site sits behind a residential connection that blocks incoming data on commonly used ports, including 80 and 443. It's a perfectly fine and reasonable solution which does what I want. But I'm looking to try something different.

What I would like to try is using Let's Encrypt on a non standard port. I understand there are plenty of good reasons not to do this, mainly that some places such as workplaces may block higher-numbered ports for security reasons. That's fair, but I am still interested in learning how to encrypt uncommon ports with Let's Encrypt.

Currently I am using Nginx Proxy Manager to handle Let's Encrypt certificates. It's able to complete the DNS challenge required to prove I own the domain name and handles automated certificate renewals as well. At present I have NPM acting as a reverse proxy, guiding outside connections from Cloudflare on port 5050 to port 80 on NPM. Then the connection gets sent out locally to port 81, which is the admin web page for NPM (I'm just using it as a page to test if the connection is secured).

Whenever I enable Let's Encrypt SSL and try to connect to my site, the connection times out and nothing happens. I'm not sure if Let's Encrypt is expecting to reach ports 80/443 or if there is something wrong with my reverse proxy settings that breaks the encryption along the way. Most discussions just assume ports 80/443 are open which is fair since that's the most common situation. The few sites discussing the use of uncommon ports are either many years dated or people talking about success without sharing any details. I'm sort of at the end of what I can search at this point.

What I'm hoping to learn out of all this is how encryption and reverse proxies work together, because those two things have been a struggle for me to understand throughout this whole learning process. I would appreciate it a lot if anyone had any resources or experiences to share about this.
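One thing I've gathered so far (a sketch, not verified on my setup): with a DNS-01 challenge, certificate issuance never touches ports 80/443 at all, so blocked inbound ports shouldn't matter for getting the cert itself. For example, with certbot's Cloudflare DNS plugin (package names and paths may differ on your distro):

# DNS-01 proves domain ownership via a TXT record, so no inbound ports
# are involved. Assumes python3-certbot-dns-cloudflare is installed and
# cloudflare.ini holds a Cloudflare API token.
sudo certbot certonly \
    --dns-cloudflare \
    --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
    -d domainname.com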

 

I've recently been able to set up Lemmy and PieFed instances on a Raspberry Pi 5 and wanted to share the process for anyone else interested in self hosting an instance.

The following instructions are based on using a used Raspberry Pi 5 (ARM64) plus a USB external hard drive for the hardware. I used the Raspberry Pi OS image, which is based on Debian 12. The following instructions should be similar enough for other Debian 12 distributions and should hopefully get the same results.

The only other purchase I've made was a domain name which was super cheap ($15 a year which includes hiding WHOIS information). Everything else is free.

My residential ISP service blocks incoming data on "business" ports such as Port 80 and 443. Users won't be able to access your site securely if these ports block incoming data. To work around this I used Cloudflare Tunnels. This allows users to access your site normally. Cloudflare Tunnel will send incoming data to a port of your choosing (between 1024-65,535) and users can access your self-hosted instance.

Cloudflare also provides Transport Layer Security (TLS), which encrypts traffic and protects connections. This also means your website goes from HTTP:// to HTTPS:// in the address bar. Federation requires TLS, so this will be useful. Cloudflare Tunnel also introduces some complications which I'll address later.

Quick Links

Requirements

  • A purchased Domain Name
  • Set Cloudflare as your Domain Name's primary Domain Name Servers (DNS). See here
    • Do this a day in advance, as changes may take up to a day to take effect.
  • Raspberry Pi 5 with Raspberry Pi OS (64) image installed
    • You can use other hardware with Debian 12 Operating Systems but I can't guarantee these instructions will be the exact same
  • A USB external hard drive
    • Something with good read/write speeds will help
  • Access to any routers on your private network
    • You will need access to Port Forwarding options. You will need to read any router manuals or ask your Internet Service Provider since this is different for every device.
  • SSH remote access to Raspberry Pi 5. See here

Setup & Software Installation (LOCAL HOST)

The required software to host Lemmy or PieFed will include

  • Docker
  • Cloudflared
  • Lemmy or PieFed

Additional software I will also cover, but which isn't strictly necessary:

  • Nginx Proxy Manager (NPM)
  • UFW/GUFW - Simple Firewall
  • RSync - For making backups
  • Termux - Android terminal app that will be used for remote SSH access

Docker (LOCAL HOST)

The official Docker instructions are clear, quick and simple. The process will also add their repository information for quick and easy updates. This will be installed as a service on your operating system.
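For reference, the quickest route is Docker's convenience script (the official docs also walk through a manual apt repository setup if you prefer):

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh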

Port Forwarding/Reverse Proxy

Port Forwarding

Pick a port number between 1024-65,535. This is how Cloudflare will send data and remote connections to your instance without worrying about blocked ports. I like to use 5050 because it's simple, easy to remember and not used by any of my other self-hosted services. To be consistent, for the rest of this guide I will use port 5050 as an example. Feel free to replace it with any port number you feel like using.

Router settings are different for each device, refer to a manual or call your ISP for support depending on your situation.

  1. SSH login to your Raspberry Pi and enter the command hostname -I
    • This will print the IP addresses used by the host machine. The first IP address printed will be your local IP address. The rest of the addresses are NOT needed and can be ignored.
  2. Access your Port Forwarding settings in your private network router.
  3. Find your Raspberry Pi device by the Local IP address
  4. Add a rule to allow TCP connections on port 5050.
    • If your router port forwarding settings show Internal and External fields, simply add 5050 to both fields.
  5. Save

If you are only hosting a Lemmy or PieFed instance, you will be able to do that without the need for a Reverse Proxy, which is described below. In this case you can simply use the default ports for Lemmy or PieFed. Replace my example port 5050 with the following depending on your needs:

  • Lemmy default port: 10633
  • PieFed default port: 8030

Reverse Proxy

A reverse proxy allows the local host machine to distribute incoming user connections to different services hosted on the local machine. For example, all data from Cloudflare comes in on port 5050 when accessing the DOMAINNAME.COM address. I can use Subdomains to redirect incoming connections on port 5050 to open ports on my local host machine.

For example, both Lemmy and PieFed can be hosted at the same time. We can use the subdomains lemmy. and piefed. to redirect traffic. When a user types lemmy.DOMAINNAME.COM into the address bar, Cloudflare will send the connection through 5050 to your home and private router which then continues to the Reverse Proxy. The Reverse Proxy running on the local host machine will catch the subdomain request and immediately switch to port 10633 where a connection to Lemmy will be completed. Typing in piefed.DOMAINNAME.COM will guide all requests to port 8030 where PieFed is running and complete that connection.

For simplicity I use Nginx Proxy Manager: it's docker based, with an easy-to-use web interface accessible through any Web Browser on your local network. It has its limitations but works fine for the current needs.

Nginx Proxy Manager (LOCAL HOST)

NPM is extremely simple to set up. Simply create a new folder, create a docker-compose.yml file filled with the necessary information and then run the container.

  1. mkdir ~/npm
  2. cd ~/npm
  3. nano docker-compose.yml
  4. Paste the following into the docker-compose.yml file and save:
  • docker-compose.yml
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: always
    ports:
      - '5050:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt

Note that port 5050: externally connects to NPM internally through port :80. Make sure 5050 matches the Cloudflare Tunnel port you have decided on using.

  5. docker compose up -d and wait for the services to start running
  6. In your Web Browser on any device connected to your private network, type your Raspberry Pi's local address followed by :81 into the address bar. For example 192.168.0.100:81. See Port Forwarding for help finding your local IP address.
  7. The login page will ask for account details. Enter the following:
    • Account = admin@example.com
    • Password = changeme
    • You will now be asked to create a new admin account
  8. Reverse Proxy Setup:
    1. After Login, click Hosts -> Proxy Hosts -> Add New Proxy
    2. Domain Names field: Your DOMAINNAME.COM
      • Press Enter to store that domain name, NPM won't store your domain name if you don't hit enter
    3. Forward Hostname/IP field: Your local host machine IP address (example 192.168.0.100). See Port Forwarding for help finding your local IP address.
    4. Forward Port field: I like to use port 81 to test Cloudflare Tunnels before installing Lemmy or PieFed. This is the login page for NPM. This can be quickly changed to the ports listed below after confirming a secure connection from Cloudflare Tunnels.
      • Lemmy: 10633
      • PieFed: 8030
    5. Block Common Exploits: Enabled
    6. Websockets Support: Enabled
    7. Save

Cloudflared (LOCAL HOST)

!!Only proceed with these instructions after setting Cloudflare as your Primary DNS provider. This process may take up to a day after changing nameservers!!

The following instructions do a few things. First you will install Cloudflared (with a 'd'). Then you will be asked to log in, create a tunnel, run the tunnel and then create a service (while the current tunnel is running) so your tunnel runs automatically from startup.

I've noted that this will be installed on the local host (where you are hosting an instance); we will be installing Cloudflared on multiple devices for reasons I will cover later. Hopefully this reduces confusion later on.

  1. Service Install & Create Tunnel & Run Tunnel
    1. Select option -> Linux
    2. Step 4: Change -> credentials-file: /root/.cloudflared/<Tunnel-UUID>.json -> credentials-file: /home/USERNAME/.cloudflared/<Tunnel-UUID>.json
    3. Step 5: SKIP step 2; you will get an error and it's not important anyway.
    4. Step 6: Keep this window open after running the new tunnel
      • ONLY AFTER completing step 2.i.d. below (Run as a service), press CTRL + C to exit this tunnel
  • Example config.yml file (See above about Step 4)
tunnel: TUNNEL_ID
credentials-file: /home/USERNAME/.cloudflared/TUNNEL_ID.json
ingress:
  - hostname: DOMAINNAME.COM
    service: http://localhost:5050/
  - service: http_status:404
  2. Run as a service

    1. Open and SSH login to the Raspberry Pi in a new terminal window
      1. You will get an error if you do not copy your config.yml from your Home folder to /etc/cloudflared. You will need to copy this file again if you make any changes to the config.yml such as adding more tunnels. This will be covered later when setting up Remote SSH access.
        • sudo cp ~/.cloudflared/config.yml /etc/cloudflared/config.yml
      2. cloudflared service install
      3. systemctl start cloudflared
      4. systemctl status cloudflared
        • Check to see if things are green and working, then press CTRL + C when done to exit
        • You can now stop the running tunnel from the first step as previously stated (See Step 1.iv.)
      5. You can close this terminal window now
  3. Enable SSL connections on Cloudflare site

    • Log in to your account on Cloudflare and simply click on the following links
    1. From the main page -> Your DOMAINNAME.COM -> SSL/TLS -> Configure -> Full -> Save
    2. SSL/TLS -> Edge Certificates -> Change the following settings on Cloudflare to match what's listed below:
      • Always Use HTTPS: On
      • Opportunistic Encryption: On
      • Automatic HTTPS Rewrites: On
      • Universal SSL: Enabled

If you used NPM as a reverse proxy and it's set to port 81, go to any Web Browser and type in your DOMAINNAME.COM. You should be directed to NPM's login page. Check the address bar and your domain name should have a padlock symbol followed by https://domainname.com/. Note that it should read HTTPS:// (with an s) and not HTTP:// (without an s). HTTPS along with the padlock symbol means your connections are properly encrypted.

This is the most complicated step in self-hosting. If you can confirm your connection is encrypted, setting up other services and webapps is fairly straightforward.
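A quick way to double-check from a terminal (replace the domain with yours):

# Should print an HTTP status line over a valid TLS connection;
# curl will error out instead if the certificate chain is broken.
curl -sSI https://domainname.com | head -n 1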

Lemmy (LOCAL HOST)

The Lemmy instructions are simple and straightforward. When changing the fields asked of you in the instructions, it's helpful to search and replace the required fields. In nano, when editing a file, press CTRL + \ and follow the instructions at the bottom of the window. This will find and replace text.

The Lemmy instructions show text for editing with {{ Example }}. To avoid confusion, those curly braces must be removed and replaced with the expected data.

  • If you used NPM's login page to test Cloudflare Tunnels, you will need to login to NPM and change the Port Forward from 81 to 10633
    • Click Hosts -> Proxy Hosts -> Click the 3-Dots for your DOMAINNAME.COM proxy rule -> Edit & Save
  1. Follow Lemmy Install Instructions
    • IGNORE steps Reverse Proxy/Webserver & Let's Encrypt since we have addressed those steps earlier with NPM and Cloudflare Tunnels/Security.
  2. Through a Web Browser, type in your DOMAINNAME.COM and you should see an admin creation page. Complete that and the initial instance setup afterwards.
  3. Test federation, replace capitals with the required information
    • curl -H 'Accept: application/activity+json' https://domainname.com/u/LEMMY_USERNAME

      • If you see .json information, Lemmy is federated
      • If you see .html information, Lemmy is NOT federated

Updating Lemmy Docker Container

See here for more information.

  1. docker compose down
  2. docker compose pull
  3. docker compose up -d

PieFed (LOCAL HOST)

The PieFed installation instructions will provide more detailed information about each step. This guide does NOT cover any email setup for PieFed.

  • If you used NPM's login page to test Cloudflare Tunnels, you will need to login to NPM and change the Port Forward from 81 to 8030

    • Click Hosts -> Proxy Hosts -> Click the 3-Dots for your DOMAINNAME.COM proxy rule -> Edit & Save
  • PieFed Install Instructions

  1. Download & Prepare files
    1. git clone https://codeberg.org/rimu/pyfedi.git
    2. cd pyfedi
    3. cp env.docker.sample .env.docker
  2. Edit & Save files
    1. nano .env.docker
      1. Change value for SECRET_KEY with random numbers and letters
      2. Change value for SERVER_NAME with your DOMAINNAME.COM
    2. nano compose.yaml
      • Note ports 8030:5000. You can change the external container port (8030) if you are using a custom port. Do NOT touch the internal container port (5000).
        • ports:
        • - '8030:5000'
  3. Build
    1. export DOCKER_BUILDKIT=1
    2. sudo docker compose up --build
      • Wait until text stops scrolling
  4. Access your DOMAINNAME.COM from a Web Browser
    1. You may see a message that says database system is ready to accept connections in your terminal window after PieFed is done installing and loading. This means you are ready to attempt a connection through your Web Browser now.
      • If you see constant permission errors, Open and SSH login to the Raspberry Pi in a new terminal window and do the following to allow PieFed to access the required folders:
        1. cd ~/pyfedi
        2. sudo chown -R USERNAME:USERNAME ./pgdata
          • You can leave this window open, it can be used for step 5.
    2. You may see an "Internal Server Error" after your first connection attempt. This is normal. You will see movement in your terminal window on each attempt to connect to PieFed. Now you can proceed to initialize the database.
  5. Initialize Database
    1. Open and SSH login to the Raspberry Pi in a new terminal window
      1. sudo docker exec -it piefed_app1 sh
      2. export FLASK_APP=pyfedi.py
      3. flask init-db
        • Enter username/email/password. Email is optional.
      4. Access PieFed from your Web Browser again. PieFed should now display. You can log in as admin with the same username and password.
      5. exit
      6. You can close this terminal window now
  6. Return to the terminal with the running docker build and press CTRL + C to stop PieFed.
  7. Run PieFed in the background
    • docker compose up -d
  8. Setup Cron (Automated) Tasks
    • This will set up automated tasks for daily maintenance, weekly maintenance and email notifications.
    • Change USERNAME to your username.
    1. Setup automated tasks
      1. sudo nano /etc/cron.d/piefed
        1. Paste and Save
  • /etc/cron.d/piefed file
5 2 * * * USERNAME docker exec piefed_app1 bash -c "cd /app && ./daily.sh"
5 4 * * 1 USERNAME docker exec piefed_app1 bash -c "cd /app && ./remove_orphan_files.sh"
1 */6 * * * USERNAME docker exec piefed_app1 bash -c "cd /app && ./email_notifs.sh"
  9. OPTIONAL: Environment Variables
    • Some functions such as email or captchas won't work unless you add the necessary variables into the ~/pyfedi/.env.docker file. Look at ~/pyfedi/env.sample and add the other variables to ~/pyfedi/.env.docker according to your needs.
    1. View the sample file
      • nano ~/pyfedi/env.sample
    2. Edit & Save .env.docker file
      • nano ~/pyfedi/.env.docker
    3. Restart PieFed Docker container
      • docker compose down && docker compose up -d

Updating PieFed Docker Container

  1. docker compose down
  2. git pull
  3. docker compose up --build
  4. docker compose down && docker compose up -d

Cloudflare Website Settings

These settings are suggested to help manage traffic. See here for more detailed information.

  1. Exclude Settings
    1. From the main page -> Your DOMAINNAME.COM -> Security -> WAF -> Custom Rules -> Click Create Rule -> Change the following settings and values on Cloudflare to match what's listed below:
      • Rule Name: Allow Inbox
      • Field: URI Path
      • Operator: contains
      • Value: /inbox
      • Log matching requests: On
      • Then take action...: Skip
      • WAF components to skip: All remaining custom rules
    2. Click Deploy to complete
  2. Caching Settings
    1. From the main page -> Your DOMAINNAME.COM -> Caching -> Cache Rules -> Click Create rule -> Change the following settings on Cloudflare to match what's listed below:
      • Rule name: ActivityPub
      1. Custom filter expressions: On
        1. Field: URI Path
        2. Operator: Starts with
        3. Value: /activities/
      2. Click Or
      3. Repeat until you have values for 4 rules total containing the values:
        • /activities/
        • /api/
        • /nodeinfo/
        • /.well-known/webfinger
      • Cache Eligibility: On
      • Edge TTL -> Click + add setting
        • Click Ignore cache-control header and use this TTL
        • Input time-to-live (TTL): 2 hours
      • Click Deploy to complete
    2. Click Create rule again
      • Rule name: ActivityPub2
      1. Custom filter expressions: On
        1. Field: Request Header
        2. Name: accept
        3. Operator: contains
        4. Value: application/activity+json
      2. Click Or
      3. Repeat until you have 2 rules total containing the values:
        • application/activity+json
        • application/ld+json
      • Cache Eligibility: On
      • Edge TTL -> Click + add setting
        • Click Ignore cache-control header and use this TTL
        • Input time-to-live (TTL): 10 seconds
      • Click Deploy to complete
  3. Optimization Settings
    1. Speed -> Optimization -> Content Optimization -> Change the following settings on Cloudflare to match what's listed below:
      • Speed Brain: Off
      • Cloudflare Fonts: Off
      • Early Hints: Off
      • Rocket Loader: Off
  4. Cloudflare Tokens for .env.docker File
    1. Create an API "Zone.Cache Purge" token
      1. After logging in to Cloudflare, go to this page
      2. Click Create Token -> Click Get Started under Create Custom Token
      3. Token Name -> PieFed
      4. Under Permissions -> Change the following drop-down menus to match what's listed below
        • First drop down menu: Zone
        • Second drop down menu: Cache Purge
        • Third drop down menu: Purge
      5. Click Continue to summary -> Click Create Token
      6. Copy the generated API Token. This will be used for CLOUDFLARE_API_TOKEN in the .env.docker file. Note, once you leave this screen, the API token will remain but the generated code that can be copied will disappear forever.
    2. Copy API Zone ID
      1. From the main page -> Your DOMAINNAME.COM -> Scroll down and look for API Zone ID in the far right column
      2. Copy API Zone ID Token. This will be used for CLOUDFLARE_ZONE_ID in the .env.docker File.
    3. The following step must be completed on the Raspberry Pi (LOCAL HOST) where PieFed is running:
      1. nano ~/pyfedi/.env.docker
        1. Add the following lines with your copied API Tokens & Save
          • CLOUDFLARE_API_TOKEN = 'ZONE.CACHE_PURGE_TOKEN'
          • CLOUDFLARE_ZONE_ID = 'API_ZONE_ID_TOKEN'
      2. Restart PieFed Docker container
        • docker compose down && docker compose up -d

Troubleshooting

  • If you receive an error while posting images, the folder permissions will need to change. Replace USERNAME with your username.
    1. cd ~/pyfedi
    2. sudo chown -R USERNAME:USERNAME ./media

Support and Services

Remote SSH Access Setup

With how Cloudflare works, SSH is not as simple and requires a bit more effort. I'm going to explain how to prepare Termux, an Android terminal app, so you can access the Raspberry Pi remotely. The steps should be quite similar if you are using a Debian 12 distribution.

For remote SSH to work, you must provide a config file with some information. Fortunately, cloudflared will give you all the information you need for the config file.

A subdomain that will be used for the SSH connection will also need to be created. In this example I will simply use the subdomain ssh. which will look like this ssh.DOMAINNAME.COM.

The subdomain must be set up first before setting up the remote clients. You will use the Cloudflare Tunnel name (CLOUDFLARE_TUNNEL_NAME) that you previously created. Also note, the config file edited on the Raspberry Pi Local Host must be copied again to /etc/cloudflared before cloudflared is restarted.

The Cloudflare Tunnel name is the name you chose. Not to be confused with TUNNEL_ID which is a bunch of random letters and numbers that was generated for your tunnel.

For this example I'll use port 22 for SSH connections. This is the default port for SSH connections.

Raspberry Pi Setup (LOCAL HOST)

  1. SSH login to the Raspberry Pi
  2. cloudflared tunnel route dns CLOUDFLARE_TUNNEL_NAME ssh.DOMAINNAME.COM
  3. nano ~/.cloudflared/config.yml
    1. Paste the following and change TUNNEL_ID and DOMAINNAME.COM & Save
  • Example config.yml file
tunnel: TUNNEL_ID
credentials-file: /home/USERNAME/.cloudflared/TUNNEL_ID.json
ingress:
  - hostname: DOMAINNAME.COM
    service: http://localhost:5050/
  - hostname: ssh.DOMAINNAME.COM
    service: ssh://localhost:22
  - service: http_status:404
  1. sudo cp ~/.cloudflared/config.yml /etc/cloudflared/config.yml
  2. sudo systemctl restart cloudflared

Desktop: Install (REMOTE CLIENT)

Android Termux: Install (REMOTE CLIENT)

Termux does not have SSH installed by default. This will install both ssh and cloudflared:

  • apt install openssh cloudflared -y

Login and Setup (REMOTE CLIENT)

!!Continue here after preparing the LOCAL HOST and REMOTE CLIENTs first!!

!!The following steps are to be completed on the REMOTE CLIENTs!!

  1. Login
    1. cloudflared tunnel login
    2. Complete login
  2. cloudflared access ssh-config --hostname ssh.DOMAINNAME.COM
    1. COPY suggested text
    2. nano ~/.ssh/config
      1. PASTE suggested text & Save
  3. ssh USERNAME@ssh.DOMAINNAME.COM
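For reference, the suggested text from step 2 is an SSH ProxyCommand stanza roughly like the following (the exact cloudflared path varies by install, so copy what the command prints rather than this sketch):

Host ssh.DOMAINNAME.COM
  ProxyCommand /usr/bin/cloudflared access ssh --hostname %h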

Backup/Restore Setup

I decided to keep it simple and use the rsync command which comes already installed on Raspberry Pi OS. The guide linked below does a good job of explaining rsync in a step by step process.

Below the linked guide I'll provide an example of the commands I use to back up and restore my Raspberry Pi. This creates a copy of the /rootfs folders that make up your Raspberry Pi Operating System and User folders. The commands will exclude some folders that may cause issues when restoring a backup. The guide linked below has more details.

Since I am going to power down the Pi and physically connect its hard drive to my computer, I don't have to worry about making backups on live, running storage.

The below commands assume I also have an additional EXTERNAL_STORAGE hard drive connected to my computer. This means the backup command will copy the contents from the Raspberry Pi drive (/rootfs folder) to the EXTERNAL_STORAGE drive (/EXTERNAL_STORAGE/backup folder). The restore command will copy the contents from the EXTERNAL_STORAGE drive (/EXTERNAL_STORAGE/backup/rootfs folder) to the Raspberry Pi drive (/rootfs folder).

rsync WILL delete data on the target location to sync all files and folders from the source location. Be mindful of which direction you are going to avoid any losses. I suggest testing it out on some other folders before committing to backing up and restoring the entire Raspberry Pi. The guide linked below also covers exclusions to minimize backup sizes.

The backup storage MUST be formatted in EXT4 to make sure file permissions and attributes remain the same.

  1. nano ~/.bash_aliases
    1. Add the aliases below & Save
      • alias rsyncBACKUP="sudo rsync -avxhP --delete --exclude={'proc/','sys/','dev/','tmp/','run/','mnt/','media/','home/USERNAME/.cache','lost+found'} /media/USERNAME/rootfs /media/USERNAME/EXTERNAL_STORAGE/backup/"
      • alias rsyncRESTORE="sudo rsync -avxhP --delete --exclude={'proc/','sys/','dev/','tmp/','run/','mnt/','media/','home/USERNAME/.cache','lost+found'} /media/USERNAME/EXTERNAL_STORAGE/backup/rootfs/ /media/USERNAME/rootfs"
  2. Reset bash in terminal
    • . ~/.bashrc
  3. Backup system TO EXTERNAL_STORAGE
    • !!EXT4 file system only!!
    • rsyncBACKUP
  4. Restore system FROM EXTERNAL_STORAGE
    • rsyncRESTORE
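Optionally, a dry-run twin of the backup alias (the same command with -n added) makes it easy to preview exactly what --delete would remove before committing:

alias rsyncBACKUPdry="sudo rsync -avxhPn --delete --exclude={'proc/','sys/','dev/','tmp/','run/','mnt/','media/','home/USERNAME/.cache','lost+found'} /media/USERNAME/rootfs /media/USERNAME/EXTERNAL_STORAGE/backup/"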

Firewall (LOCAL HOST)

  1. Install: Choose ONE
    • Command line only
      • sudo apt install -y ufw
    • Graphical Interface with command line access
      • sudo apt install -y gufw

I haven't figured out how to properly set this up for myself yet, but I figure it's probably worth having for an additional layer of protection.

 

I recently got my hands on a lightly used Raspberry Pi 5 and have been playing around with it and breaking things while trying to learn my way around self hosting. I have a couple questions now that I've hit a bit of a roadblock in learning.

  1. Is it possible to set up Lemmy as a local host on a local network only? I'm not worried about federated data from other instances. At this point I just want to experiment and break things before I commit to buying a Top Level Domain name.

  2. How exactly does a TLD work? I've tried searching up how to redirect traffic from a TLD to my raspberry pi. Since I don't know much about hosting or networking, I don't know what to search up to find the answer I'm looking for.

  3. How do I protect myself while self hosting? I know the Lemmy documentation suggests using Let's Encrypt, is that all I need to do in order to protect any private data being used?

My goal in the future is to have a local, text-only instance that may connect with a small number of whitelisted instances.

 

I hope this is the appropriate community for this question, if there is a better community for this, I can post it there.

It's been a bit over a week and I've had time to accept what has happened and what will happen as a result of America's recent decision. Even though I am from Canada, the news has many direct and indirect consequences that still have me concerned for the near future.

I feel that right now is the time for me to start and build a local community. I just don't know how to do that or where to begin.

I'm not the most social person so networking and leading will be a huge hurdle for me. I'm not creative enough with drawing or writing so creating flyers or propaganda would also be a challenge for me. I've always been more comfortable working and building things with my hands and have a pretty deep interest in land management and sustainability.

I also have the additional issue of being a person of colour in a mainly white town. Lifted trucks, SUVs, unwelcoming stares and plenty of entitled behaviours. The population in this town is mostly young families with younger children or old white folks who I doubt have any care for the future ahead of us. There's not much in between.

Over the past couple years, I have been buying various types of seeds and collecting seeds from my garden as plants mature. I've been trying to create a seed library for myself. Lately I've been thinking of trying to start a local seed library as a way to start some sort of community. Maybe even use that as a way to teach more local, sustainable habits.

I just don't know where to begin and starting feels quite overwhelming for one person.

I'm hoping to start a discussion or even brainstorm some ideas on what people can do, how to begin and how to follow through with building local communities.

Any idea outside of a seed library is welcome. It would help to have a nice, broad spread of ideas to draw from. I believe that would help keep the progress of the local communities adaptable as time goes on.

 

I wanted to share all the mushroom parties I came across in October on some local hiking trails I visit regularly. No idea what any of them are, but they were all found in Southern Ontario.
