Lemmling

joined 2 years ago
[–] Lemmling@lemm.ee 5 points 3 days ago

I used them in parallel for a while before switching to AdGuard. The key features that mattered to me were support for upstream DNS servers via DoH, detailed query logs, and wildcard domain rewriting. A better-looking UI is also a plus.

[–] Lemmling@lemm.ee 5 points 3 days ago (1 children)

Good news! Hope they implement detailed query logs and upstream DoH DNS support next.

 
[–] Lemmling@lemm.ee 2 points 1 month ago (1 children)

Nice flower indeed.

[–] Lemmling@lemm.ee 3 points 1 month ago

There is a discussion about immich stacking here: https://github.com/immich-app/immich/discussions/2479 and automatic stacking is on their roadmap. There is a high chance that the APIs will break by then. I always prefer native features over third-party tools.

[–] Lemmling@lemm.ee 2 points 1 month ago (2 children)

Sorry about that, updated the title.

 

Dear fellow selfhosters,

If you use immich and have a digital camera that shoots JPG+RAW, you have probably noticed the duplicate images taking up your screen space. I recently found out that immich has a neat feature called stacking that lets you group images in the timeline. I wrote a very simple Python script to search for and stack the JPG and RAW images in my instance and thought I would share it with the community. Make sure you edit the search parameters and API key, and read the whole script before running it.

For advanced immich stacking use this https://github.com/tenekev/immich-auto-stack

NOTE: I did not know this project existed before I wrote the script :)

Happy Holidays!

Immich version: v1.123.0

import json
import requests
from pathlib import Path
from collections import defaultdict

# ---------------------------------
# Configuration & Constants
# ---------------------------------
API_KEY = "API_KEY"
BASE_URL = "https://immich.local.website.tld"  # no trailing slash; endpoint URLs below append /api/...
RAW_FILE_EXT = ".RAF"
HEADERS = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "x-api-key": API_KEY
}
STACKS_URL = f"{BASE_URL}/api/stacks"
SEARCH_URL = f"{BASE_URL}/api/search/metadata"
ASSETS_URL = f"{BASE_URL}/api/assets"  # For checking if an asset is already stacked

# ---------------------------------
# 1. CREATE SEARCH PAYLOAD
# ---------------------------------
def create_search_payload(page: int) -> str:
    """
    Build the JSON payload to send with the search request.
    Modify search settings for your camera
    """
    payload = {
        "make": "FUJIFILM",
        "size": 1000,
        "page": page,
        "model": "X-S20",
        "takenAfter": "2024-12-20T00:00:00.000Z"
    }
    return json.dumps(payload)

# ---------------------------------
# 2. FETCH SEARCH RESULTS
# ---------------------------------
def fetch_search_results(page: int) -> dict:
    """
    Send a POST request to the search metadata endpoint 
    and return the parsed JSON response.
    """
    payload = create_search_payload(page)
    response = requests.post(SEARCH_URL, headers=HEADERS, data=payload)
    response.raise_for_status()  # raises an exception if the request fails
    return response.json()

# ---------------------------------
# 3. PROCESS SEARCH RESULTS
# ---------------------------------
def process_search_results(search_results: dict, assets: defaultdict) -> None:
    """
    Parse the items in the search results and store them in the assets dict.
    The key is the file stem (without suffix), and the value is a list of items.
    """
    for item in search_results["assets"]["items"]:
        original_file_name = Path(item["originalFileName"])
        assets[original_file_name.stem].append(item)

# ---------------------------------
# 4a. HELPER: Check if a single asset is already stacked
# ---------------------------------
def is_asset_stacked(asset_id: str) -> bool:
    """
    Perform a GET request on /api/assets/:id to determine if 
    that asset is already part of a stack.

    Returns True if 'stack' is present (and not None) in the response.
    """
    url = f"{ASSETS_URL}/{asset_id}"
    response = requests.get(url, headers=HEADERS)
    response.raise_for_status()
    data = response.json()

    # If the 'stack' key exists and is not None, the asset is stacked
    return bool(data.get("stack"))

# ---------------------------------
# 4b. STACK IMAGES
# ---------------------------------
def stack_images(image: str, items: list) -> None:
    """
    For each image group (stem), determine if it should be stacked. 
    1) Check if any item in the group is already stacked. If yes, skip.
    2) Order/reverse items if needed based on suffix. To ensure the first item is a JPG, which will be the primary image in the immich stack
    3) If the group meets the criteria, send a POST request to stack them.
    """
    ids = [item["id"] for item in items]
    name_suffixes = [Path(item["originalFileName"]).suffix.upper() for item in items]

    # Skip stacking if any asset is already stacked
    if any(is_asset_stacked(asset_id) for asset_id in ids):
        print(f"Skipping '{image}' because one or more assets are already stacked.")
        return

    # If the first suffix is RAW_FILE_EXT, reverse the order
    if name_suffixes and name_suffixes[0] == RAW_FILE_EXT:
        ids.reverse()
        name_suffixes.reverse()

    # Stack if more than one file and the first is .JPG
    if len(name_suffixes) > 1 and name_suffixes[0] == ".JPG":
        payload = json.dumps({"assetIds": ids})
        response = requests.post(STACKS_URL, headers=HEADERS, data=payload)
        response.raise_for_status()
        print(f"{response.status_code}: {image} - Stacked {len(ids)} images")

# ---------------------------------
# 5. MAIN LOGIC
# ---------------------------------
def main():
    assets = defaultdict(list)
    page = 1

    # Paginate until no nextPage
    while True:
        search_results = fetch_search_results(page)
        items_on_page = search_results["assets"]["items"]
        print(f"Page {page} - Retrieved {len(items_on_page)} items")

        # Store items by grouping them by file stem
        process_search_results(search_results, assets)

        next_page = search_results["assets"]["nextPage"]
        page += 1
        if next_page is None:
            break

    # Process each group to optionally stack images
    for image, items in assets.items():
        stack_images(image, items)

if __name__ == "__main__":
    main()
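
The core pairing idea in the script above is grouping a JPG and its RAW sibling by their shared file stem. As a standalone illustration (the file names here are made up, not from a real instance):

```python
from collections import defaultdict
from pathlib import Path

def group_by_stem(filenames):
    """Group file names that share a stem, e.g. DSCF0001.JPG and DSCF0001.RAF."""
    groups = defaultdict(list)
    for name in filenames:
        # Path.stem drops only the final suffix, so "DSCF0001.JPG" -> "DSCF0001"
        groups[Path(name).stem].append(name)
    return groups

files = ["DSCF0001.JPG", "DSCF0001.RAF", "DSCF0002.JPG"]
print(dict(group_by_stem(files)))
# {'DSCF0001': ['DSCF0001.JPG', 'DSCF0001.RAF'], 'DSCF0002': ['DSCF0002.JPG']}
```

Groups with only one entry (like DSCF0002 above) are the JPG-only shots; the script only stacks groups with more than one file.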
[–] Lemmling@lemm.ee 16 points 6 months ago (4 children)
[–] Lemmling@lemm.ee 15 points 6 months ago (1 children)

It used to work last year 😀. The Debian yt-dlp version (stable@2023.03.04) gives a different error:

ERROR: [Einthusan] 9r1H: 9r1H: Failed to parse JSON (caused by JSONDecodeError("Expecting value in '': line 1 column 1 (char 0)")); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U

 

I am getting the following error after upgrading yt-dlp:

ERROR: [Piracy] This website is no longer supported since it has been determined to be primarily used for piracy. DO NOT open issues for it

Does anyone know of any forks that still work?

[–] Lemmling@lemm.ee 4 points 6 months ago* (last edited 6 months ago) (1 children)

This update broke my installation :(. I have not updated it in a while, so now I have to roll back until I fix this. Hope the backup works. EDIT: It was the reverse proxy. Check the developer notes before updating.

[–] Lemmling@lemm.ee 1 points 10 months ago

Hmm, strange. I am also in the EU.

[–] Lemmling@lemm.ee 1 points 10 months ago

Normal reddit is working fine for me. The error page even redirects to the main site.

[–] Lemmling@lemm.ee 2 points 11 months ago (1 children)

I am behind CGNAT and have been trying to set up a WireGuard mesh network to connect my local devices, such as a Raspberry Pi and a Proxmox server, as well as my mobile devices, using a VPS as the central point. The goal is to expose locally running applications to the internet without relying on Cloudflare, since they do not allow video streaming, and to get remote access to my local devices.

I have looked at many tutorials on this topic, but they often left me confused due to the varying iptables rules and configurations. Some tutorials include specific device names like eth0 in the iptables rules, while others use variables like %i. Additionally, some examples have special rules for SSH access, like this one. Apart from that, I am unsure what additional steps I need to take when I want to run one of the peers as an internet gateway.

Despite the confusion, I managed to achieve the basic mesh network setup without implementing any iptables rules in PostUp/PostDown. Each device in the network receives an IP address within the WireGuard subnet (10.0.0.x) and can ping the others. However, I believe the iptables rules mentioned in the tutorials are what allow access to other subnets, such as my local LAN, through the WireGuard VPN. I am still uncertain about the exact mechanism behind how these rules work in that context and how to properly configure them for my specific use case, especially considering the CGNAT situation.
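
For what it's worth, the pattern most of those tutorials seem to be aiming at is roughly the following hub config on the VPS. This is only a sketch: the interface name eth0, the 10.0.0.0/24 subnet, the port, the keys, and the Pi's 192.168.1.0/24 LAN range are all placeholder assumptions, and you would also need net.ipv4.ip_forward=1 on the VPS for any forwarding to happen.

```
# Sketch of a VPS hub wg0.conf (placeholder values throughout)
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>
# Forward traffic between peers and NAT it out to the internet.
# %i is replaced by the interface name (wg0) when wg-quick brings it up.
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
# Raspberry Pi behind CGNAT. Including its LAN subnet in AllowedIPs is
# what lets other peers reach that LAN through the tunnel (the Pi must
# also forward and NAT/route between wg0 and its LAN interface).
PublicKey = <pi-public-key>
AllowedIPs = 10.0.0.2/32, 192.168.1.0/24
```

The FORWARD rule is what lets packets pass between peers through the hub, and the MASQUERADE rule is what makes the VPS act as an internet gateway; without them the peers can only talk directly to the VPS itself.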

[–] Lemmling@lemm.ee 3 points 11 months ago (3 children)

Thanks for the nice writeup. Can you explain why you have these rules?

PostUp = iptables -t nat -A PREROUTING -p tcp -i eth0 '!' --dport 22 -j DNAT --to-destination 10.0.0.2; iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source SERVER-IP
PostUp = iptables -t nat -A PREROUTING -p udp -i eth0 '!' --dport 55107 -j DNAT --to-destination 10.0.0.2;

What happens if you remove them?
