dan

joined 2 years ago
[–] dan@upvote.au 6 points 14 hours ago* (last edited 8 hours ago)

This depends a lot on whether your employer is good or not. I get 20 days of bereavement leave per year for close family (spouse, kids, parents) and 10 days for extended family (grandparents).

[–] dan@upvote.au 2 points 15 hours ago

This is a reason why people should feel safer taking a plane or train, which is my point.

[–] dan@upvote.au 6 points 15 hours ago* (last edited 15 hours ago)

Even if flying gets a bit less safe, there would have to be far, far more plane crashes (at least three orders of magnitude more) for it to become anywhere near as dangerous as driving.

[–] dan@upvote.au 2 points 15 hours ago

Of course they're pre-2025... It's only February, so there are no full-year stats for 2025 yet.

[–] dan@upvote.au 5 points 20 hours ago* (last edited 19 hours ago) (7 children)

Flying is still the safest form of transport.

There are 1.17 deaths and 42 injuries per 100 million miles travelled by car in the USA. In comparison, there are only 0.007 injuries per 100 million miles flown on commercial planes in the USA. Even trains are more dangerous, at 0.1 injuries per 100 million miles.

You're far, far more likely to be in a car crash on your way to the airport compared to being involved in a plane crash.
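The gap in those figures is easy to quantify directly from the rates quoted above:

```python
# Injury rates per 100 million miles travelled in the USA, as quoted above.
car = 42        # injuries per 100M miles by car
train = 0.1     # injuries per 100M miles by train
plane = 0.007   # injuries per 100M miles on commercial flights

print(round(car / plane))    # 6000 - flying is ~6,000x safer than driving
print(round(train / plane))  # 14 - and ~14x safer than trains
```

A factor of ~6,000 is between three and four orders of magnitude, consistent with the claim above.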

[–] dan@upvote.au 3 points 20 hours ago

Too bad the California high-speed rail project is being threatened by President Musk.

[–] dan@upvote.au 6 points 20 hours ago (1 children)

> I can never seem to get used to that 10,000ft standard.

The standard is 8,000 feet, not 10,000. Some planes, like the Boeing 787, are pressurized to 6,000ft instead.

[–] dan@upvote.au 2 points 1 day ago* (last edited 1 day ago)

TIL EXCEPTION JOIN. I thought the SQL dialect I usually use at work for data warehouse queries (Presto) didn't have anything like this, but it does, and calls it EXCEPT: https://prestodb.io/docs/current/sql/select.html#union-intersect-except-clause

Good to know. Beats my usual approach of using WHERE x NOT IN (SELECT ...). I'm mostly a front end developer so these things are outside my comfort zone sometimes.

> When I worked for my state this is how we had some data. A master table that you then had to join like five or six exception tables to remove the “questionable” entries from the master.

At my workplace, we'd have a data pipeline (think something like Apache Airflow) that pulls the master table once the daily partition lands, joins the exception tables, then produces a "clean" output table which is the one that people would actually query. At a previous employer, we would have used a materialized/indexed view (we used SQL Server for both OLTP and OLAP). Is that not common in government?
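That EXCEPT trick is easy to demo. Here's a minimal sketch using SQLite, whose EXCEPT behaves like Presto's (set difference of two result sets); the table and column names are made up for illustration:

```python
# Demonstrates EXCEPT: remove "exception" rows from a master table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE master (id INTEGER, name TEXT);
    CREATE TABLE exceptions (id INTEGER, name TEXT);
    INSERT INTO master VALUES (1, 'ok'), (2, 'questionable'), (3, 'ok');
    INSERT INTO exceptions VALUES (2, 'questionable');
""")

# EXCEPT drops any row that also appears in the exceptions table -
# equivalent to WHERE id NOT IN (SELECT id FROM exceptions) for this data.
rows = conn.execute("""
    SELECT id, name FROM master
    EXCEPT
    SELECT id, name FROM exceptions
    ORDER BY id
""").fetchall()

print(rows)  # [(1, 'ok'), (3, 'ok')]
```

In a pipeline, the result of a query like this would be written to the "clean" output table that people actually query.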

[–] dan@upvote.au 2 points 2 days ago (1 children)

> I haven’t looked into paperless-ai yet, but I hope my machine would be beefy enough for this task

You need a GPU with a decent amount of VRAM to get LLMs working well locally. I don't have a new enough GPU to be useful - my server just has the Intel iGPU, and my desktop PC only has a GTX1080, which is from before Nvidia added Tensor cores for AI.

[–] dan@upvote.au 3 points 2 days ago* (last edited 2 days ago) (3 children)

> And that sticker also has the ASN in human readable form?

Yes! They look like this:

> So you would then add many documents at once to the feeder, and Paperless will read the QR and also split documents whenever a new code appears? What about documents you don’t want to keep physically? Is there a way to get Paperless to split them automatically as well if you add many to the feeder?

Paperless supports two different splitting methods:

  • If it encounters an ASN QR code, it'll split at that point and keep the page with the barcode
  • If it encounters a special barcode that's used as a separator sheet, it'll split at that point and delete the page with the barcode. By default it looks for a "Patch T" barcode, and you can download a page with a Patch T barcode from https://www.alliancegroup.co.uk/patch-codes.htm

so all you need to do is have a "Patch T" page between each document and it'll split them automatically.

Docs: https://docs.paperless-ngx.com/advanced_usage/#document-splitting

I'm also using paperless-ai to automatically tag and set a title for scanned documents. Very useful. I'd love to run my own AI locally using ollama, but I don't have good enough hardware so for now I'm using Google's Gemini 2.0 Flash. I trust Google's privacy policy far more than OpenAI's, Google Gemini is very cheap, and if you use the paid version they don't retain any of your data nor use it for training.

[–] dan@upvote.au 4 points 2 days ago* (last edited 2 days ago) (1 children)

> a VM with torrent client and a killswitched VPN

You can use Docker for the same setup using the --network container:vpn flag to docker run or network_mode: "container:vpn" option in docker-compose.yml where vpn is the name of the container to route through. This makes one Docker container use the network of another (the VPN one), so both containers will share the same internal IP address, and you'll have to map any ports on the VPN container rather than the torrent/whatever one. This is just as safe as a killswitched VPN.
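A minimal docker-compose sketch of that layout (the image names and port mapping are illustrative assumptions, not from this post):

```yaml
services:
  vpn:
    image: qmcgaw/gluetun          # any VPN client container works here
    cap_add:
      - NET_ADMIN
    ports:
      - "8080:8080"                # ports are mapped on the VPN container
  torrent:
    image: linuxserver/qbittorrent
    network_mode: "service:vpn"    # compose-native form; "container:vpn" also works
    depends_on:
      - vpn
```

Because the torrent container has no network stack of its own, it loses all connectivity if the vpn container stops — the same effect as a kill switch.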

Unraid has a nice UI for it when editing a Docker container:

> also meant if it ever got virused I could just roll it back

Consider using a file system that has snapshots, like ZFS. Then you can get this same behaviour for your whole system rather than just a VM :)

> is it ok to sit on the perpetual license (for a few years at a time), or are the updates really required?

I'm not sure, as the new licensing model is pretty new. I purchased Unraid in 2023, and back then, all licenses included lifetime updates. They switched to a subscription model to make the business more viable long-term and afford to hire more developers, which I definitely understand.

> It supports GPU passthrough right

It does. You can pass through any PCIe devices, so for example if you have multiple network cards, you can pass one directly to a VM (it's a bit more efficient compared to using a virtual Ethernet adapter)

[–] dan@upvote.au 5 points 2 days ago* (last edited 2 days ago) (6 children)

ScanSnap iX1600. I bought mine from B&H: https://www.bhphotovideo.com/c/product/1615326-REG/fujitsu_pa03770_b635_scansnap_ix1600_document_scanner.html. There are two scanners that usually get recommended for paperless: this one, and a cheaper (but not as nice) Brother one.

It's a really compact unit - smaller than I thought it'd be! You can put up to 50 sheets in the feeder and it scans them all, on both sides (no need to manually flip the pages). Can scan 40 pages per minute.

I've combined it with ASN (archive serial number) QR code stickers for documents that I need to keep a physical copy of. I'm using Avery 5267 stickers + Avery's online designer site to design and print them. If I need to keep a physical copy of the document, I stick a sticker on the document, scan it, and Paperless automatically detects the QR code and sets the ASN. Then I keep all the physical copies in a binder, ordered by ASN. If I need to locate a physical document, I find it in Paperless, check the ASN, then go to the right document in the binder (easy to find the right place since they're all in order).

There's just a few minor issues with the scanner, but otherwise it's perfect:

  • It was a bit expensive, at $400 in the USA.
  • You need a Windows or macOS system to do the initial setup. Setting it up is done through a desktop app rather than through the touchscreen on the device.
  • Some of the options need a computer connected to the scanner via USB, or signing up to their cloud service. However, it does support scanning to an SMB share without a computer connected, which is all I needed. I have my paperless-ngx "consume" directory shared via Samba. You just need to delete the default scanning profiles and add a network scan (SMB) one.
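For reference, sharing the consume directory only takes a few lines in smb.conf — a hedged sketch, with the share name, path, and user all being assumptions:

```ini
# Hypothetical smb.conf fragment for a paperless-ngx consume share.
[paperless-consume]
   path = /srv/paperless/consume
   writable = yes
   valid users = scanner
   create mask = 0664
```

The scanner's network-scan profile then points at this share, and paperless-ngx picks up anything written into the directory.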
 

I noticed that Spectacle has an option to upload to Imgur and Nextcloud. Is there a way to allow it to upload to an SFTP server?

Ideally I'd like for it to upload the file via SFTP then put the URL on my clipboard, which is what I do with ShareX on Windows.

 

I love Sentry, but it's very heavy. It runs close to 50 Docker containers, some of which use more than 1GB RAM each. I'm running it on a VPS with 10GB RAM and it barely fits on there. They used to say 8GB RAM is required but bumped it to 16GB RAM after I started using it.

It's built for large-scale deployments and has a nice scalable enterprise-ready design using things like Apache Kafka, but I just don't need that since all I'm using it for is tracking bugs in some relatively small C# and JavaScript projects, which may amount to a few hundred events per week if that. I don't use any of the fancier features in Sentry, like the live session recording / replay or the performance analytics.

I could move it to one of my 16GB or 24GB RAM systems, but instead I'm looking to evaluate some lighter-weight systems to replace it. What I need is:

  • Support for C# and JavaScript, including mapping stack traces to original source code using debug symbols for C# and source maps for JavaScript.
    • Ideally supports React component stack traces in JS.
  • Automatically group the same bugs together, if multiple people hit the same issue
    • See how many users are affected by a bug
  • Ignore particular errors
  • Mark a bug as "fixed in next release" and reopen it if it's logged again in a new release
  • Associate bugs with GitHub issues
  • Ideally supports login via OpenID Connect

Any suggestions?

Thanks!

 

On a small form factor PC with an i5-9500, Debian 12, 6.2.16 kernel, running Proxmox, powertop shows the following idle stats:

PowerTOP 2.14     Overview   Idle stats   Frequency stats   Device stats   Tunables   WakeUp


           Pkg(HW)  |            Core(HW) |            CPU(OS) 0
                    |                     | C0 active   2.8%
                    |                     | POLL        0.0%    0.0 ms
                    |                     | C1          1.1%    0.4 ms
C2 (pc2)    7.2%    |                     |
C3 (pc3)    5.5%    | C3 (cc3)    0.0%    | C3          0.1%    0.1 ms
C6 (pc6)    1.5%    | C6 (cc6)    1.9%    | C6          2.2%    0.6 ms
C7 (pc7)   75.2%    | C7 (cc7)   92.8%    | C7s         0.0%    0.0 ms
C8 (pc8)    0.0%    |                     | C8         21.5%    2.5 ms
C9 (pc9)    0.0%    |                     | C9          0.0%    0.0 ms
C10 (pc10)  0.0%    |                     |
                    |                     | C10        72.8%   12.5 ms
                    |                     | C1E         0.4%    0.2 ms

                    |            Core(HW) |            CPU(OS) 1
                    |                     | C0 active   1.4%
                    |                     | POLL        0.0%    0.0 ms
                    |                     | C1          0.7%    0.9 ms
                    |                     |
                    | C3 (cc3)    0.1%    | C3          0.1%    0.2 ms
                    | C6 (cc6)    1.0%    | C6          1.1%    0.8 ms
                    | C7 (cc7)   96.3%    | C7s         0.0%    0.0 ms
                    |                     | C8         18.9%    2.9 ms
                    |                     | C9          0.0%    0.0 ms
                    |                     |
                    |                     | C10        78.3%   24.8 ms
                    |                     | C1E         0.0%    0.0 ms
...

On a custom-built server with an i5-13500, Asus Pro WS W680M-ACE SE motherboard, Unraid (which uses Slackware), 6.1.38 kernel, it shows the following output:

PowerTOP 2.15     Overview   Idle stats   Frequency stats   Device stats   Tunables   WakeUp


           Pkg(HW)  |            Core(HW) |            CPU(OS) 0   CPU(OS) 1
                    |                     | C0 active   5.9%        0.9%
                    |                     | POLL        0.1%    0.0 ms  0.0%    0.0 ms
                    |                     | C1_ACPI    14.2%    0.2 ms  1.0%    0.1 ms
C2 (pc2)    0.0%    |                     | C2_ACPI    39.2%    0.8 ms 27.0%    0.9 ms
C3 (pc3)    0.0%    | C3 (cc3)    0.0%    | C3_ACPI    33.6%    1.2 ms 69.7%    3.0 ms
C6 (pc6)    0.0%    | C6 (cc6)    1.1%    |
C7 (pc7)    0.0%    | C7 (cc7)    0.0%    |
C8 (pc8)    0.0%    |                     |
C9 (pc9)    0.0%    |                     |
C10 (pc10)  0.0%    |                     |

                    |            Core(HW) |            CPU(OS) 2   CPU(OS) 3
                    |                     | C0 active  10.4%        0.5%
                    |                     | POLL        0.0%    0.0 ms  0.0%    0.0 ms
                    |                     | C1_ACPI    17.4%    0.2 ms  0.4%    0.2 ms
                    |                     | C2_ACPI    14.3%    0.8 ms  4.9%    0.6 ms
                    | C3 (cc3)    0.0%    | C3_ACPI    41.8%    5.4 ms 93.5%    5.5 ms
                    | C6 (cc6)    5.9%    |
                    | C7 (cc7)   26.7%    |
                    |                     |
                    |                     |
                    |                     |

                    |            Core(HW) |            CPU(OS) 4   CPU(OS) 5
                    |                     | C0 active  11.7%        0.2%
                    |                     | POLL        0.0%    0.1 ms  0.0%    0.0 ms
                    |                     | C1_ACPI    19.0%    0.1 ms  0.0%    0.0 ms
                    |                     | C2_ACPI    11.3%    0.7 ms  0.0%    0.0 ms
                    | C3 (cc3)    0.0%    | C3_ACPI    39.6%    7.7 ms 99.6%    7.0 ms
                    | C6 (cc6)    1.3%    |
                    | C7 (cc7)   25.4%    |
...

Both systems have C-states enabled in the BIOS.

I have a few questions I'm hoping someone can help with:

  • Why does the older system show more C-states in the right-most "CPU(OS)" column?
  • What does it mean when they're suffixed with "_ACPI" like in the output from the new system?
  • How do I debug the new system not hitting any CPU package C-states?

I can't find any documentation about this, neither on the man page nor on Intel's site (the official powertop URL https://01.org/powertop doesn't go anywhere useful any more).

Thanks!

 

Google Analytics is broken on a bunch of my sites thanks to the GA4 migration. Since I have to update everything anyway, I'm looking at the possibility of replacing Google Analytics with something I self-host that's more privacy-focused.

I've tried Plausible, Umami and Swetrix (the latter of which I like the most). They're all very lightweight and most are pretty efficient due to their use of a column-oriented database (ClickHouse) for storing the analytics data - makes way more sense than a row-oriented database like MySQL for this use case.

However, these systems are all cookie-less. This is usually fine; however, one of my sites is commonly used in schools on their computers. Cookieless analytics works by tracking sessions based on IP address and user-agent, so in places like schools, with one external IP and the same browser on every computer, it just looks like one user in the analytics. I'd like to know the actual number of users.
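The general technique (not any specific product's implementation) looks roughly like this, which makes the school problem obvious — every lab machine produces the same hash:

```python
# Sketch of how cookie-less analytics typically derives a visitor ID:
# a salted hash of IP + user-agent, rotated daily. Function and salt
# names are made up for illustration.
import hashlib
from datetime import date

def visitor_id(ip: str, user_agent: str, salt: str = "rotates-daily") -> str:
    raw = f"{salt}|{date.today().isoformat()}|{ip}|{user_agent}"
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

# Two different students on identical school computers behind one NAT IP:
pc1 = visitor_id("203.0.113.5", "Mozilla/5.0 (ChromeOS) Chrome/120")
pc2 = visitor_id("203.0.113.5", "Mozilla/5.0 (ChromeOS) Chrome/120")
print(pc1 == pc2)  # True - two people counted as one visitor
```

A system that accepts an explicitly supplied session ID (e.g. from a first-party cookie) sidesteps this entirely, which is what I'm after.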

I'm looking for a similarly lightweight analytics system that does use cookies (first-party cookies only) to handle this particular use case. Does anyone know of one?

Thanks!

Edit: it doesn't have to actually be a cookie - just being able to explicitly specify a session ID instead of inferring one based on IP and user-agent would suffice.

 

I'm replacing an SFF PC (HP ProDesk 600 G5 SFF) I'm using as a server with a larger one that'll function as a server and a NAS, and all I want is a case that would have been commonplace 10-15 years ago:

  • Fits an ATX motherboard.
  • Fits at least 4-5 hard drives.
  • Is okay sitting on its side instead of upright (or even better, is built to be horizontal) since it'll be sitting on a wire shelving unit (replacing the SFF PC here: https://upvote.au/post/11946)
  • No glass side panel, since it'll be sitting horizontally.
  • Ideally space for a fan on the left panel

It seems like cases like this are hard to find these days. The two I see recommended are the Fractal Design Define R5 and the Cooler Master N400, both of which are quite old. The Streacom F12C was really nice but it's long gone now, having been discontinued many years ago.

Unfortunately I don't have enough depth for a full-depth rackmount server; I've got a very shallow rack just for networking equipment.

Does anyone have recommendations for any cases that fit these requirements?

My desktop PC has a Fractal Design Define R4 that I bought close to 10 years ago... I'm tempted to just buy a new case for it and repurpose the Define R4 for the server.

 

Sorry for the long post. tl;dr: I've already got a small home server and need more storage. Do I replace an existing server with one that has more hard drive bays, or do I get a separate NAS device?


I've got some storage VPSes "in the cloud":

  • 10TB disk / 2GB RAM with HostHatch in LA
  • 100GB NVMe / 16GB RAM with HostHatch in LA
  • 3.5TB disk / 2GB RAM with Servarica in Canada

The 10TB VPS has various files on it - offsite storage of alert clips from my cameras, photos, music (which I use with Plex on the NVMe VPS via NFS), other miscellaneous files (using Seafile), backups from all my other VPSes, etc. The 3.5TB one is for a backup of the most important files from that.

The issue I have with the VPSes is that since they're shared servers, there are limits on how much CPU I can use. For example, I want to run PhotoStructure for all my photos, but it needs to analyze all the files initially. I limit Plex to a maximum of 50% of one CPU, but limiting things like PhotoStructure would make them way slower.

I've had these for a few years. I got them when I had an apartment with no space for a NAS, expensive power, and unreliable Comcast internet. Times change... Now I've got a house with space for home servers, solar panels so running a server is "free", and 10Gbps symmetric internet thanks to a local ISP, Sonic.

Currently, at home I've got one server: an HP ProDesk SFF PC with a Core i5-9500, 32GB RAM, 1TB NVMe, and a single 14TB WD Purple Pro drive. It records my security cameras (using Blue Iris) and runs home automation stuff (Home Assistant, etc). It pulls around 41 watts with its regular load: 3 VMs, ~12% CPU usage, constant ~34Mbps traffic from the security cameras, all being written to disk.

So, I want to move a lot of these files from the 10TB VPS into my house. 10TB is a good amount of space for me, maybe in RAID5 or whatever is recommended instead these days. I'd keep the 10TB VPS for offsite backups and camera alerts, and cancel the other two.

Trying to work out the best approach:

  1. Buy a NAS. Something like a QNAP TS-464 or Synology DS923+. Ideally 10GbE since my network and internet connection are both 10Gbps.
  2. Replace my current server with a bigger one. I'm happy with my current one; all I really need is something with more hard drive bays. The SFF PC only has a single drive bay, its motherboard only has a single 6Gbps SATA port, and the only PCIe slots are taken by a 10Gbps network adapter and a Google Coral TPU.
  3. Build a NAS PC and use it alongside my current server. TrueNAS seems interesting now that they have a Linux version (TrueNAS Scale). Unraid looks nice too.

Any thoughts? I'm leaning towards option 2 since it'll use less space and power compared to having two separate systems, but maybe I should keep security camera stuff separate? Not sure.

 

I have a 10Gbps internet connection. On a system with a 10Gbps Ethernet card, I can get ~8Gbps down and ~6Gbps up:

I'd expect this to easily max out a 2.5Gbps network connection. However, while the upload is maxed (or close to it), I can only ever get ~1.0 to 1.5Gbps down:

Both tests were performed on the same system. The only difference is that the first one uses a TRENDnet 10Gbps PCIe network card (which uses an Aquantia AQC107 chipset) whereas the second one uses the onboard NIC on my motherboard (Intel I225-V chipset).

This is consistent across two devices that have 10Gbps ports and two devices that have 2.5Gbps ports.

I'm using an AdTran 622v ONT provided by my internet provider, a TP-Link ER8411 router, and a MikroTik CRS312-4C+8XG-RM switch. I'm using CAT6 cabling, except for the connection between the router and the switch which uses an SFP+ DAC cable.

I haven't been able to figure it out. The 'slower' speeds are still great, I just don't understand why it can't achieve more than 1.5Gbps down over a 2.5Gbps network connection.

Any ideas?

51
submitted 2 years ago* (last edited 2 years ago) by dan@upvote.au to c/selfhosted@lemmy.world
 

I couldn't find a "Home Networking" community, so this seemed like the best place to post :)

My house has this small closet in the hallway, and I thought it'd make a perfect place to put networking equipment. I got an electrician to install power outlets in it, ran some CAT6 myself (through the wall, down into the crawlspace, to several rooms), and now I finally have a proper networking setup that isn't just cables running across the floor.

The rack is a basic StarTech two-post rack (https://www.amazon.com/gp/product/B001U14MO8/) and the shelving unit is an AmazonBasics one that ended up perfectly fitting the space (https://www.amazon.com/gp/product/B09W2X5Y8F/).

In the rack, from top to bottom (prices in US dollars):

  • TP-Link ER8411 10Gbps router. My main complaint about it is that the eight 'RJ45' ports are all Gigabit, and there are only two 10Gbps ports (one SFP+ for WAN, and one SFP+ for LAN). It can definitely reach 10Gbps NAT throughput though. $350
  • Wiitek SFP+ to RJ45 module for connecting Sonic's ONT (which only has an RJ45 port), and 10Gtek SFP+ DAC cable to connect router to switch.
  • MikroTik CRS312-4C+8XG-RM managed switch (runs RouterOS). 12 x 10Gbps ports. I bought it online from Europe, so it ended up being ~$520 all-in, including shipping.
  • Cable Matters 24-port keystone patch panel.
  • TP-Link TL-SG1218MPE 16-port Gigabit PoE switch. 250 W PoE power budget. Used for security cameras - three cameras installed so far.
  • Tripp Lite 14 outlet PDU.

Other stuff:

  • AdTran 622v ONT provided by my internet provider (Sonic), mounted to the wall.
  • HP ProDesk 600 G5 SFF PC with Core i5-9500. Using it for a home server running Home Assistant, Blue Iris, Node-RED, Zigbee2MQTT, and a few other things. Bought it off eBay for $200.
    • Sonoff Zigbee dongle plugged in to the front USB port
  • (next to the PC) Raspberry Pi 4B with SATA SSD plugged in to it. Not doing anything at the moment, as I migrated everything to the PC.
  • (not pictured) Wireless access point is just a basic Netgear one I bought from Costco a few years ago. It's sitting on the top shelf. I'm going to replace it with a TP-Link Omada ceiling-mounted one once their Wi-Fi 7 access points have been released.

Speed test: https://www.speedtest.net/my-result/d/3740ce8b-bba5-486f-9aad-beb187bd1cdc

Edit: Sorry, I don't know why the image is rotated :/ The file looks fine on my computer.

 

Hi!

I just created a Lemmy server at https://upvote.au/ for my personal use. I created a test community with a test post, but searching for it in Mastodon doesn't work. I tried searching for both @dan@upvote.au and !dan@upvote.au. I see the requests in the Nginx log:

172.19.0.5 - - [13/Jun/2023:22:57:06 -0700] "GET /.well-known/webfinger?resource=acct:test@upvote.au HTTP/1.1" 200 312 "-" "http.rb/5.1.1 (Mastodon/4.1.2; +https://toot.d.sb/)"
172.19.0.5 - - [13/Jun/2023:22:57:06 -0700] "GET /c/test HTTP/1.1" 200 10033 "-" "http.rb/5.1.1 (Mastodon/4.1.2; +https://toot.d.sb/)"

However, no results appear in Mastodon.

Any ideas?
