blackstrat

joined 2 years ago
[–] blackstrat@lemmy.fwgx.uk 11 points 2 days ago (1 children)

The Matrix for my group of millennials

[–] blackstrat@lemmy.fwgx.uk 2 points 2 days ago (1 children)

Yes. But I can't write it down or use any words to even attempt to describe it because then it wouldn't be "100% original" 🙄

[–] blackstrat@lemmy.fwgx.uk 2 points 2 days ago (1 children)

All my thoughts are my own.

[–] blackstrat@lemmy.fwgx.uk 20 points 2 days ago (5 children)

The new caps they're putting on plastic bottles are awful. They make it very hard to put the cap back on properly, and we've had a few incidents where a cap looked like it was on but was actually cross-threaded and leaked. I just rip them off now.

Also, why is the glue on cereal boxes so damn strong now? I end up tearing the box more often than not these days and that never used to be the case.

[–] blackstrat@lemmy.fwgx.uk 1 points 3 days ago (1 children)

I really only used it for syncing photos from my phone, so I went to Syncthing. I found the NC web interface far too slow to be of any use, so I just mount network shares over NFS.

[–] blackstrat@lemmy.fwgx.uk 1 points 6 days ago (3 children)

And that is why I no longer run Nextcloud.

[–] blackstrat@lemmy.fwgx.uk 59 points 6 days ago (2 children)

I ain't typing out 'please'; I will compromise on alias plz='sudo'

Software engineer. In the past mostly C++, now it's mostly C#. Lots of databases too.

Renault are a true embarrassment.

Thank you to everyone who helped here. The monitor arrived this evening. Got it all set up with 2x 27" 1440p screens on arms, connected over DP. KDE identified it straight away and ran at the full 180Hz with no configuration. The only thing I had to do was set the scaling to 100% instead of 125%. Played some Doom at a solid 180fps and it's really nice. Then some Metro Exodus, where I get between 60 and 110fps; it all looks lovely. The colours are pretty similar to my 27" Dell, but I haven't tried matching them 100% accurately.

Well done Linux devs for making this possible and easy.

P.S. I should have put my second monitor on an arm years ago!

 

I currently have a dual monitor setup of a Dell 24" and 27", neither of which is variable refresh rate. I think the 24" monitor is getting on for 17 years old and has had issues with lines on it when cold for the past 13 years. But it's my second monitor, and once it's warmed up it's not too bad. Well, I think the time has come to retire it, let my main 27" become my secondary monitor, and buy a new primary. I'm interested in photography, so accurate colours are important to me, which is why I bought these monitors in the first place. But I also play games, so I also want something with gaming features like FreeSync and >60Hz refresh rates. I've got my eye on an ASUS ROG Strix XG27ACS, FWIW.

I am running EndeavourOS (kernel 6.13.1) with KDE Plasma (6.2.5) on Wayland with a Radeon RX 5700 XT. My question is: will my setup allow me to run one non-FreeSync monitor at 60Hz and one FreeSync monitor using VRR at up to 180Hz, so I can get all the benefits of the new monitor when gaming without having to turn the second monitor off?

I believe that if I just had the one monitor I'd have no issues and my setup would be plug and play. But since VRR on Linux has been a long time coming, with many issues along the way, I'm concerned this is an "everything must support it or it won't work" scenario.
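
In case it helps anyone answering: Plasma 6 appears to expose VRR per output, so my hope is I can just leave it on automatic for the new screen. A sketch of checking and setting it with kscreen-doctor (the output name DP-2 is a placeholder, and I haven't verified the exact vrrpolicy syntax on my own install yet):

$ kscreen-doctor -o                                # list outputs and their current settings
$ kscreen-doctor output.DP-2.vrrpolicy.automatic   # enable VRR only when an app requests it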

Grateful for your insights and advice.

 

As per the title really. The whole AI revolution has largely passed me by, but the idea of self-hosting something on a small box like this appeals. I don't have an Nvidia GPU in my PC and never will, and as far as I can tell that pretty much rules out doing anything AI there.

I guess I can run it as a headless machine and connect over SSH or whatever web interface the AI models provide? I'm assuming running Proxmox on it won't work that well.

My main idea for AI is identifying photos with certain properties to aid in tagging 20+ years and tens of thousands of photos.
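
If it pans out, my rough plan would be to run a small vision model locally and feed it photos one at a time. A sketch assuming an Ollama-style setup with the llava model (I haven't tried this yet, so treat the workflow as a guess):

$ ollama pull llava
$ ollama run llava "List keywords describing this photo: /photos/2006/holiday/IMG_1234.jpg"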

 

I haven't tagged my photos in 22 years, relying solely on folders with a brief description and the date. But now I realise tagging might actually be a good idea going forward (and back), as it's definitely getting unruly.

I'm using PhotoLab and have started going back through and tagging people and places mostly, plus things like "landscape", "flower" etc. Fairly high level. I have "home" and "garden" tags which cover a lot of photos, as well as a "day out" tag for when we visited some place.

I'm sure some people add way more tags - do you, and is it useful to you?

 

I have a ZFS RAIDZ2 array made of 6x 2TB disks with power-on hours between 40,000 and 70,000. This is used just for data storage of photos and videos, not OS drives. Part of me is a bit concerned at those hours, considering they're a right old mix of desktop drives and old WD Reds. I keep them on 24/7, so they're not too stressed in terms of power cycles, but they have been through a few RAID5 rebuilds in the past.

I'm considering swapping to 2x 'refurbed' 12TB enterprise drives and running ZFS RAIDZ1. So even though they'd have a decent number of hours on them, they'd be better quality drives, and fewer disks means less chance of any one failing (I have good backups).

I don't feel like staying with my current setup will be worth it the next time one of these drives dies, so I may as well change over now before it happens?

Also, the 6x disks I have at the moment are really crammed into my case in a hideous way, so from an aesthetic POV (not that I can actually see inside the solid case in a rack in the garage), it'll be nicer.
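
One thing I've realised while reading around: a two-disk RAIDZ1 has the same usable capacity and redundancy as a mirror, so a plain mirror seems to be the more conventional layout for this. Creating it would look something like the following (pool name and device IDs are illustrative):

$ sudo zpool create -o ashift=12 storage mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2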

 

As per the title really. I'd be looking to pick one up second hand and use it on EndeavourOS.

Is there a really worthwhile boost in performance in moving to the 6700 XT, or should I wait a bit longer and get something higher end? As a Linux user, I will not consider Nvidia.

 

I previously asked here about moving to ZFS. So, a week on, I'm here with an update. TL;DR: surprisingly simple upgrade.

I decided to buy another HBA that came pre-flashed in IT mode and without an onboard BIOS (so that server boot-ups would be quicker - I'm not using the HBA-attached disks as boot disks). For £30 it seemed worth the cost to avoid the hassle of flashing it, plus if it all goes wrong I can revert back.

I read a whole load about Proxmox PCIe passthrough, most of it out of date, it would seem. I am running an AMD system, and there are many suggestions online to set the grub parameter amd_iommu=on, which, when you read up on the kernel parameters for the 6.x kernel Proxmox uses, isn't a valid value. I think I also read that there's no need to set iommu=pt on AMD systems. But it's all very confusing, as most wikis that should know better are very Intel-specific.

I eventually saw a YouTube video of someone running Proxmox 8 on AMD wanting to do the same as I was, and they showed that if IOMMU isn't set up, you get a warning in the web GUI when adding a device. Well, that's interesting - I don't get that warning. I am also lucky that the old HBA is in its own IOMMU group, so it should pass through easily without breaking anything. I hope the new one will be the same.
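
For anyone wanting to check their own groups, you can read them straight out of sysfs without any Proxmox-specific tooling:

for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"   # show each device in the group with vendor/device IDs
  done
done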

Worth noting that there are a lot of bad YouTube videos with people giving bad advice on how to configure a VM for ZFS/TrueNAS use - you need the disks passed through properly so the VM's OS has full control of them. That's why an IT-mode HBA is required over an IR one, but that alone doesn't mean you can't still set the config up wrong.

I also discovered along the way that my existing file server VM was not set up to handle PCIe passthrough. The default machine type that Proxmox suggests - i440fx - doesn't support it, so that needs changing to q35, and the VM also has to be set up with UEFI. Well, that's more of a problem, as my VM is using BIOS. At this point it became easier to spin up a new VM with the correct settings and redo its configuration. Other options to be aware of: memory ballooning needs to be off and the CPU set to host.
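
For reference, those settings map to a handful of qm commands on the Proxmox host; something like this, where the VM ID and PCI address are placeholders for your own (UEFI also needs an EFI disk adding, which the GUI prompts for):

qm set 101 --machine q35 --bios ovmf --cpu host --balloon 0   # q35 machine, UEFI firmware, host CPU, no ballooning
qm set 101 --hostpci0 0000:03:00.0,pcie=1                     # pass the HBA through as a PCIe device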

At this point I haven't installed the new HBA yet.

Install a fresh Ubuntu Server 24.04 LTS and it all feels very snappy. Makes me wonder about my old VM; I think it might be an original install of 16.04 that I have upgraded every 2 years, and it was migrated over from my old ESXi R710 server a few years ago. Fair play to it, I have had zero issues with it in all that time. Ubuntu Server is just absolutely rock solid.

Not too much to configure on this VM - SSH, NFS exports, etckeeper, a couple of users and groups. I use etckeeper on all my VMs, so I have a record of their /etc that I can look back through, which has come in handy on several occasions.

Now almost ready to swap the HBA, after I run the final restic backup, which only takes 5 minutes (I bloody love restic!). Also update the fstabs of the other VMs so they don't try to mount the file server, and stop a few from auto-starting on boot, just temporarily.
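
The backup itself is a one-liner; roughly this, with the repo path and source directory being illustrative rather than my actual layout:

$ restic -r /mnt/usb/restic-repo backup /srv/storage
$ restic -r /mnt/usb/restic-repo snapshots   # double-check the new snapshot landed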

Turn the server off and get inside to swap the cards over. Quite straightforward, other than the SAS ports being in a worse place for ease of access. Power back on. Amazingly it all came up - last time I tried to add an NVMe on a PCIe card it killed the system.

Set the PCIe passthrough for the HBA on the new VM. Luckily the new HBA is in its own IOMMU group (maybe that's somehow tied to the PCIe slot?). Make sure to tick the PCIe flag so it's not treated as PCI - remember PCI cards?!

Now the real deal. Boot the VM, SSH in. fdisk -l lists all the disks attached. Well, this is good news! Try to create the zpool: zpool create storage raidz2 /dev/disk/by-id/XXXXXXX ...... Hmmm, can't do that, as it knows the disks were RAID members and mdadm has tried to assemble them, so they're in use. Quite a bit of investigation later, with a combination of wipefs -af /dev/sdX, umount /dev/md126, mdadm --stop /dev/md126 and shutdown -r now, the RAIDness of the disks was gone and I could re-run the zpool command. And that worked! Note: I forgot to add ashift=12 to my zpool creation command - I have only just noticed this as I write - but thankfully it was clever enough to pick the correct one.

$ zpool get all | grep ashift
storage  ashift                         0                              default

Hmmm, what's 0? Apparently an ashift of 0 on the pool just means "auto-detected", so let's check what actually got written to disk:

$ sudo zdb -l /dev/sdb1 | grep ashift
ashift: 12

Phew!!!
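
For anyone doing the same, the whole dance boiled down to roughly this (the md device and disk letters are from my system; yours will differ):

$ sudo mdadm --stop /dev/md126     # stop the auto-assembled old RAID array
$ sudo wipefs -af /dev/sdX         # wipe the RAID signatures, once per disk
$ sudo zpool create -o ashift=12 storage raidz2 /dev/disk/by-id/XXXXXXX ...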

I have also passed through the USB backup disks I have, mounted them and started the restic restore. So far it's 1.503TB in after precisely 5 hours, which works out at about 84MB/s - seems OK.

I'll set up monthly scrub cron jobs tomorrow.
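
Probably just something like this in root's crontab (the zpool path varies by distro - check with which zpool):

# scrub the pool at 03:00 on the 1st of every month
0 3 1 * * /usr/sbin/zpool scrub storage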

P.S. I tried TrueNAS out in a VM with no disks to see what it's all about. It looks very nice, but I don't need any of that fanciness. I've always managed my VMs over SSH, which I've felt is lighter weight and less open to attack.

Thanks for stopping by my Ted Talk.

46
Anyone running ZFS? (lemmy.fwgx.uk)
submitted 4 months ago* (last edited 4 months ago) by blackstrat@lemmy.fwgx.uk to c/selfhosted@lemmy.world
 

At the moment I have my NAS set up as a Proxmox VM, with a hardware RAID card handling 6x 2TB disks. My VMs run on NVMes, with the NAS VM handling the data storage via the RAIDed volume passed directly through to it in Proxmox. I run it as one large ext4 partition. Mostly photos, personal docs and a few films. Only I really use it. My desktop and laptop mount it over NFS. I have restic backups running weekly to two external HDDs. It all works pretty well and has for years.

I am now getting ZFS curious. I know I'll need to flash the HBA to IT mode, or get another one. I'm guessing it's best to create the zpool in Proxmox and pass that through to the NAS VM? Or would it be better to pass the individual disks through to the VM and manage the zpool from there?

 

CDs are in every way better than vinyl records. They are smaller, have much higher quality audio and a lower noise floor, and don't wear out from being played. The fact that CD sales are behind vinyl is a sign that the world has gone mad. Being able to rip and stream your own CDs is fantastic, because remasters are generally not good and streaming services typically only have remastered versions, not the originals. On a streaming service you have no control over which version of an album you're served, or whether it'll still be there tomorrow. Not an issue with physical media.

The vast majority of people listen to music on equipment that produces poor quality audio, especially those that stream using earbuds. It makes me very sad that people don't care that what they're listening to could sound so much better, especially if played through a hi-fi from a CD player, or using half-decent (not Beats) headphones.

There's plenty of good-sounding and well-produced music out there, but it's typically played back through the equivalent of two cans and some string. I'm not sure people remember how good good music can sound when played back through good kit.

 

I've run my own email server for a few years now without too many troubles. I also pay for a ProtonMail account that's been very good. But I've always struggled with PGP keys for encrypting messages to non-Proton users - basically everyone. The PGP key distribution setup always seemed half-baked and a bit broken, relying on central key servers.

Then I noticed that emails I sent from my personal address to my company-provided one were being encrypted, even though I wasn't doing anything to achieve this. That got me curious as to why, which led me to WKD (Web Key Directory). It's such a simple idea for making public keys discoverable and downloadable, and having now set it up for my own emails, it works really well.

It's basically a way of discovering the public key for an email address by making it available over HTTPS at a URL that can be calculated from the email address itself. So if your email is name@example.com, the public key can be hosted at (in this case) https://openpgpkey.example.com/.well-known/openpgpkey/example.com/hu/pmw31ijkbwshwfgsfaihtp5r4p55dzmc?l=name - a URL you can derive with a command like gpg-wks-client --print-wkd-url name@example.com. You just need an email client that can do this lookup and find the key for you automatically. When setting up your own server, you generate a folder structure from the keys in your gpg keyring, move it to your webserver, and you're basically good to go. You can then verify the key is discoverable with env GNUPGHOME=$(mktemp -d) gpg --locate-keys --auto-key-locate clear,wkd,nodefault name@example.com.
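
The gpg-wks-client helper that ships with GnuPG does most of the heavy lifting; roughly this (flags per my reading of the man page, so double-check against your version, and FINGERPRINT is a placeholder for your key's fingerprint):

$ gpg-wks-client --print-wkd-hash name@example.com            # the z-base-32 hash used in the URL
$ gpg-wks-client --print-wkd-url name@example.com             # the full URL a client will fetch
$ gpg-wks-client --install-key FINGERPRINT name@example.com   # write out the directory structure to host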

I have this working with Thunderbird, which now prompts me to do the discovery step when I enter an email address that doesn't have an associated key. On Android, I've found OpenKeychain can also do a search based just on the email address, which apps like K-9 Mail (soon to become Thunderbird for Android) can then use.

Anyway, I thought this was pretty cool and was excited to see such an improvement in seamless encryption integration. It'd be nicer if, in Thunderbird and K-9, it all happened as soon as you enter an email address, rather than there being a few extra steps to jump through to perform the search and confirm the keys. But it's a major improvement.

Does your email provider have WKD set up and working, or do you use it already?

 

Given there's been a bit of talk about IPv6 around here recently, I gave implementing it a really good shot this past week. I spent 3 days getting up to speed, reading loads and trying various different things. But I am now back to IPv4-only, because I just can't get IPv6 to do what I want, and no amount of searching has made me think what I want to do is even possible.

Some background on the IPv4 network I run at home: I run OPNsense on a Proxmox server. I have a few services publicly available using port forwarding. I run several VLANs for IoT, VoIP, cameras etc. I use a bunch of firewall rules that are specific to client devices on the network. For example, I have a rule that blocks YouTube on the kids' tablets and the TV. I have a special rule around DNS for the wife, as she doesn't want to use the Pi-hole blocking features. These rules are made possible because the DHCP server gives those devices a fixed IP, so I can create a firewall alias and rule based on that.

None of these things on my existing network are particularly difficult to configure, and they run really well.

What I want from IPv6 is:

  1. All devices to use IPv6, including Android devices.
  2. To have the same firewall rules configured, and not have them be easily bypassed.
  3. To use privacy addresses, as I don't want to make every device uniquely trackable over the internet.
  4. To be able to cope seamlessly with changes to the ISP-provided /48 prefix.
  5. Have internal DNS make accessing intranet devices easy.
  6. To ensure the privacy of individual devices on my network by avoiding individual device tracking.

What I've tried:

  1. Using DHCPv6, but this excludes Android devices. So that's out.
  2. Using NAT (to avoid tracking of individual devices) and fd00::/8 addresses, but this is pointless, as those addresses are lower priority than IPv4 (FFS!).
  3. SLAAC just seems a non-starter.

Additional: I don't think the problem is that I'm "thinking about it all wrong for IPv6". I may have a skill issue, hence this question.

As far as I can tell, to achieve requirement 1) you must use SLAAC, and SLAAC without privacy extensions doesn't allow for 6).
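
(On Linux clients at least, privacy extensions are just a sysctl away - these are the standard knobs, though many distros enable them by default these days:)

$ sudo sysctl -w net.ipv6.conf.all.use_tempaddr=2       # prefer RFC 4941 temporary addresses
$ sudo sysctl -w net.ipv6.conf.default.use_tempaddr=2   # apply to newly created interfaces too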

Changes to the external ISP prefix assignment impact MY INTERNAL NETWORK (this just seems insane). And as far as I can tell there's no easy way around this, especially for servers with static addresses, which would (if using SLAAC) have to be manually reconfigured.

I can't see how DNS would get updated either, whether that's Unbound running on OPNsense or the Pi-hole. If I go for SLAAC with privacy extensions and keep paying my ISP for a static IP (v4 & v6), then I can't implement any firewall rules for specific devices, as devices will change their IP regularly. And it's even worse if I don't pay for a static IPv6 prefix.

I don't think anything I'm trying to do is particularly strange or unusual, but 26 years after its introduction I don't see that IPv6 can meet these requirements. And one of the leading firewall routers, especially in the homelab, doesn't have answers to these questions either.

Can you suggest a way to meet all 6 requirements I have with IPv6?

 

I noticed that I wasn't getting many emails (I need better monitoring), and discovered that my iRedMail server was poorly.

I have spent far too much time and energy on getting it back and working these past few days, but I've finally got it back up and stable.

Some background: I've had iRedMail running for probably going on 6 years now and have had very few issues at all. It runs on an Ubuntu VM on Proxmox, and originally ran in the same VM on ESXi (I migrated it over). I haven't changed anything to do with the VM for years other than the Ubuntu LTS upgrades every 2-3 years; it's always been there and stable. I occasionally update the Ubuntu OS and iRedMail itself, no problems.

Back to the problem... I noticed that Postfix was running OK, but was showing a bunch of errors about ClamAV not being able to connect. Odd. I then noticed that Amavis was not running and seemed to have just died. I couldn't find any reason in any log file. Very strange. A bunch of hunting, checking config file history in the git repo. Nothing significant for years.

I found that restarting the server got everything back up and running. Great, let's go to bed... Woke up the next morning to find that Amavis was dead again - it only lasted about 40 minutes and then just closed for no reason. Right, OK, time to turn off ClamAV, as that kept coming up while I was looking; follow the guide, all is well. Hmm, this seems to be working, but I don't really want ClamAV off. A whole bunch of duck-duck-going later and I still couldn't figure out a root cause.

And then it clicked: the thing causing Amavis to close was that the VM was running out of memory and the process was being killed. Bumped the memory up to 4GB, re-enabled everything as it originally was and... it seems to have worked. Been going strong for over a day now.
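
(If you're debugging something similar, the kernel log is where the smoking gun lives:)

$ journalctl -k | grep -i 'out of memory'   # OOM killer reports via the journal
$ dmesg | grep -i oom                       # same info straight from the kernel ring buffer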

I don't know what changed recently to push the memory requirements up, but at least it's now fixed, and it took all of 2 minutes to adjust.

The joys of selfhosting!

 

There are 3 things that really stand out for me that I would say made a massive difference to my life:

  1. Cordless screwdriver. Bought the day after building a flat-pack bed with a crappy screwdriver that just shredded my hand. Thought it was frivolous at the time, but I've used it so much since. It's light, small enough to fit in my pocket and good for 90% of DIY tasks.

  2. Tassimo coffee machine. Bought it 9 years ago, use it every day. Nice quick easy coffee. What's not to like?

  3. My first DSLR camera. It was a Nikon D50 back in 2005/6 and it sparked an interest in photography that continues to this day. It gave me a hobby I can take lots of places and do alone or with others. I never loved the D50 itself, but I did get some really nice shots with it.
