gomp

joined 2 years ago
[–] gomp@lemmy.ml 22 points 1 day ago (1 children)

(rightfully) does not like mixed language codebases for projects as large and important as Linux

You make it sound like it's a matter of taste rather than a technical one (and I suspect it actually might be just about taste in the end)

[–] gomp@lemmy.ml 10 points 2 days ago (2 children)

I stopped at "secret" (yes, the occurrence in the title) :)

TBH the checksums are pretty useless for humans who download an .iso and install it... they are mainly for mirrors and similar services that download files without actually using them

[–] gomp@lemmy.ml 2 points 4 days ago (1 children)

Thanks, that was really helpful!

In my case, the system did not have a default route - I've updated the post with details.

 

I'm rebuilding my home server in nixos.

Rather than configuring the various services natively in nixos, I decided to run containers via virtualisation.oci-containers whenever possible, mostly to be able to independently update the system and the various services.

Everything is going smoothly, but whenever I add a container and then (for whatever reason) do nixos-rebuild boot and reboot instead of nixos-rebuild switch, I run into this issue where podman isn't able to resolve the registry host (below you see the docker hub host, but it also happened with ghcr.io):

podman-apprise-start[1352]: Trying to pull docker.io/caronc/apprise:1.1.8...
podman-apprise-start[1352]: Pulling image //caronc/apprise:1.1.8 inside systemd: setting pull timeout to 5m0s
podman-apprise-start[1352]: Error: initializing source docker://caronc/apprise:1.1.8: pinging container registry registry-1.docker.io: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io: no such host

I thought that my podman-* services were missing a dependency on network-online.target and were being started before the network was available, but that isn't the case:

# systemctl list-dependencies podman-apprise.service 
podman-apprise.service
● ├─system.slice
● ├─network-online.target
● │ └─systemd-networkd-wait-online.service
● └─sysinit.target
●   ├─dev-hugepages.mount
[...snip...]

Do you happen to know what the issue is?

PS: Manually running systemctl start podman-whatever once fixes the issue, of course, but I wonder if there's a more robust solution?
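
(One stopgap I'm considering is simply making the units retry on failure - a sketch, assuming the podman-apprise unit name and overriding whatever Restart= policy the oci-containers module may already set:)

systemd.services.podman-apprise.serviceConfig = {
  Restart = lib.mkForce "on-failure";
  RestartSec = "10s";
};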


update:

After investigating based on balsoft's input below, the issue seems to be that systemd-networkd-wait-online doesn't behave as expected (by me).

Basically, systemd-networkd-wait-online waits for network interfaces to have a carrier (working ethernet cable) and an IP address. This is what the systemd-networkd docs call the "degraded" state (no, it doesn't mean something got worse than before... don't read too much into what "degraded" implies in English).

In my case, I have an interface that is set up via DHCP and that also has static IPs assigned:

$ cat /etc/systemd/network/00-lan1.network 
[Match]
Name=lan1

[Network]
DHCP=ipv4
IPv6AcceptRA=no
LinkLocalAddressing=no

[Address]
Address=192.168.10.10/24

[Address]
Address=192.168.10.99/24

If you are wondering, the reason I do this is that I want static IPs for my dns server and reverse proxy, but I also want my home server to use DHCP to fetch some network-wide configuration which, critically, includes the default route.

Back to the issue: IIUC, since the interface has a non-link-local address (which systemd-networkd confusingly calls a "routable" address), it is immediately considered "routable" (a state that is moar better than "degraded"), so not only is it basically ignored by the default systemd-networkd-wait-online configuration, but even adding

[Link]
RequiredForOnline=routable

to /etc/systemd/network/00-lan1.network doesn't make a difference whatsoever.

For now, my stopgap solution is to explicitly set the default route for the "lan1" network:

[Network]
Gateway=192.168.10.1

this seems to solve the issue with podman and, while the system still considers itself "online" before being fully configured, it will suffice until I find a more elegant/robust solution (ping me in a while if you are interested).
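
(In NixOS terms, assuming the interface is managed through systemd.network, the stopgap amounts to something like this:)

systemd.network.networks."00-lan1" = {
  matchConfig.Name = "lan1";
  networkConfig = {
    DHCP = "ipv4";
    Gateway = "192.168.10.1"; # the explicit default route
  };
};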

refs:
systemd-networkd-wait-online man page
systemd-networkd docs on "RequiredForOnline"
networkctl man page

[–] gomp@lemmy.ml 3 points 5 days ago* (last edited 5 days ago) (1 children)

He never said he wants free speech for everyone :) TBH the shame should be on those who believed that Musk could somehow be the first right-wing extremist in history who wished the people at large had more rights and more freedoms.

[–] gomp@lemmy.ml 2 points 5 days ago

Circle jerking, I guess? Same reason I use lemmy :)

[–] gomp@lemmy.ml 3 points 1 week ago (1 children)

Yank is Copy, you heathen!
Only in inferior software is it Paste.

(for the uninitiated: it's Copy in vim and Paste in emacs; also if it wasn't clear, I'm just joking)

[–] gomp@lemmy.ml 5 points 1 week ago* (last edited 1 week ago) (1 children)
  1. By and large, distros package the same software so which one you pick is a matter of taste. As a beginner, you won't have the knowledge to take advantage of documentation/instructions that are not written for your specific distro, so pick one of the more popular ones.

  2. No, distro owners won't be a problem in the same way that Microsoft or Apple are. Don't worry about that: the moment they do something unsavory (even remotely) their projects will be forked, and switching to a different distro is not that big of a deal anyway.

  3. If you like to tinker you will break your system, not because linux is fragile (it is not) but because knowledge of low-level stuff is widespread and the temptation to tinker with it is too great. I recommend you look into system snapshots and how they integrate with boot options (eg. opensuse tumbleweed automatically snapshots your system when you update it, and during boot you can choose to boot into a previous state - surely other distros do the same and, if yours doesn't, you can set it up yourself).

  4. The short answer is "use KDE" :)

  5. KDE is great and seems to suit you. The DE you choose matters (IMHO) more than the distro, because once you are familiar with a DE and its shortcuts it's a pain to switch, and also because once you are used to some feature it's enormously frustrating to switch to a DE that doesn't have it :)

From what I hear (I switched to AMD years ago), it's not hard to make Nvidia cards work properly, but it's a recurring hassle and there are lots of things that are more fun to tinker with. Unless you have specific reasons to keep an Nvidia card, I'd suggest selling it off and replacing it with a second-hand AMD/Intel one.

[–] gomp@lemmy.ml 9 points 1 week ago (2 children)

I'm sorry if this sounds rude, especially after not reading what must have taken you a long time to write...

Have you tried writing "distro that looks like macos" into a search engine?

[–] gomp@lemmy.ml 2 points 1 week ago

The last name thing is true: outside family and close friends, that's what people call each other by.

I have no idea what the wiener thing was originally :)

[–] gomp@lemmy.ml 6 points 3 weeks ago

Configure it like the current router and keep it as a backup?

You can run a lot of stuff on it, but those boxes aren't really that powerful... a cheap, old raspberry pi from ebay (or anything really) will serve you better.

[–] gomp@lemmy.ml 2 points 3 weeks ago* (last edited 3 weeks ago)

sudo zypper packages --unneeded will give you a list of packages that have not been explicitly requested and are not dependencies of explicitly requested packages. As for how to remove them... IDK (I do it manually, once in a blue moon: it's not like there are new unneeded packages every week).

It's been a while since I've used debian, but IIRC apt autoremove will leave behind config files (unless you specify --purge).

In tumbleweed (and I think all rpm-based distros?) config files are removed by default together with packages (well, the config files installed with the package, not others that may have been created later, such as the ones in your ~ - basically zypper rm is the same as apt purge).

[–] gomp@lemmy.ml 11 points 3 weeks ago* (last edited 3 weeks ago) (3 children)

Google has many faults, but this one is someone else's doing :)

The FOSS google maps alternatives I hear recommended most often are OsmAnd+ and, especially, Organic Maps.

Personally I don't use maps very often (I know my way around my area pretty well, so I usually just look up the location of wherever I want to go before leaving home), but I'd say Organic Maps is simpler and more user-friendly than OsmAnd+.

Both can work offline if you download the maps for your area.

The maps are pretty good (at least in my area), but compared to Google Maps you'll have to rely more on street addresses as there aren't as many points of interest.

118
submitted 2 months ago* (last edited 2 months ago) by gomp@lemmy.ml to c/linux@lemmy.ml
 

I remember a story where people asked about blobs included in Ventoy and there were no comments from the devs, leading to suspicion.

At the time it wasn't clear to me if there was any substance to the story or if it was the usual Internet exaggeration, so I resolved to ignore it for the time being and saved a reminder to look into it after a while.

Now my reminder fired off and I looked around, but couldn't find how the story ended... do you know?

 

I have two functions that are similar but can fail with different errors:

#[derive(Debug, thiserror::Error)]
enum MyError {
  #[error("error a")]
  MyErrorA,
  #[error("error b")]
  MyErrorB,
  #[error("bad value ({0})")]
  MyErrorCommon(String),
}

fn functionA() -> Result<String, MyError> {
  // can fail with MyErrorA or MyErrorCommon
  todo!()
}

fn functionB() -> Result<String, MyError> {
  // can fail with MyErrorB or MyErrorCommon
  todo!()
}

Is there an elegant (*) way I can express this?

If I split the error type into two separate types, is there a way to reuse the definition of MyErrorCommon?
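
For concreteness, the split I have in mind would look something like this - a sketch using thiserror's #[error(transparent)] and #[from] to nest a shared type (no idea whether this counts as idiomatic):

#[derive(Debug, thiserror::Error)]
enum MyErrorCommon {
  #[error("bad value ({0})")]
  BadValue(String),
}

#[derive(Debug, thiserror::Error)]
enum MyErrorA {
  #[error("error a")]
  ErrorA,
  // forwards Display and source() to the shared type
  #[error(transparent)]
  Common(#[from] MyErrorCommon),
}

#[derive(Debug, thiserror::Error)]
enum MyErrorB {
  #[error("error b")]
  ErrorB,
  #[error(transparent)]
  Common(#[from] MyErrorCommon),
}

fn functionA() -> Result<String, MyErrorA> {
  // can fail with ErrorA or Common
  todo!()
}

fn functionB() -> Result<String, MyErrorB> {
  // can fail with ErrorB or Common
  todo!()
}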


(*) by "elegant" I mean something that improves the code - I'm sure one could define a few macros and solve it that way, but I don't want to go there

edit: grammar (rust grammar)

 

I experimented with several ways to run my services:

  1. "regular" systemd services (services.glance = { ... };)
  2. nix containers (containers.glance = { ... };)
  3. podman containers (virtualisation.oci-containers.containers.glance = { ... })

and I must say I'm starting to appreciate the last option (the least nixos-y) more and more.
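
For reference, that last option looks something like this (the image tag and mount paths here are illustrative, not my actual config):

virtualisation.oci-containers.containers.glance = {
  image = "glanceapp/glance:v0.7.0"; # illustrative tag
  ports = [ "127.0.0.1:8080:8080" ];
  volumes = [ "/var/lib/glance/glance.yml:/app/config/glance.yml" ];
};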

Specifically, I appreciate that:

  • I just have to learn the app/container configuration, instead of also backwards-translating from their config into the various nixos options (of course the .yaml or whatever configuration files are still generated from my nixos config, I just do that in a derivation instead of relying on a module doing it for me)
  • Services are sometimes outdated in nixpkgs (even in unstable - and juggling packages between stable and unstable is yet another complication)
  • I feel like it's more secure (very arguable and also of very little consequence since everything is on my homelab... it's mainly for the warm fuzzies)

Do you guys use one of the options above? Something different?

 

edit: for the solution, see my comment below

I'm trying to package a go application (beszel) that bundles a bunch of html stuff built with bun (think, npm).

The html is generated by running bun install and bun run and then embedded in the go binary with //go:embed.

Being completely ignorant of the javascript ecosystem, my first idea was to just replicate what they do in the Makefile

postConfigure = ''
bun install --cwd ./site
bun run     --cwd ./site build
'' 

but, since bun install downloads dependencies from the net, that fails.

I guess the "clean" solution would be to look for buildNpmPackage or similar (assuming that exists) and let nix manage all the dependencies, but... it's some 800+ dependencies (at least, bun install ... --dry-run lists 800+ things) so that's a hard pass.

I then tried to look at how buildGoModule handles the vendoring of dependencies, with the idea of replicating that (it downloads what's needed and then compares a hash of what was downloaded with a hash provided in the nix package definition), but... I can't for the life of me decipher how nixpkgs' pkgs/build-support/go/module.nix works.
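
If I read it right, the general shape is a fixed-output derivation: the builder is allowed network access because its output is checked against a hash pinned in the package definition. Something like this sketch, reusing the Makefile's bun invocation (untested; the site-deps name, version and fakeHash placeholder are mine):

site-deps = pkgs.stdenvNoCC.mkDerivation {
  pname = "beszel-site-deps";
  version = "0.1.0"; # illustrative
  src = beszel-src;  # hypothetical binding to the beszel sources
  nativeBuildInputs = [ pkgs.bun pkgs.cacert ]; # cacert for TLS in the sandbox

  buildPhase = ''
    bun install --cwd ./site --frozen-lockfile
  '';
  installPhase = ''
    cp -r site/node_modules $out
  '';

  # the fixed output hash is what unlocks network access; replace
  # fakeHash with the real value reported by the first failed build
  outputHashMode = "recursive";
  outputHashAlgo = "sha256";
  outputHash = pkgs.lib.fakeHash;
};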

Do you know how to implement this kind of vendoring in a nix derivation?

 

Over the years I have accumulated a sizable music library (mostly flacs, adding up to a bit less than 1TB) that I now want to reorganize (ie. gradually process it with MusicBrainz Picard).

Since the music lives on my NAS, flacs are relatively big, and my network speed is 1Gb/s, I installed in my computer an HDD I had lying around and replicated the whole library there, the idea being to work on local files and then sync them to the NAS.

I setup Syncthing for replication and... everything works, in theory.

In practice, Syncthing loves to rescan the whole library (given how long it takes, it must be reading all the data and computing checksums rather than just scanning the filesystem metadata - why on earth?), and that means my under-powered NAS (Celeron N3150) does nothing but rescan the same files over and over.

Syncthing by default rescans directories every hour (again, why on earth?), but it still seems to rescan a whole lot even after I have set rescanIntervalS to 90 days (maybe it always rescans once when restarted?).

Anyway, I am looking into alternatives.
Are there any you would recommend? (FOSS please)

Notes:

  • I know I could just schedule a periodic rsync from my PC to the NAS (see the sketch after these notes), but I would prefer a bidirectional solution if possible (rsync is gonna be the last resort)
  • I read about unison, but I also read that it's not great with big jobs and that it too scans a lot
  • The disks on my NAS go to sleep after 10 minutes idle time and if possible I would prefer not waking them up all the time (which would most probably happen if I scheduled a periodic rsync job - the NAS has RAM to spare, but there's no guarantee it'll keep in cache all the data rsync needs)
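
The rsync sketch, for completeness (paths and host are illustrative):

# one-way push, PC -> NAS; --delete also mirrors local deletions
rsync -a --delete /mnt/music-local/ nas:/srv/music/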
 

edit: for the solution, see my comment below

I need/want to build aeson and its subproject attoparsec-aeson from source (it's a fork of the "official" aeson), but I'm stuck... can you help out?

The sources of attoparsec-aeson live in a subdirectory of the aeson ones, so I have the sources:

aeson-src = fetchFromGitHub {
  ...
};

and the "main" aeson library:

aeson = haskellPackages.mkDerivation {
  pname = "aeson";
  src = aeson-src;
  ...
};

When I get to attoparsec-aeson, however, I run into a wall: I tried to follow the documentation about sourceRoot:

attoparsec-aeson = haskellPackages.mkDerivation {
  pname = "attoparsec-aeson";
  src = aeson-src;
  sourceRoot = "./attoparsec-aeson"; # maybe this should be "${aeson-src}/attoparsec-aeson"?
                                     # (it doesn't work either way)
  ...
};

but I get

 error: function 'anonymous lambda' called with unexpected argument 'sourceRoot'

Did I fail to spot some major blunder (I am nowhere near an expert)? Does sourceRoot not apply to haskellPackages.mkDerivation? What should I do to make it work?

BTW:

IDK if this may cause issues, but the attoparsec-aeson sources include symlinks to files in the "main" aeson sources:

~/git-clone-of-attoparsec-sources $ tree attoparsec-aeson/
attoparsec-aeson/
├── src
│   └── Data
│       └── Aeson
│           ├── Internal
│           │   ├── ByteString.hs -> ../../../../../src/Data/Aeson/Internal/ByteString.hs
│           │   ├── Text.hs -> ../../../../../src/Data/Aeson/Internal/Text.hs
│           │   └── Word8.hs -> ../../../../../src/Data/Aeson/Internal/Word8.hs
│           ├── Parser
│           │   └── Internal.hs
│           └── Parser.hs
├── attoparsec-aeson.cabal
└── LICENSE
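
If those symlinks turn out to be the problem, one workaround that comes to mind (untested) is to copy the subdirectory with the links dereferenced and point src at the copy:

attoparsec-aeson-src = pkgs.runCommand "attoparsec-aeson-src" { } ''
  # cp -L dereferences the symlinks, which still resolve here
  # because they point back inside the aeson tree
  cp -rL ${aeson-src}/attoparsec-aeson $out
'';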
 

Lately I noticed that when I want to ssh into a server using a password, I need to specify -o PubkeyAuthentication=no or I won't be asked for a password and authentication will fail (well, for all I know, setting some other option may work too).

I use password authentication only once on freshly installed servers/vms, so it's not a huge deal, but... it still bothers me (mainly because I don't remember which option to set).

Do you guys have any idea what it may be?
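
(In the meantime, a stanza like this should at least spare me from remembering the flag - the Host pattern is made up:)

Host fresh-*
  PubkeyAuthentication no
  PreferredAuthentications keyboard-interactive,password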

client's ~/.ssh/config

Host 127.*.*.* 192.168.*.* 10.*.*.* 172.16.*.* 172.17.*.* 172.18.*.* 172.19.*.* 172.2?.*.* 172.30.*.* 172.31.*.*
  LogLevel quiet
  Stricthostkeychecking no
  Userknownhostsfile /dev/null

Host *
  ForwardAgent no
  AddKeysToAgent no
  Compression yes
  ServerAliveInterval 10
  ServerAliveCountMax 3
  HashKnownHosts no
  UserKnownHostsFile ~/.ssh/known_hosts
  ControlMaster no
  ControlPath ~/.ssh/master-%r@%n:%p
  ControlPersist no

server's /etc/ssh/sshd_config (it's from the nixos install iso)

AuthorizedPrincipalsFile none
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
GatewayPorts no
KbdInteractiveAuthentication yes
KexAlgorithms sntrup761x25519-sha512@openssh.com,curve25519-sha256,curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
LogLevel INFO
Macs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com
PasswordAuthentication yes
PermitRootLogin yes
PrintMotd no
StrictModes yes
UseDns no
UsePAM yes
X11Forwarding no
Banner none
AddressFamily any
Port 22
Subsystem sftp /nix/store/78mv13w9mgh0s0rd7rnr6ff4d7a39bpd-openssh-9.7p1/libexec/sftp-server 
AuthorizedKeysFile %h/.ssh/authorized_keys /etc/ssh/authorized_keys.d/%u
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ed25519_key

 

Solution:
hd-idle is the way to go (if you read their README, they explain that most drives don't support idle timers)
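
(For reference, the basic invocation has this shape - the interval is just an example, and the README documents per-disk -a/-i overrides:)

# spin down all drives after 600s (10 minutes) of inactivity
hd-idle -i 600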

I've been looking into spinning down the drives of my NAS, as I use it infrequently and that brings power drain down from ~30W to ~17W.

Problem is, hdparm -S doesn't seem to do anything for these particular drives: if I set it and wait for the appropriate amount of time (eg. 5 seconds if set to 1) the drives are still reported as "active/idle" and power drain doesn't go down.

Both hdparm -y and hdparm -Y work fine, but I don't seem to be able to find settings for them in tlp (probably because they are commands rather than settings?).

Besides the caveats about disks living longer if they are kept spinning, are there reasons why I shouldn't set up a cron job (well, a systemd timer) that runs hdparm -Y every 10 minutes? (for example, could hdparm -y cause errors if run while the drive is being backed up?)

PS: According to hdparm's manpage, -y puts the drive into standby mode while -Y puts it into sleep mode. Considering that in my case power drain seems the same either way, should I prefer one or the other?

 

(I'm just starting off with rust, so please be patient)

Is there an idiomatic way of writing the following as a one-liner, somehow informing rustc that it should keep the PathBuf around?

// nevermind the fully-qualified names
// they are there to clarify the code
// (that's what I hope at least)

let dir: std::path::PathBuf = std::env::current_dir().unwrap();
let dir: &std::path::Path   = dir.as_path();

// this won't do:
// let dir = std::env::current_dir().unwrap().as_path();

I do understand why rust complains that "temporary value dropped while borrowed" (I mean, the message says it all), but, since I don't really need the PathBuf for anything else, I was wondering if there's an idiomatic way to tell rust that it should extend its life until the end of the code block.
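
For context, this is the kind of usage I have in mind, where keeping the owning binding and borrowing at the call site works (print_entries is a made-up consumer):

use std::path::Path;

// hypothetical function that only needs a borrowed path
fn print_entries(dir: &Path) {
    println!("would list {}", dir.display());
}

fn main() {
    // the PathBuf stays alive in `dir` for the whole block...
    let dir = std::env::current_dir().unwrap();
    // ...and &PathBuf coerces to &Path at the call site,
    // so no explicit as_path() is needed
    print_entries(&dir);
}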

 

I want to have my screen (the "dev" workspace) split into three "zones":

  • on the left side, a tabbed group with all the text editors I start (ie. if I start a new one, it goes there in a new tab)
  • on the top-right, a tabbed group of however many terminals I feel like launching
  • on the bottom-right, my browsers (and possibly other stuff), in a group without tabs
  • plus a key combination to cycle between layouts: all three "zones" visible; text editors on the left + terminals on the right; text editors on the left + browser on the right; fullscreen browser

So far I've been looking at hyprland (for no particular reason except the hype) and I don't think I can do the above with it (I am by no means an expert, so... maybe it can actually be done?).

Do you know of any WM where it would be possible? (possibly, one with automatic splitting a-la bspwm, that I would use for the other workspaces)
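
(The closest I've gotten to expressing any of this declaratively is i3/sway-style assignment rules, which only cover the "send new windows to the dev workspace" part - the app_ids are examples, and building the three fixed groups would still need scripting via swaymsg:)

assign [app_id="^(codium|emacs)$"] workspace dev
assign [app_id="foot"] workspace dev
assign [app_id="firefox"] workspace dev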

 

I've been looking around for a scripting language that:

  • has a cli interpreter
  • is a "general purpose" language (yes, awk is Turing-complete, but no way I'm using that except for manipulating text)
  • allows writing in a functional style (ie. it has functions like map, fold, etc. and lets you pass functions around as arguments)
  • has a small disk footprint
  • has decent documentation (doesn't need to be great: I can figure out most things, but I don't want to have to look at the interpreter source code to do so)
  • has a simple/straightforward setup (ideally, it should be a single executable that I can just copy to a remote system, use to run a script and then delete)

Do you know of something that would fit the bill?


Here's a use case (the one I ran into today, but this is a recurring thing for me).

For my homelab I need (well, want) to generate a Luhn mod N check digit (it's for my provisioning scripts to generate syncthing device IDs from their certificates).

I couldn't find ready-made utilities for this, and I might actually need a variation of the "official" algorithm (IIUC syncthing had a bug in their initial implementation and decided to run with it).

I don't have python (or even bash) available on all my systems, and so my go-to language for scripts is usually sh (yes, POSIX sh), which in all honesty is quite frustrating for manipulating data.
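
For the curious, the whole computation is tiny - here's a sketch of standard Luhn mod N check-character generation in Rust (syncthing's variant differs, per the bug mentioned above; the alphabet and input are illustrative):

// standard Luhn mod N: returns the check character for input over
// the given alphabet, or None if a character is outside of it
fn luhn_mod_n(alphabet: &str, input: &str) -> Option<char> {
    let n = alphabet.chars().count() as u32;
    let index_of = |c: char| alphabet.chars().position(|a| a == c);
    let mut factor = 2;
    let mut sum = 0;
    // walk right-to-left, doubling every other code point
    for c in input.chars().rev() {
        let code = index_of(c)? as u32;
        let addend = factor * code;
        sum += addend / n + addend % n;
        factor = if factor == 2 { 1 } else { 2 };
    }
    let check = (n - sum % n) % n;
    alphabet.chars().nth(check as usize)
}

fn main() {
    // base32 alphabet (as used by syncthing device IDs)
    let alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";
    println!("{:?}", luhn_mod_n(alphabet, "EXAMPLE"));
}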
