SavvyBeardedFish

joined 2 years ago
[–] SavvyBeardedFish@reddthat.com 5 points 23 hours ago (3 children)

In general it sounds like you want 'tiling'. There are multiple window managers that do this, e.g. AwesomeWM, i3, Sway, River, etc.

Additionally, there are typically 'tiling scripts' that work on top of GNOME and KWin (Plasma), though I'm unsure what their capabilities are.

I can at least speak for Sway:

Here you can move/select the currently focused window with whatever keystrokes you prefer; the defaults use Vim bindings, but arrow keys are also pretty common.

Grabbing a specific window (e.g. in an ordered manner) is probably something you would need to add through scripting, if the 'basic' movement isn't enough.
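To illustrate both points, a minimal Sway config sketch (these are just the stock-style defaults; `$mod` and the keys are whatever you've configured):

```
# Move focus between windows with Vim keys
bindsym $mod+h focus left
bindsym $mod+j focus down
bindsym $mod+k focus up
bindsym $mod+l focus right

# Move the focused window itself
bindsym $mod+Shift+h move left
bindsym $mod+Shift+l move right
```

For the scripting side, sway criteria can grab a window directly, e.g. `swaymsg '[app_id="firefox"] focus'`, and `swaymsg -t get_tree` dumps the window tree as JSON if you want to pick windows in some order.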

Note: A tiling window manager is quite different (in usage) from a stacking one (which is what most people are used to).

[–] SavvyBeardedFish@reddthat.com 4 points 1 week ago (1 children)

Maybe the LLMs they prompted didn't know about the built-in SSH support, hence they still recommend PuTTY? πŸ€”

[–] SavvyBeardedFish@reddthat.com 26 points 1 week ago* (last edited 1 week ago) (1 children)

Der8auer's video is worth a watch; he got hold of one of the Redditors' cards:

https://youtu.be/Ndmoi1s0ZaY

Yes, so R&D and finalizing the model weights is done on NVIDIA GPUs (I guess you need an excessive amount of VRAM).

Inference is probably going to be offloaded to consumers in the end, where an NPU takes care of the inference cost (see Apple, Qualcomm, etc.)

[–] SavvyBeardedFish@reddthat.com 26 points 3 weeks ago* (last edited 3 weeks ago) (4 children)

I'm not the best with AI/LLM terms, but I assume that training the models was done on Nvidia hardware, while inference (using the model/getting data out of the model) is done on Huawei chips.

To add: Training the model is a huge one-time expense, while inference is a continuous expense.

[–] SavvyBeardedFish@reddthat.com 5 points 1 month ago (1 children)

Are you by chance using an integrated GPU?

Noticed that my AMD Radeon 680M uses quite a lot of RAM as shared memory.

Something like amdgpu_top will show how much RAM your iGPU is using; the metric is 'GTT'.
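For example (flags from memory, so double-check `amdgpu_top --help`; this obviously needs an AMD GPU):

```
$ amdgpu_top          # interactive TUI, look at the GTT row under memory usage
$ amdgpu_top --smi    # compact nvidia-smi-style summary
```

GTT ("Graphics Translation Table") is the system RAM the GPU borrows, as opposed to VRAM/carve-out memory.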

[–] SavvyBeardedFish@reddthat.com 21 points 1 month ago* (last edited 1 month ago) (5 children)

The whole downside is that not everyone is a data hoarder with space for videos.

Some media players allow streaming directly using yt-dlp, e.g.:

mpv <youtube url>

This will use yt-dlp if it's installed.
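If you want to limit quality or bandwidth, mpv also forwards format selection to yt-dlp; a sketch (the format string is just an example, see mpv's `--ytdl-format` option; keep the URL placeholder as-is):

```
$ mpv --ytdl-format='bestvideo[height<=?1080]+bestaudio' <youtube url>
```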

[–] SavvyBeardedFish@reddthat.com 8 points 1 month ago (2 children)

One thing to try is updating the controller's firmware; however, that needs to be done on Windows. The Arch Wiki has some additional info where people explain pairing issues.

There's also an "able to pair but no inputs" entry in the troubleshooting section of the same wiki page.

Not what I expected, good thing you managed to get it solved!

[–] SavvyBeardedFish@reddthat.com 2 points 2 months ago (2 children)

That dump didn't reveal any particularly useful information; however, it seems like multiple people are reporting mesa segfault issues, e.g. https://bbs.archlinux.org/viewtopic.php?id=301550

Mesa v24.3.2-1 in Arch should revert that issue; Mesa v24.3.1 seems to be the problematic one.

[–] SavvyBeardedFish@reddthat.com 4 points 2 months ago (4 children)

You could check the backtrace of one of your crashes:

coredumpctl debug
> bt

and then dump that trace here.

It might be related to Mesa/GPU drivers
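The steps above, as a rough terminal session (assuming systemd-coredump is active and gdb is installed):

```
$ coredumpctl list        # find the recent crash entries
$ coredumpctl debug       # open gdb on the most recent core dump
(gdb) bt                  # print the backtrace
(gdb) quit
```

`bt full` instead of `bt` also prints local variables, which can help if the plain trace is inconclusive.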

Sounds like you might just be maxing out the capacity of the coax cable as well (depending on length/signal integrity). E.g. ITGoat (not sure how trustworthy that webpage is, just an example) lists 1 Gbps as the maximum for coax, while you would typically expect less than that, again depending on your situation (cable length, material, etc.)
