Audalin

joined 2 years ago
[–] Audalin@lemmy.world 1 points 1 week ago

Thanks! I now see that Tai Chi is mentioned frequently online in the context of the film, unlike yoga, so that should be right; it narrows things down.

 

Hope this isn't considered off-topic here; I don't know of any better place to ask.

It's from a wonderful film by Jim Jarmusch, The Limits of Control. This character is frequently seen practicing something that might be yoga (or maybe not?), and at the end of his sessions he places his hands in this configuration and bows slightly.

I want to read about the precise significance/context behind the gesture, but first I need to know its name.

I've searched through various yoga mudras for a little while and couldn't identify any exact matches so far (at least a couple of details differ from what I see). ChatGPT couldn't do it either. At the same time, it seems to me that this isn't the first time I've seen it.

[–] Audalin@lemmy.world 1 points 2 months ago

KOReader supports custom CSS. You can certainly change the background colour with it; I think a grid should be possible too.
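For illustration, a tweak along these lines might look like this - just a sketch, since property support depends on KOReader's rendering engine, and the grid part in particular is speculative:

```css
/* Sketch: warm page background for EPUBs. */
body {
    background-color: #f2e8d5;
}

/* Speculative: a faint grid, if the engine supports CSS gradients. */
body {
    background-image:
        repeating-linear-gradient(0deg, rgba(0,0,0,0.08) 0, rgba(0,0,0,0.08) 1px, transparent 1px, transparent 24px),
        repeating-linear-gradient(90deg, rgba(0,0,0,0.08) 0, rgba(0,0,0,0.08) 1px, transparent 1px, transparent 24px);
}
```

If I remember right, it goes in as a custom entry under the style tweaks menu.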

[–] Audalin@lemmy.world 2 points 2 months ago

Those are the ones, the 0414 release.

[–] Audalin@lemmy.world 5 points 2 months ago (3 children)

QwQ-32B for most questions, llama-3.1-8B for agents. I'm looking for new models to replace them though, especially the agent one.

I want to test the new GLM models, but I'd rather wait for llama.cpp to definitively fix the bugs with them first.

[–] Audalin@lemmy.world 4 points 4 months ago

What I've ultimately converged to, without any rigorous testing, is:

  • using Q6 if it fits in VRAM+RAM (anything higher is a waste of memory and compute for barely any gain), otherwise either some small quant (rarely) or ignoring the model altogether (a rough fit check is sketched after this list);
  • not really using IQ quants - AFAIR they depend on a calibration dataset, and I don't want the model's behaviour to be affected by some additional dataset;
  • other than the Q6 rule, in any trade-off between speed and quality I choose quality - my usage volumes are low and I'd rather wait for a good result;
  • I load as much as I can into VRAM, leaving 1-3GB for the system and context.
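
As a rough illustration of that fit check (back-of-the-envelope only: I'm assuming ~6.56 bits per weight for Q6_K and folding system + context overhead into a flat headroom figure):

```python
# Back-of-the-envelope check for the "Q6 if it fits" rule above.
# Assumes ~6.56 bits/weight for Q6_K; real memory use also varies
# with context length and KV cache size.

def fits_q6(params_billion: float, vram_gb: float, ram_gb: float,
            headroom_gb: float = 3.0) -> bool:
    """True if a Q6_K quant of this size should fit in VRAM+RAM."""
    weights_gib = params_billion * 1e9 * 6.56 / 8 / 1024**3
    return weights_gib <= vram_gb + ram_gb - headroom_gb

# e.g. a 32B model on a 24GB GPU with 32GB of system RAM:
print(fits_q6(32, vram_gb=24, ram_gb=32))  # ~24.4GiB of weights -> fits
```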
[–] Audalin@lemmy.world 2 points 7 months ago

Maybe some Borges too?

[–] Audalin@lemmy.world 5 points 10 months ago

LLaMA can't. Chameleon and similar ones can:

[–] Audalin@lemmy.world 2 points 10 months ago

> For Tolkien's work, there is the twelve volume "The Complete History of Middle Earth" which is about as inside baseball as you can get for Tolkien.

I'd replace HoME with Parma Eldalamberon, Vinyar Tengwar and other journals publishing his early materials here.

[–] Audalin@lemmy.world 2 points 10 months ago (1 children)

Recommending Italo Calvino's Six Memos for the Next Millennium, the lectures he was preparing shortly before his death.

It's not an assembly guide for a work of literature, but it'll help your own process if it's already underway and you want to improve.

The lectures also have some comments here and there on what Calvino himself was doing and why.

[–] Audalin@lemmy.world 3 points 11 months ago

For me specifically, if spoilers hurt a book, it probably wasn't worth reading in the first place. I love it when authors demonstrate mastery of language and narration, and no amount of spoilers can overshadow the direct experience of witnessing it enacted.

[–] Audalin@lemmy.world 2 points 11 months ago

ChatMusician isn't exactly new, and the underlying dataset isn't particularly diverse, but it's one of the few models made specifically for classical music.

Are there any others, by the way?

 

How do you acquire sheet music?

There are IMSLP and MuseScore, but many things are just not there.

Bonus points if you know anything with xenharmonic/microtonal music well-represented.
