[–] muntedcrocodile@lemm.ee 31 points 1 year ago (6 children)

We invented multi-bit models to get more accuracy, since neural networks are based on human brains, which are 1-bit models themselves. A 2-bit neuron can represent 4 times as many states as a 1-bit neuron but only doubles the size and power requirements. This whole thing sounds like BS to me. But then again, maybe complexity is more efficient than per-unit capability, since that's the trade-off.
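As a rough sketch of that arithmetic (purely illustrative, not from any paper): an n-bit weight can take 2^n distinct values, while its storage cost grows only linearly with n.

```python
# Purely illustrative: representable states vs. storage cost for an
# n-bit weight. States grow exponentially, storage only linearly.
for bits in (1, 2, 4, 8):
    states = 2 ** bits
    print(f"{bits}-bit weight: {states} states, {bits}x the storage of a 1-bit weight")
```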

[–] echodot@feddit.uk 39 points 1 year ago

Human brains aren't binary. They send signals at lots of different strengths, so "on" has many possible values. The part of the brain that controls emotions treats a low but non-zero level of activation as happy and a high level of activation as angry.

It's not simple at all.

[–] Wappen@lemmy.world 25 points 1 year ago (1 children)

Human brains aren't 1-bit models. Far from it, actually. I'm not an expert, but I know that neurons in the brain encode different signal strengths in their firing frequency.
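A toy sketch of that idea (the linear rate mapping and the numbers are illustrative assumptions, not a biophysical model): individual spikes are all-or-nothing, but the spike *rate* over a time window carries a graded value.

```python
import numpy as np

# Toy rate-coding sketch: a neuron emits all-or-nothing spikes (1-bit
# events), but the rate of those spikes over a time window encodes a
# continuous signal strength.
rng = np.random.default_rng(0)

def spike_train(strength, duration_ms=1000, max_rate_hz=100):
    """Binary spike train whose firing rate scales with strength in [0, 1]."""
    rate_hz = strength * max_rate_hz       # spikes per second
    p_spike_per_ms = rate_hz / 1000        # Bernoulli probability per 1 ms bin
    return rng.random(duration_ms) < p_spike_per_ms

for s in (0.1, 0.5, 0.9):
    print(f"strength={s:.1f} -> {spike_train(s).sum()} spikes/sec")
```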

[–] kromem@lemmy.world 10 points 1 year ago* (last edited 1 year ago) (1 children)

The network architecture seems to create a virtualized hyperdimensional network on top of the actual network nodes, so node precision doesn't matter much as long as the quantization occurs during pretraining.

If quantization is applied post-training, it degrades the precision of an already encoded network, which is sometimes acceptable but always lossy. Done during pretraining, though, it actually seems to be a net improvement over higher-precision weights, even if you throw efficiency concerns out the window.

You can see this in the perplexity graphs in the BitNet b1.58 paper.
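A minimal sketch of the distinction being drawn here, assuming a BitNet-b1.58-style ternary quantizer with a straight-through estimator; the class and function names are hypothetical, not the paper's code:

```python
import torch

def quantize_ternary(w):
    """Round weights to {-1, 0, +1} scaled by their mean magnitude
    (loosely following the absmean recipe in BitNet b1.58)."""
    scale = w.abs().mean().clamp(min=1e-5)
    return (w / scale).round().clamp(-1, 1) * scale

class QuantizedLinear(torch.nn.Linear):
    """Pretraining-time quantization: the forward pass always sees
    quantized weights, and a straight-through estimator lets gradients
    flow to the latent full-precision weights, so the network learns
    around the low precision instead of being damaged by it."""
    def forward(self, x):
        w_q = self.weight + (quantize_ternary(self.weight) - self.weight).detach()
        return torch.nn.functional.linear(x, w_q, self.bias)

def quantize_post_training(model):
    """Post-training quantization, by contrast: a one-shot lossy
    rounding of weights the network already committed to."""
    with torch.no_grad():
        for p in model.parameters():
            p.copy_(quantize_ternary(p))
```

The intuition: with pretraining-time quantization the loss is computed through the quantized weights from the start, so optimization finds minima that are good *at* ternary precision, whereas post-training rounding just perturbs minima that were found at full precision.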

[–] lunar17@lemmy.world 6 points 1 year ago (1 children)

None of those words are in the bible

[–] kromem@lemmy.world 2 points 1 year ago* (last edited 1 year ago)

No, but some alarmingly similar ideas are in the heretical stuff, actually.

[–] buzz86us@lemmy.world 4 points 1 year ago

We need to scale fusion

[–] Miaou@jlai.lu 2 points 1 year ago

Multi-bit models exist because that's how computers work, but there's been a lot of work on using e.g. fixed point instead of floating point for things like FPGAs, or on shorter integer types, and the results are often more than good enough.
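For example, a toy Q8.8 fixed-point multiply (8 integer bits, 8 fractional bits; the format choice is just an illustration of the idea):

```python
# Toy Q8.8 fixed-point arithmetic: values stored as integers scaled by
# 2**8, the kind of shorter integer representation FPGA designs often
# use in place of floating point.
FRAC_BITS = 8
SCALE = 1 << FRAC_BITS

def to_fixed(x: float) -> int:
    return round(x * SCALE)

def fixed_mul(a: int, b: int) -> int:
    # The product of two Q8.8 numbers has 16 fractional bits; shift back down.
    return (a * b) >> FRAC_BITS

a, b = to_fixed(1.5), to_fixed(-2.25)
print(fixed_mul(a, b) / SCALE)  # -3.375, with bounded quantization error
```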