this post was submitted on 13 Feb 2025

Technology

cross-posted from: https://lemm.ee/post/55428692

[–] brucethemoose@lemmy.world 30 points 1 week ago* (last edited 1 week ago) (2 children)

For context, Alibaba is behind Qwen 2.5, the series of open-weight LLMs widely used for desktop/self-hosting. Most of the series is Apache licensed, free to use, and it's what Deepseek based their smaller distillations on. Their 32B/72B models, especially finetunes of them, can run circles around cheaper OpenAI models you'd need a $100,000+ fire-breathing server to run... if OpenAI actually released anything for public use.

I have Qwen 2.5 Coder or another derivative loaded on my desktop pretty much every day. And because it's open, the whole community contributes back to it.

So... yeah, if I were Apple, I would've picked Qwen/Alibaba too. They're the undisputed king of iDevice-sized LLMs at the moment, and they do it for a fraction of the cost/energy usage of US companies. Apple would have to be stupid to pick OpenAI or Anthropic, and even stupider to reinvent the wheel themselves (which they already tried; their local "small" LLMs are kinda crap for their size).
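For anyone wondering what "loaded on my desktop" looks like in practice: most self-hosting runtimes (llama.cpp's server, vllm, and others) expose an OpenAI-compatible HTTP API, so the client side is just a JSON payload. A minimal sketch, where the endpoint URL and model name are hypothetical and depend on how your local server is configured:

```python
import json

# Hypothetical local endpoint; llama.cpp's server and vllm both expose
# an OpenAI-compatible /v1/chat/completions route.
endpoint = "http://localhost:8080/v1/chat/completions"

payload = {
    "model": "qwen2.5-coder-32b-instruct",  # whatever name your server registers
    "messages": [
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python one-liner that squares 1..10."},
    ],
    "temperature": 0.2,  # low temperature tends to suit code generation
    "max_tokens": 256,
}

# This is the request body you'd POST to `endpoint`.
body = json.dumps(payload)
```

The same payload works against any of the compatible backends, which is a big part of why the open ecosystem moves so fast.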

[–] coherent_domain@infosec.pub 4 points 1 week ago (1 children)

How do you Apache license a LLM? Do they just treat the weights as code?

[–] brucethemoose@lemmy.world 4 points 1 week ago* (last edited 1 week ago)

It's software, so yeah, I suppose. See for yourself: https://huggingface.co/Qwen/QwQ-32B-Preview

Deepseek chose MIT: https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B

For whatever reason, the Chinese companies tend to go with very permissive licensing, while Cohere, Mistral, Meta (Llama), and some others add really weird commercial restrictions (though this trend may be reversing). IBM Granite is actually Apache 2.0 and way more "open" and documented than even the Chinese tech companies, but unfortunately their models are "small" (3B) and not very cutting edge.


Another note: Hugging Face Transformers (also Apache 2.0, and from a US company) is the reference code for running open models, but there are many other implementations to choose from (exllama, mlc-llm, vllm, llama.cpp, internlm, lorax, Text Generation Inference, Apple MLX, just to name a few).
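To make one of those implementations concrete: llama.cpp's CLI just takes a GGUF weights file plus a handful of flags. A sketch of a typical invocation, assembled as a Python argument list so the flags are easy to annotate (the model filename is hypothetical; `-m`, `-ngl`, `-c`, and `-p` are standard llama.cpp options):

```python
import shlex

# Hypothetical local weights file (quantized GGUF download).
model_path = "Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf"

cmd = [
    "llama-cli",
    "-m", model_path,   # path to the GGUF model weights
    "-ngl", "99",       # offload as many layers to the GPU as VRAM allows
    "-c", "8192",       # context window size in tokens
    "-p", "Write a Python function that reverses a string.",
]

# Printable shell command, ready to paste into a terminal.
print(shlex.join(cmd))
```

Swapping the runtime (say, to vllm or MLX) mostly means swapping this invocation; the weights and the prompt stay the same.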

[–] IndustryStandard@lemmy.world 1 points 1 week ago (1 children)

Deepseek R1 is currently the self-hosting model to use.

[–] brucethemoose@lemmy.world 1 points 1 week ago

Some of the distillations are trained on top of Qwen 2.5.

And in some niches, FuseAI (a special merge of several thinking models), Qwen Coder, EVA-Gutenberg Qwen, or other specialized models do a better job than Deepseek 32B.