this post was submitted on 27 Jul 2025

Fuck AI


I got a little curious about local LLM hosting I'll admit, and was playing with a few models to look for obvious censorship etc.

I've tended to make some assumptions about Deepseek due to its country of origin, and I thought I'd check that out in particular.

Turns out that Winnie the Pooh thing is just not happening.

😂

[–] brucethemoose@lemmy.world 10 points 1 week ago* (last edited 1 week ago) (1 children)

The Deepseek distills will do pretty much anything you want in completion mode. Just modify their chat template.
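To make the "modify their chat template" idea concrete, here is a minimal sketch of driving an R1 distill in raw completion mode: instead of letting the runtime apply the stock chat template, you assemble the prompt string yourself and leave the assistant turn open (optionally prefilled) so the model just continues the text. The special-token spellings below are taken from the published R1 distill template; double-check them against your local model's `tokenizer_config.json` before relying on this.

```python
def build_completion_prompt(user_msg: str, assistant_prefill: str = "") -> str:
    """Wrap a user turn in R1-distill-style special tokens, then leave
    the assistant turn open so the model simply continues from the
    prefill text in completion mode (no end-of-turn token emitted)."""
    return (
        "<｜begin▁of▁sentence｜>"
        f"<｜User｜>{user_msg}"
        f"<｜Assistant｜>{assistant_prefill}"  # generation continues from here
    )

# Prefilling the assistant turn is the usual trick for steering past
# refusals: the model is completing text it "already said".
prompt = build_completion_prompt(
    "Summarize the plot of Hamlet.",
    assistant_prefill="Sure, here is a summary:",
)
print(prompt)
```

The same string can be fed to any backend that exposes a raw completion endpoint (llama.cpp, most local servers); the point is only that nothing between you and the model rewrites the turns.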

My impression of Chinese models going back to Yi is that they’re more uncensored in English modes in particular, but much more hesitant answering in Chinese characters.

…That being said, Deepseek 14B is basically obsolete now. There are much better models in that size/speed class depending on your hardware, including explicitly uncensored ones.

[–] octopus_ink@slrpnk.net 3 points 1 week ago* (last edited 1 week ago) (1 children)

I've got fairly low-end hardware; this was just easily installable via gpt4all, so I gave it a shot. :)

I will play with some others....

Thanks!

[–] brucethemoose@lemmy.world 3 points 1 week ago (1 children)

If you mean something like a laptop, look for MoE models like Qwen3 A3B. And pay attention to sampling: try low or zero temperature first.
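For what "low or zero temperature" actually does, here is a stdlib-only sketch of temperature sampling over raw logits (the logit values are made up for illustration): zero temperature is greedy argmax, and higher temperatures flatten the softmax so less-likely tokens get picked more often.

```python
import math
import random

def sample_token(logits: list[float], temperature: float = 0.0) -> int:
    """Pick a token index from raw logits. temperature == 0 is greedy
    (always the argmax); higher values flatten the distribution and
    make lower-probability tokens more likely to be sampled."""
    if temperature == 0.0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)                                # subtract max for
    exps = [math.exp(l - m) for l in scaled]       # numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Greedy always picks the highest-logit token:
print(sample_token([1.0, 3.0, 2.0]))  # → 1
```

With small models especially, greedy or near-greedy decoding is a cheap way to cut down on incoherent rambling before you start blaming the model itself.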

[–] octopus_ink@slrpnk.net 1 points 1 week ago

Thank you! I do, and I will do!