this post was submitted on 21 Feb 2025
8 points (100.0% liked)

LocalLLaMA

Community to discuss about LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

Hi, I don't have much experience with running local AIs, but I have tried using ChatGPT for psychoanalysis purposes, and the smarter (but rate-limited for free users) model is amazing. I don't like giving such personal information to OpenAI, though, so I'd like to set up something similar locally if possible. I'm running Fedora Linux and have had the best results with KoboldCpp, as it was by far the easiest to set up. My hardware is a Ryzen 7600, 32 GB of RAM, and a 7800 XT (16 GB VRAM).

The two things I mostly want from this setup are the smartest model possible, since the responses from the ones I've tried just don't feel as insightful or thought-provoking as ChatGPT's, and something like the way ChatGPT handles memory, which I really like. I don't need "real time" conversation speed if it means I can get the smarter responses I'm looking for.

What models/setups would you recommend? So far my rule of thumb has been newer + takes up more space = better, but I'm kind of disappointed with the results. Then again, the largest models I've tried have only been around 16 GB, so is my setup capable of running bigger ones? I've been hesitant to try, because I don't have fast internet and downloading a model usually means keeping my PC running overnight.

PS: I am planning to use this mostly as a way to grow/reflect, not to deal with trauma or loneliness. If you are struggling and are considering AI for help, never forget that it cannot replace connections with real human beings.

top 2 comments
[–] DavidGarcia@feddit.nl 5 points 1 day ago (1 children)

In my experience, anything similar to qwen-2.5:32B comes closest to gpt-4o, and I think it should run on your setup. The 14B model is alright too, but definitely inferior. Mistral Small 3 also seems really good. Anything smaller is usually really dumb, and I doubt it would work for you.

You could probably run some larger 70B models at a snail's pace too.
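As a rough sanity check for what fits (just a rule of thumb I use, not exact numbers: a quantized GGUF is roughly parameter count × bits per weight / 8, plus a couple of GB of headroom for context and buffers), something like this sketch in Python:

```python
# Rough estimate of how much memory a quantized GGUF model needs.
# Rule of thumb only: real file sizes vary by quant format and architecture.

def gguf_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate model file size in GB for a given quantization."""
    return params_billions * bits_per_weight / 8

def fits_in_vram(params_billions: float, bits_per_weight: float,
                 vram_gb: float = 16, overhead_gb: float = 2) -> bool:
    """Leave a couple of GB of headroom for the KV cache and buffers (assumption)."""
    return gguf_size_gb(params_billions, bits_per_weight) + overhead_gb <= vram_gb

# Q4_K_M averages somewhere around 4.8 bits per weight (approximate).
for name, params, bits in [("14B Q4_K_M", 14, 4.8),
                           ("32B Q4_K_M", 32, 4.8),
                           ("70B Q4_K_M", 70, 4.8)]:
    size = gguf_size_gb(params, bits)
    print(f"{name}: ~{size:.1f} GB -> fits fully in 16 GB VRAM: {fits_in_vram(params, bits)}")
```

Anything that doesn't fit entirely in VRAM gets split between the GPU and system RAM, which is why a 32B quant is still usable on your setup while a 70B crawls.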

Try the DeepSeek R1 Qwen 32B distill, something like deepseek-r1:32b-qwen-distill-q4_K_M (the name on ollama), or some fine-tune of it. It'll be by far the smartest model you can run.

There are various fine-tunes that remove some of the censorship (ablated/abliterated) or are optimized for RP, which might do better for your use case. But I personally haven't used them, so I can't promise anything.
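If you end up going the ollama route, you can also talk to the local server from a small script instead of a web UI. A minimal sketch in Python (assuming ollama's default port 11434 and that the model is already pulled; I'd double-check against the ollama API docs for your version):

```python
# Minimal sketch: send one prompt to a locally running ollama server.
# Assumes ollama is serving on its default port 11434 and the model has
# already been pulled (e.g. `ollama pull deepseek-r1:32b-qwen-distill-q4_K_M`).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "deepseek-r1:32b-qwen-distill-q4_K_M"

payload = {
    "model": MODEL,
    "prompt": "Summarize what you know about me so far and suggest one question for reflection.",
    "stream": False,  # wait for the full response instead of streaming tokens
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```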

[–] Rez@sh.itjust.works 1 point 22 hours ago

Thank you so much for the suggestion! I tried Q8 of the model you mentioned, and I am very impressed with the results! The output itself was exactly what I wanted, though the speed was a little on the slower side. Loading my previous conversation with a context of over 15k tokens took about 10 minutes to produce the first response, but later messages were much faster. The web UI loses connection almost every time, though, so I just manually copy the response from the terminal window into the web UI to save it for future context. I am currently downloading the Q6 model and might experiment with going even lower for faster speeds and more stability, if the quality of the output doesn't degrade too much.
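As a workaround for the disconnects, I'm thinking about skipping the web UI and querying KoboldCpp's local API directly, so nothing gets lost when the connection drops. A rough sketch of what I have in mind (assuming KoboldCpp's default port 5001 and its standard /api/v1/generate endpoint; I haven't fully tested this, so check the API docs bundled with your build):

```python
# Rough sketch: query a running KoboldCpp instance directly and append
# the reply to a local file, so nothing is lost if the web UI drops.
# Assumes KoboldCpp's default port 5001 and its /api/v1/generate endpoint.
import json
import urllib.request

KOBOLD_URL = "http://localhost:5001/api/v1/generate"

payload = {
    "prompt": "### Instruction:\nContinue our previous conversation.\n### Response:\n",
    "max_length": 512,    # tokens to generate
    "temperature": 0.7,
}

req = urllib.request.Request(
    KOBOLD_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    text = json.loads(resp.read())["results"][0]["text"]

print(text)
with open("conversation_log.txt", "a", encoding="utf-8") as f:
    f.write(text + "\n")
```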