this post was submitted on 26 Jul 2023 to LocalLLaMA

For example, does a 13B-parameter model at 2_K quantization perform worse than a 7B-parameter model at 8-bit or 16-bit?
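For scale, here is a back-of-envelope size comparison of those options; the bits-per-weight figures (especially ~2.6 bpw for 2_K-style quantization) are rough assumptions, and the function ignores overhead like quantization scales and the KV cache:

```python
# Back-of-envelope model size from parameter count and bits per weight.
# The bits-per-weight figures below are rough assumptions, not exact values.

def approx_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in gigabytes (ignores scales, KV cache, etc.)."""
    return n_params * bits_per_weight / 8 / 1e9

print(f"13B @ ~2.6 bpw (2_K-style): {approx_size_gb(13e9, 2.6):.1f} GB")  # ~4.2 GB
print(f" 7B @ 8-bit:                {approx_size_gb(7e9, 8.0):.1f} GB")   # ~7.0 GB
print(f" 7B @ 16-bit:               {approx_size_gb(7e9, 16.0):.1f} GB")  # ~14.0 GB
```

By that arithmetic, the 2_K 13B model is actually the smallest of the three, which is what makes the quality comparison in the question interesting.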

noneabove1182@sh.itjust.works 2 points 2 years ago

These are good sources; to add one more, the GPTQ paper discusses perplexity in depth across several quantization levels and model sizes:

https://arxiv.org/abs/2210.17323
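For context, perplexity is the metric that paper reports, and you can measure it yourself. Below is a minimal sketch of the standard sliding-window perplexity computation using Hugging Face transformers; the model name and the evaluation text file are placeholder assumptions, so substitute the quantized model and corpus you actually want to test:

```python
# A sketch of sliding-window perplexity measurement with transformers.
# Model name and eval corpus are placeholder assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-125m"  # stand-in: swap in the model you want to evaluate
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

text = open("wikitext2_test.txt").read()  # assumed local copy of an eval corpus
encodings = tokenizer(text, return_tensors="pt")

max_length, stride = 1024, 512  # context window depends on the model
seq_len = encodings.input_ids.size(1)

nlls, prev_end = [], 0
for begin in range(0, seq_len, stride):
    end = min(begin + max_length, seq_len)
    trg_len = end - prev_end              # score only tokens not already scored
    input_ids = encodings.input_ids[:, begin:end]
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100       # mask the overlapping context from the loss
    with torch.no_grad():
        loss = model(input_ids, labels=target_ids).loss
    nlls.append(loss * trg_len)
    prev_end = end
    if end == seq_len:
        break

ppl = torch.exp(torch.stack(nlls).sum() / prev_end)
print(f"perplexity: {ppl.item():.2f}")
```

Lower is better; running this on the same corpus for, say, a 2_K 13B model and an 8-bit 7B model gives you the kind of head-to-head comparison the question asks about.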