LocalLLaMA

Trying something new: I'm going to pin this thread as a place for beginners to ask what may or may not be stupid questions, to encourage both the asking and the answering.

Depending on activity level, I'll either make a new one once in a while, or I'll just leave this one up forever as a place to learn and ask.

When asking a question, try to make it clear what your current knowledge level is and where you may have gaps; that should help people provide more useful, concise answers!

[–] corvus@lemmy.ml 2 points 2 days ago (1 children)

I get an error when offloading the whole model to GPU

./build/bin/llama-cli -m ~/software/ai/models/deepseek-math-7b-instruct.Q8_0.gguf -n 200 -t 10 -ngl 31 -if

The relevant output is:

....

llama_model_load_from_file_impl: using device Vulkan0 (Intel(R) Iris(R) Xe Graphics (RPL-U)) - 7759 MiB free

...

print_info: file size = 6.84 GiB (8.50 BPW)

....

load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 30 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 31/31 layers to GPU
load_tensors: Vulkan0 model buffer size = 6577.83 MiB
load_tensors: CPU_Mapped model buffer size = 425.00 MiB

.....

ggml_vulkan: Device memory allocation of size 2013265920 failed
ggml_vulkan: vk::Device::allocateMemory: ErrorOutOfDeviceMemory
llama_kv_cache_init: failed to allocate buffer for kv cache
llama_init_from_model: llama_kv_cache_init() failed for self-attention cache
common_init_from_params: failed to create context with model '~/software/ai/models/deepseek-math-7b-instruct.Q8_0.gguf'
main: error: unable to load model

It seems to me that there is enough room for the model, but I don't know what "Device memory allocation of size 2013265920" means.

[–] hendrik@palaver.p3x.de 2 points 2 days ago* (last edited 2 days ago) (1 children)

I suppose that line means llama.cpp tried to allocate another chunk of memory, roughly 2GB, and that failed because there wasn't any device memory left. I'm not sure about the details; maybe it's the KV cache and the additional buffers needed for the computation, on top of the model itself? Have you tried lowering the number of layers you offload to the iGPU to see if that works? Lowering the value to something like -ngl 20 might leave additional space for those other things.
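If it helps, here's a rough back-of-the-envelope for where that ~2GB number could come from. The layer count is from your log; the default 4096-token context, the f16 KV cache and the 4096-wide attention are assumptions on my part, so treat it as a sketch rather than a definitive breakdown:

```python
# Sketch: estimate the KV cache size llama.cpp tries to allocate on the iGPU.
# Assumptions (not from the log): default 4096-token context, f16 cache,
# 4096-dim K/V per token (i.e. no grouped-query attention) for this model.
n_layer   = 30      # "offloading 30 repeating layers" in your log
n_ctx     = 4096    # llama.cpp default context length (assumption)
n_embd_kv = 4096    # width of K and V per token (assumption)
bytes_f16 = 2       # bytes per value in an f16 cache

kv_cache_bytes = 2 * n_layer * n_ctx * n_embd_kv * bytes_f16   # K and V
print(kv_cache_bytes)                   # 2013265920 -- matches the failed allocation
print(kv_cache_bytes / 2**20, "MiB")    # 1920.0 MiB

model_buf_mib = 6577.83                 # Vulkan0 model buffer from load_tensors
free_vram_mib = 7759                    # free memory reported for Vulkan0
print(model_buf_mib + kv_cache_bytes / 2**20)   # ~8497.8 MiB, more than 7759 MiB free
```

If those assumptions hold, the model buffer plus the KV cache alone already overshoot what the Vulkan device reports as free, before any compute buffers are counted, which would explain the failure.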

[–] corvus@lemmy.ml 2 points 1 day ago (1 children)

Yeah, I tested with lower numbers and it works. I just wanted to offload the whole model, thinking it would fit; 2GB is a lot. With other models it prints about 250MB when it fails, and if you add that to the model size it's still well below the iGPU's free memory, so I don't get it... Anyway, I was thinking about upgrading the RAM to 32GB or maybe 64GB, but I hesitate: with models around 7GB on CPU only I get around 5 t/s, and with 14GB models around 2-3 t/s, so with one of around 30GB I guess I'd get around 1 t/s? My assumption is that increasing RAM doesn't improve performance per se, it just lets you load bigger models into memory, so generation speed drops roughly in proportion to model size... what do you think?

[–] hendrik@palaver.p3x.de 2 points 1 day ago* (last edited 1 day ago) (1 children)

From what I know, I'd assume yes: the relation between model size and speed should be roughly linear, i.e. tokens per second drop about as much as the model size grows. Maybe there is some small additional overhead making it a bit faster or slower than expected. But I'm really not an expert on the maths, so don't trust me.
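One way to sanity-check that: if CPU generation is mostly limited by memory bandwidth (an assumption, but a common rule of thumb), every generated token has to stream roughly the whole model from RAM once, so tokens per second should fall off about inversely with model size. Plugging in the numbers you reported:

```python
# Sketch under the assumption that CPU generation is memory-bandwidth bound.
# Model sizes (GB) and speeds (tokens/s) are the ones reported above;
# 2.5 t/s is the midpoint of the 2-3 t/s range.
measurements = {7.0: 5.0, 14.0: 2.5}   # model size (GB) -> tokens per second

# If each token reads the whole model once, effective bandwidth ~ size * tokens/s.
for size_gb, tps in measurements.items():
    print(f"{size_gb:>4.0f} GB model: ~{size_gb * tps:.0f} GB/s effective bandwidth")

# Extrapolate to a ~30 GB model with the same ~35 GB/s effective bandwidth:
bandwidth_gbs = 35.0
print(f"  30 GB model: ~{bandwidth_gbs / 30:.1f} t/s expected")   # roughly 1.2 t/s
```

By that estimate, more RAM mainly buys you room for bigger models rather than speed, and a ~30GB model would land around 1 t/s on your machine, which matches your guess.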

And maybe have a look at this bug report: https://github.com/ggml-org/llama.cpp/issues/11332
I think it matches your situation. They resolve it by tweaking the batch size, and someone there recommends not using Vulkan on an iGPU at all.

[–] corvus@lemmy.ml 1 points 1 day ago

Oh great, thanks