this post was submitted on 12 Feb 2024
162 points (98.8% liked)

[–] cbarrick@lemmy.world 38 points 1 year ago (1 children)

After two years of development and some deliberation, AMD decided that there is no business case for running CUDA applications on AMD GPUs. One of the terms of my contract with AMD was that if AMD did not find it fit for further development, I could release it. Which brings us to today.

From https://github.com/vosen/ZLUDA?tab=readme-ov-file#faq

[–] woelkchen@lemmy.world 7 points 1 year ago (2 children)

I'm really curious who at AMD thought it to be a great idea to develop a CUDA compatibility layer but not to release it. As stated, the release was only made because AMD ended financial support.

[–] 520@kbin.social 1 points 1 year ago* (last edited 1 year ago) (1 children)

The problem is that if we make CUDA the standard, we put Nvidia in control of that standard. Nvidia could try to manipulate the situation in future versions of CUDA by reworking it to fuck with this implementation, giving AMD a shaky name in the space.

We saw this happen with Wine, where, though probably not deliberately, MS made Windows compatibility a moving and very unstable target.

That's something open source communities can tolerate, but it isn't something that will fly for official support.

[–] woelkchen@lemmy.world 1 points 1 year ago (1 children)

The problem is that if we make CUDA the standard, we put Nvidia in control of that standard. Nvidia could try to manipulate the situation in future versions of CUDA by reworking it to fuck with this implementation, giving AMD a shaky name in the space.

I get that, but why would they fund development of ZLUDA for two years?

[–] 520@kbin.social 2 points 1 year ago

Reverse engineering CUDA can bring other benefits. It allows AMD to see what Nvidia is doing right and potentially implement it in their own tech. Having not only documentation but a working implementation can work wonders in this regard.

Or maybe they did want to use it but were scared of getting SLAPPed by Nvidia, so instead let the dev open source it.

[–] acockworkorange@mander.xyz 1 points 6 months ago

Probably a way to save face and not have AMD directly do it.

[–] vexikron@lemmy.zip 24 points 1 year ago* (last edited 1 year ago)

Basically it means that AMD is now a possible contender for the rather large market of scientific researchers and private industry who run CUDA-based software for 'AI' driven development or research on huge banks of GPUs.

Probably this initial implementation still has some kinks to iron out, but it could eventually result in Nvidia not having a functional monopoly in that market.

It's also neat from a hobbyist perspective if you're looking to do some small-scale CUDA-based work along the same lines.

[–] JustUseMint@lemmy.world 14 points 1 year ago (1 children)

Another common AMD W. So glad I got away from Nvidia. This will help my local work with LLMs nicely.

[–] Newtra@pawb.social 13 points 1 year ago (1 children)

I'd say it's more like they're failing upwards. It's certainly good for AMD, but it seems like it happened in spite of their involvement, not because of it:

For reasons unknown to me, AMD decided this year to discontinue funding the effort and not release it as any software product. But the good news was that there was a clause in case of this eventuality: Janik could open-source the work if/when the contract ended.

AMD didn't want this advertised or released, and even canned this project despite it reaching better performance than the OpenCL alternative. I really don't get their thought process. It's surreal. Do they not want to support AI? Do they not like selling GPUs?

[–] woelkchen@lemmy.world 7 points 1 year ago* (last edited 1 year ago)

I really don’t get their thought process. It’s surreal.

Maybe they see it as something that would undermine their efforts in increasing ROCm/HIP adoption? (But why fund its development for two years then? I agree with you: it all seems so weird!)

[–] gregorum@lemm.ee 13 points 1 year ago* (last edited 1 year ago) (3 children)

Can someone please explain like I’m five what the meaning and impact of this will be? Past posts and comments don’t seem to be very clear. As someone who uses both Linux and macOS professionally for design, this could be a massive game changer for me.

[–] aodhsishaj@lemmy.world 22 points 1 year ago (3 children)

If you already have a cuda workflow and want to use an AMD card, you can do that with this library.
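In practice that looks something like the following (a minimal sketch: the install path and application name are hypothetical, and the exact mechanism may vary between ZLUDA releases). ZLUDA ships a replacement libcuda.so; putting it ahead of Nvidia's on the library search path lets an unmodified CUDA binary run against an AMD GPU:

```shell
# Hypothetical location of an unpacked ZLUDA release; adjust to your setup.
ZLUDA_DIR="$HOME/zluda"

# Prepending ZLUDA's directory to LD_LIBRARY_PATH makes the dynamic linker
# load ZLUDA's libcuda.so instead of Nvidia's, so the application's CUDA
# calls are translated to ROCm/HIP on the AMD GPU.
run_with_zluda() {
    LD_LIBRARY_PATH="$ZLUDA_DIR:$LD_LIBRARY_PATH" "$@"
}

# Usage (hypothetical application binary):
# run_with_zluda ./my_cuda_app
```

The key point is that the application itself is untouched; only the library it resolves at load time changes.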

[–] rar@discuss.online 10 points 1 year ago

That includes stuff like Stable Diffusion that recommended nvidia cards because it uses CUDA to accelerate image generation?

[–] You999@sh.itjust.works 2 points 1 year ago (1 children)

So does it work with off-the-shelf software, or is it something the developer has to patch in?

[–] woelkchen@lemmy.world 2 points 1 year ago

So does it work with off-the-shelf software, or is it something the developer has to patch in?

The point of a drop-in replacement is that no patching is required but in reality the software was released in incomplete form.

[–] gregorum@lemm.ee -1 points 1 year ago* (last edited 1 year ago) (1 children)

ok, I get that much. what I'd like to know, if you're willing to explain: what's it going to be like deploying this on, say, a Mac workstation? a pop_os workstation? (edit: e.g. how would I use it on macOS, and would it work with After Effects, etc.)

thanks for your time

[–] laughterlaughter@lemmy.world 2 points 1 year ago

Your question is legitimate, but chances are that you will need to find the answers yourself by reading the docs.