OpticalMoose

joined 2 years ago

I remember those days, playing games with DOS4GW or whatever it was called. Man, I don't miss that.

[–] OpticalMoose@discuss.tchncs.de 9 points 3 days ago (4 children)

Yeah, but this happens with every version. Windows 7 still had a big market share up until MSFT cut off support. Users are going to bitch and complain and talk about switching to Linux, but eventually just install the next Windows version.

I used to watch it on the Discovery Channel in the US, before that whole reality TV thing.

 

Anybody else grow up watching The Secret Life of Machines?

It was an amazing show. Tim Hunkin is still around and making YouTube videos. Sadly, Rex is no longer with us.

Unfortunately, a certain government (all three branches) could make things very difficult for AMD if they don't play ball.

[–] OpticalMoose@discuss.tchncs.de 6 points 3 days ago (1 children)

And we make it worse by saying "Just pick one. It doesn't matter what instance you're on because they're federated."

Some people are going to be very upset to find their local feed is a lot of content they don't agree with. Or when they go out into the fediverse and people automatically assume they're an A-hole because of the instance they're from. I mean, it's generally not that bad, but there are a few instances that are that bad.

And for people like me who gravitate toward smaller instances, that instance is probably gonna die. Happened to me twice already, 4 times if you count Mastodon and Peertube.

[–] OpticalMoose@discuss.tchncs.de 2 points 4 days ago (1 children)

I assume they’re mostly going to “spare change” jars in people’s homes.

I've never understood that. Even if they accumulate a fair amount, now they have to roll the coins and take them to a very unhappy bank teller, or just dump them in a Coinstar machine, which takes a percentage off the top.

I'm the exact opposite. I always try to get rid of my coins. I save a handful of quarters, nickels & dimes in my car for parking meters, but that's it. If I can't spend them, I'll just go to the credit union and deposit a $20 bill + 78¢ or whatever.

Plot twist - while I'm at the credit union, I ask if they have $2 bills. I got $80 worth last time. Now I'm the weird guy who tips in $2 bills. And people save those too, thinking they're rare or out of circulation.

ASUS' website has the 128GB version at $2,800.

 

The Ryzen AI MAX+ 395 and Ryzen AI MAX 390 are supposed to be Apple M4 and Apple M4 Pro competitors that combine high efficiency with some pretty crazy performance numbers in gaming, AI and creator workloads. That's because this Strix Halo design combines an insanely powerful CPU with a huge GPU onto one chip. The end result is something special and unique in the ROG Flow Z13 2025.

[–] OpticalMoose@discuss.tchncs.de 1 points 4 days ago (1 children)

I guess if I get REALLY bored, I might do a fresh install and load up legacy drivers just to see what the performance is like with the old cards. It would be interesting to see how they stack up to the Vega APU.

I'm not going to actually use these cards, just trying them out for the heck of it.

[–] OpticalMoose@discuss.tchncs.de 2 points 4 days ago (1 children)

I saw the SR+RoadCraft package on Steam. I might check it out.

[–] OpticalMoose@discuss.tchncs.de 6 points 4 days ago (9 children)

I'm still playing Mudrunner. Is there a big difference?

I mean I don’t really see the point here.

There isn't one. I guess I should have made that more clear. Sorry. 🫤

And I’m not sure if I’m missing something ...

Nope, just a guy with too much time on his hands. I mean, I hope someone out there found it a little informative. There are a lot of people thinking "If Ollama doesn't work then I'm out of luck." I'm just trying to let people know there are other options.

Yes, the Nvidia cards get 30+ t/s together or individually, but the point of this was to see if AMD and Nvidia could work together. Now that this works, I might actually buy an AMD GPU.

 

Yesterday I got bored and decided to try out my old GPUs with Vulkan. I had an HD 5830, GTX 460 and GTX 770 4GB lying around, so I figured "Why not".

Long story short - Vulkan didn't recognize them, hell, Linux didn't even recognize them. They didn't show up in nvtop, nvidia-smi or anything. I didn't think to check dmesg.

Honestly, I thought the 770 would work; it hasn't been in legacy status that long. It might work with an older Nvidia driver version (I'm on 550 now) but I'm not messing with that stuff just because I'm bored.

So for now the oldest GPUs I can get running are a Ryzen 5700G APU and a 1080 Ti. Both Vega and Pascal came out in early 2017, according to Wikipedia. Anyone disappointed that their RX 500 or RX 5000 series card doesn't work in Ollama should give llama.cpp's Vulkan backend a shot. Kobold has a Vulkan option too.
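If anyone wants to sanity-check what the Vulkan loader actually sees before blaming llama.cpp, here's a quick Python sketch. Hedge: it assumes `vulkaninfo` (from the vulkan-tools package) is installed, and the `deviceName = ...` line format is what its `--summary` output looks like on my distro, so treat the parsing as an assumption.

```python
# Quick check: which GPUs does the Vulkan loader actually see?
# Assumes `vulkaninfo` (vulkan-tools package) is installed; the
# `deviceName = ...` line format comes from its --summary output.
import re
import subprocess

def parse_device_names(summary: str) -> list[str]:
    # Each GPU in `vulkaninfo --summary` shows up as `deviceName = <name>`
    return [m.strip() for m in re.findall(r"deviceName\s*=\s*(.+)", summary)]

if __name__ == "__main__":
    try:
        out = subprocess.run(["vulkaninfo", "--summary"],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        out = ""  # vulkan-tools not installed
    names = parse_device_names(out)
    if names:
        for n in names:
            print("Vulkan sees:", n)
    else:
        print("No Vulkan devices found; llama.cpp's Vulkan backend won't see them either.")
```

If a card doesn't show up here, no inference backend built on Vulkan is going to find it either, which would have saved me some head-scratching with the 770.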

The 5700G works fine alongside Nvidia GPUs in Vulkan. The performance is what you'd expect from an APU, but at least it works. Now I'm tempted to buy a 7600 XT just to see how it does.

Has anyone else out there tried Vulkan?

 

Well, it was nice ... having hope, I mean. That was a good feeling.

 


So here's the way I see it: with data center profits being the way they are, I don't think Nvidia's going to do us any favors with GPU pricing next generation. And apparently, the new rule is that Nvidia cards exist to bring AMD prices up.

So here's my plan, starting with my current system:

OS: Linux Mint 21.2 x86_64  
CPU: AMD Ryzen 7 5700G with Radeon Graphics (16) @ 4.673GHz  
GPU: NVIDIA GeForce RTX 3060 Lite Hash Rate  
GPU: AMD ATI 0b:00.0 Cezanne  
GPU: NVIDIA GeForce GTX 1080 Ti  
Memory: 4646MiB / 31374MiB

I think I'm better off just buying another 3060, or maybe a 4060 Ti 16GB. To be nitpicky, I can get three 3060s for the price of two 4060 Tis and end up with more VRAM plus a wider memory bus. The 4060 Ti is probably better in the long run; it's just so damn expensive for what you're actually getting. The 3060 really is the working man's compute card. It needs to be on an all-time-greats list.
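For what it's worth, here's the back-of-the-envelope math in Python. The prices are my own rough assumptions at time of writing, not quotes; the VRAM and bus widths are the cards' real specs.

```python
# Back-of-the-envelope: VRAM per dollar for the two candidates.
# Prices here are rough assumptions, not quotes; adjust to taste.
def gb_per_100_dollars(vram_gb: int, price_usd: int) -> float:
    return vram_gb / price_usd * 100

cards = {
    "RTX 3060 12GB (192-bit bus)":    (12, 280),
    "RTX 4060 Ti 16GB (128-bit bus)": (16, 450),
}
for name, (vram, price) in cards.items():
    print(f"{name}: {gb_per_100_dollars(vram, price):.1f} GB per $100")

# Three 3060s: ~$840 for 36GB total; two 4060 Tis: ~$900 for 32GB.
```

At those (assumed) prices, the 3060 wins on GB per dollar and total VRAM, which is the whole argument above in two lines of arithmetic.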

My limitations are that I don't have room for full-length cards (a 1080 Ti, at 267mm, just barely fits), and I don't want the cursed power connector. I also don't really want to buy used, because I've lost all faith in humanity and trust in my fellow man, but I realize that's more of a "me" problem.

Plus, I'm sure that used P40s and P100s are a great value as far as VRAM goes, but how long are they going to last? I've been using GPGPU since the early days of LuxRender OpenCL and Daz Studio Iray, so I know that sinking feeling when older CUDA versions get dropped from support and my GPU becomes a paperweight. Maxwell is already deprecated, so Pascal's days are definitely numbered.

On the CPU side, I'm upgrading to whatever they announce for Ryzen 9000, plus a ton of RAM. Hopefully they'll have some models without NPUs; I don't think I'll need them. As far as what I'm running, it's Ollama and Oobabooga, mostly models 32GB and lower. My goal is to run Mixtral 8x22B, but I'll probably have to run it at a lower quant, maybe one of the 40 or 50GB versions.
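Rough quant math, for anyone curious how those file sizes fall out. This is a sketch: the bits-per-weight figures are ballpark approximations for llama.cpp-style k-quants (real GGUF files have overhead and mixed layer precision), and the 141B total parameter count for Mixtral 8x22B is the commonly cited figure.

```python
# Rough GGUF size estimate: total params (billions) * effective
# bits-per-weight / 8 = size in GB. Ignores file overhead, so treat
# the bits-per-weight numbers as ballpark approximations only.
def quant_size_gb(params_b: float, bits_per_weight: float) -> float:
    return params_b * bits_per_weight / 8

MIXTRAL_8X22B = 141  # ~141B total parameters
for quant, bpw in [("~2.6 bpw (Q2-class)", 2.6),
                   ("~3.5 bpw (Q3-class)", 3.5),
                   ("~4.8 bpw (Q4-class)", 4.8)]:
    print(f"{quant}: ~{quant_size_gb(MIXTRAL_8X22B, bpw):.0f} GB")
```

By this estimate, the 2-bit-class quants land in the mid-40GB range, which is how you end up shopping for "40 or 50GB versions" of a 141B model.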

My budget: Less than Threadripper level.

Thanks for listening to my insane ramblings. Any thoughts?

 

I had to take my GPU out to do some troubleshooting, so I figured why not try some games on the old Ryzen 5700G. Ray-traced Quake wasn't exactly playable at 3 fps, but I'm impressed that it could load and display correctly.

Other games I tried: Portal RTX wouldn't start at all. Spider-Man Remastered did start, but I couldn't get past the load menu (not related to the Ryzen APU). Most of my library is 10+ years old, so pretty much everything else runs fine on the APU.

 

A place for everything and everything in its place.

 

It's the first of 4 dams to be removed along the Klamath River by the end of 2024. The upper basin hasn't had salmon in over 100 years, and scientists are releasing some there as a test run.

 

Hartford is credited as creator of Dolphin-Mistral, Dolphin-Mixtral and lots of other stuff.

He's done a huge amount of work on uncensored models.

 

An update to this post https://beehaw.org/post/6717143

 

This is an interesting demo, but it has some drawbacks I can already see:

  • It's Windows only (maybe Win11 only, the documentation isn't clear)
  • It only works with RTX 30 series and up
  • It's closed source, so you have no idea if they're uploading your data somewhere

The concept is great, having an LLM to sort through your local files and help you find stuff, but it seems really limited.

I think you could get the same functionality (and more) by writing an API for text-gen-webui.

more info here: https://videocardz.com/newz/nvidia-unveils-chat-with-rtx-ai-chatbot-powered-locally-by-geforce-rtx-30-40-gpus
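For the curious, here's a rough Python sketch of the idea. Hedges: it assumes text-gen-webui was started with the `--api` flag, so its OpenAI-compatible endpoint is listening on the default port 5000; the prompt layout, the `build_prompt`/`ask` helper names, and the stuff-files-into-context approach are all my own improvisation, not anything from the project.

```python
# Sketch of a local "chat with your files" loop against text-gen-webui.
# Assumes the webui was launched with --api (OpenAI-compatible endpoint
# on port 5000 by default); prompt layout is improvised, not official.
import json
import urllib.request
from pathlib import Path

API_URL = "http://127.0.0.1:5000/v1/chat/completions"  # assumed default

def build_prompt(question: str, files: dict[str, str]) -> list[dict]:
    # Stuff (truncated) file excerpts into the system message as context.
    context = "\n\n".join(f"--- {name} ---\n{text[:2000]}"
                          for name, text in files.items())
    return [
        {"role": "system",
         "content": "Answer using only the files below.\n" + context},
        {"role": "user", "content": question},
    ]

def ask(question: str, folder: str) -> str:
    # Read the .txt files in a folder and ask the local model about them.
    files = {p.name: p.read_text(errors="ignore")
             for p in Path(folder).glob("*.txt")}
    payload = json.dumps({"messages": build_prompt(question, files),
                          "max_tokens": 300}).encode()
    req = urllib.request.Request(
        API_URL, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Nothing leaves your machine, it isn't locked to RTX 30-series cards, and you can swap in whatever model you like, which is exactly the part of the Nvidia demo that feels limited.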
