this post was submitted on 08 Apr 2025
166 points (98.8% liked)

Technology

you are viewing a single comment's thread
[–] brucethemoose@lemmy.world 6 points 1 week ago* (last edited 1 week ago) (1 children)

> It’s not theoretical, it’s just math. Removing 1/3 of the bus paths, and also removing the need to constantly keep RAM powered

And here’s the kicker.

You're supposing it's (given the no-refresh bonus) 1/3 as fast as DRAM, with similar latency, and cheap enough per gigabyte to replace most storage. That's a tall order, and it would be incredible if it hit all three. I find that highly improbable.

Even DRAM is starting to become a bottleneck for APUs specifically, because making the bus wide is so expensive. This applies to the very top (the MI300A) and the bottom (smartphone and laptop APUs).

Optane, for reference, was a lot slower than DRAM and a lot more expensive/less dense than flash, even with all the work Intel put into it and the buses built into the then-top-end CPUs for direct access. And they thought that was pretty good. It was good enough for a niche when used in conjunction with DRAM sticks.
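For rough context, here's where Optane sat on the latency ladder. These are commonly cited order-of-magnitude figures, not numbers from this thread, so treat them as illustrative assumptions:

```python
# Ballpark access latencies (order-of-magnitude figures, illustrative only)
latency_ns = {
    "DRAM (DDR4/DDR5)": 100,     # ~100 ns load-to-use
    "Optane DC PMem":   350,     # roughly 3-4x DRAM
    "NVMe NAND SSD": 80_000,     # ~80 us read latency
}

base = latency_ns["DRAM (DDR4/DDR5)"]
for name, ns in latency_ns.items():
    print(f"{name:18s} ~{ns:>8,} ns  ({ns / base:.1f}x DRAM)")
```

Optane's niche was exactly that middle rung: far closer to DRAM than NAND, but never close enough to replace it.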

[–] just_another_person@lemmy.world 2 points 1 week ago (1 children)

No, you misunderstood. A current standard computer is guaranteed to have at least 3 bus paths: CPU, RAM, storage.

The amount of energy required to communicate between all three parts varies, but you can be guaranteed that removing just one PLUS removing the capacitor requirement for the memory will reduce power consumption by 1/3 of whatever that total bus power consumption is. This is ignoring any other additional buses and doing the bare minimum math.
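A minimal sketch of that bare-minimum arithmetic. The per-path wattages here are invented placeholders purely to make the argument concrete; the point is only that dropping one of three comparable paths, plus the refresh cost, saves at least 1/3:

```python
# Invented, equal per-path power figures -- assumptions, not measurements,
# just to illustrate the "remove one of three paths" argument.
per_path_w = 2.0
paths = ["CPU", "RAM", "storage"]          # the three bus paths named above
bus_total_w = per_path_w * len(paths)      # 6.0 W across all paths
refresh_w = 1.5                            # DRAM refresh that nonvolatile RAM drops

after_w = bus_total_w - per_path_w         # one path removed, no refresh
saving = (bus_total_w + refresh_w - after_w) / (bus_total_w + refresh_w)
print(f"{saving:.0%} saved")               # exceeds 1/3 once refresh is counted
```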

The speed of this memory would matter less if you're also reducing the static storage requirement. The speed at which it can communicate with the CPU only is what would matter, so if you're not traversing CPU>RAM>SSD and only doing CPU>DRAM+, it's going to be more efficient.

[–] barsoap@lemm.ee 0 points 1 week ago* (last edited 1 week ago) (1 children)

PCIe 5.0 x16 can match DDR5's bandwidth; that's not the issue. The question is latency. The only reason OSes cache disk contents in memory is that SSD latency is something like at least 30x worse, and the data ends up going through the CPU either way: RAM can't talk directly to the SSD. Modern mainboards are very centralised and it's all point-to-point connections; the only bus you'll find will be talking i2c, for temperature sensors and stuff.
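A back-of-envelope check of that bandwidth claim, from spec-sheet numbers (the DDR5-4800 dual-channel configuration is an assumed example, not from the thread):

```python
# PCIe 5.0: 32 GT/s per lane, 128b/130b encoding
pcie5_lane_gtps = 32
pcie5_x16_gbs = pcie5_lane_gtps * 16 * (128 / 130) / 8   # ~63 GB/s

# DDR5-4800: 4800 MT/s over a 64-bit (8-byte) channel
ddr5_4800_chan_gbs = 4800e6 * 8 / 1e9                    # 38.4 GB/s per channel
ddr5_dual_gbs = 2 * ddr5_4800_chan_gbs                   # ~76.8 GB/s dual channel

print(f"PCIe 5.0 x16: {pcie5_x16_gbs:.0f} GB/s, "
      f"DDR5-4800 dual: {ddr5_dual_gbs:.0f} GB/s")
```

Same ballpark on throughput, which is exactly why the unanswered question is latency, not bandwidth.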

And I think it's rather suspicious that none of those articles talk about latency. Without that being at least in the ballpark of DDR5, all this is is an alternative to NAND, which is of course also a nice thing, but not a game changer.

[–] just_another_person@lemmy.world 3 points 1 week ago (1 children)

I don't even think you know what you're trying to say at this point, because it's not making sense. Think what you will, but it's obvious your conception of how computer architecture works is flawed. You'll see this memory in machines and hopefully figure it out though. Good luck 🤞

[–] barsoap@lemm.ee 2 points 1 week ago (1 children)

So... what's wrong about my characterisation of computer hardware? Do you have any issue with the claim that RAM doesn't talk directly to the SSD, but via the CPU? If yes, please show me the traces on the motherboard that enable that. Or with the importance of latency to CPU-type computations?

Or do you want to tell me how it's absolutely unsuspicious to bang out a press release in tech and talk about "speed" without distinguishing between bandwidth and latency? Where are the fucking numbers? There's no judging the tech without numbers, and them not being forward with those numbers means they're talking to investors, not techies.
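The kind of number that would settle it: average access time is dominated by latency. A sketch with assumed placeholder figures (the very numbers the press release omits) showing why a DRAM cache in front of the new medium still wins unless its latency is near DRAM's:

```python
# Average memory access time (AMAT) sketch. All latencies are assumed
# placeholders, not vendor figures.
dram_ns = 100
candidate_ns = 1000        # if the new memory were 10x DRAM latency...
miss_rate = 0.05           # ...and a DRAM cache in front of it missed 5% of the time

amat_cached = dram_ns + miss_rate * candidate_ns   # hit DRAM, occasionally miss
amat_direct = candidate_ns                         # every access pays full latency
print(amat_cached, amat_direct)
```

With those assumptions, caching in DRAM is still ~6.7x faster than going direct, i.e. the new memory replaces NAND, not RAM.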

[–] just_another_person@lemmy.world 1 points 1 week ago (1 children)

I'm not even sure where you got this RAM-talking-to-storage thing from. This is why I'm saying you don't know what you're talking about. I think your fundamental understanding of this is flawed.

[–] barsoap@lemm.ee 1 points 1 week ago (1 children)

It was you who was talking about "bus paths" and "traversing CPU>RAM>SSD". There are neither buses connected to any of those things, nor does the data ever flow like that; it always flows via the CPU.

[–] just_another_person@lemmy.world 1 points 1 week ago (1 children)

🤣 That isn't a relational diagram. I was simply pointing out the three bus paths.

[–] barsoap@lemm.ee 0 points 1 week ago

That's three devices. There are two connections between them, and they go RAM<->CPU<->SSD. The first <-> is the DRAM PHY, the second <-> is 1-4 PCIe lanes. Neither of them is a bus. There is no third connection.