Hardware


This is a community dedicated to the hardware aspect of technology, from PC parts, to gadgets, to servers, to industrial control equipment, to semiconductors.

Upgrading a server for the first time in 10 years, so I’m a little out of the loop. I was surprised to find that the RAM I bought didn’t fit.

This is my first time dabbling in ECC RAM, so I figured there was some minor detail I missed when purchasing, but I eventually came across the data sheet for this stick, and the dimensions given don’t match the measurements I’m making. The tip of the caliper should be in the middle of the notch at 68.1mm.

What’s more, the dimensions in the data sheet seem to match the dimensions on my motherboard. What’s going on here?

[SOLVED] I and Kingston are morons. I ordered RDIMM instead of UDIMM. The Kingston datasheet gives the wrong dimensions.
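A quick way to catch this kind of mismatch before seating a module is to compare the caliper reading against the datasheet figure with a small tolerance. A minimal sketch, using the 68.1mm notch position quoted above; the measured value and tolerance below are placeholders, not taken from any datasheet:

```python
# Sanity-check a DIMM measurement against a datasheet value.
# 68.1 mm is the notch position from the post; the measured reading
# and tolerance are hypothetical -- check your module's own datasheet.

def notch_matches(measured_mm: float, datasheet_mm: float, tol_mm: float = 0.5) -> bool:
    """True if the measured notch centre is within tolerance of the datasheet value."""
    return abs(measured_mm - datasheet_mm) <= tol_mm

UDIMM_NOTCH_MM = 68.1   # from the datasheet cited in the post
measured = 66.6         # example caliper reading (hypothetical)

print(notch_matches(measured, UDIMM_NOTCH_MM))  # False -> wrong module type or wrong datasheet
```

Had this check been run against the correct (UDIMM vs RDIMM) datasheet, the mismatch would have shown up before ordering.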

 
 

I'm trying to figure out how to get an eGPU for my Surface Pro 9. I thought any eGPU would work, but I'm coming across a lot of eGPUs that seem to be made for Mac. Is this just a case of specifying that something is also compatible with Mac, or are some Thunderbolt-based eGPUs for some reason only Mac-compatible?

Thanks

 
 

cross-posted from: https://lemmy.ml/post/25262591

RISC-V is a relatively young and open source instruction set. So far, it has gained traction in microcontrollers and academic applications. For example, Nvidia replaced the Falcon microcontrollers found in their GPUs with RISC-V based ones. Numerous university projects have used RISC-V as well, like Berkeley’s BOOM. However, moving RISC-V into more consumer-visible, higher performance applications will be an arduous process. SiFive plays a key role in pushing RISC-V CPUs toward higher performance targets, and occupies a position analogous to that of Arm (the company). Arm and SiFive both design and license out IP blocks. The task of creating a complete chip is left to implementers.

By designing CPU blocks, both SiFive and Arm can lower the cost of entry to building higher performance designs in their respective ISA ecosystems. To make that happen within the RISC-V ecosystem though, SiFive needs to develop strong CPU cores. Here, I’ll take a look at SiFive’s P550. This core aims for “30% higher performance in less than half the area of a comparable Arm Cortex A75.”

Just as with Arm’s cores, the P550’s performance will depend heavily on how it’s implemented. For this article, I’m testing the P550 as implemented in the Eswin EIC7700X SoC. This SoC has a 1.4 GHz, quad-core P550 cluster with 4 MB of shared cache. The EIC7700X is manufactured on TSMC’s 12nm FFC process. The SiFive Premier P550 Dev Board that hosts the SoC has 16 GB of LPDDR5-6400 memory. For context, I’ve gathered some comparison data from the Qualcomm Snapdragon 670 in a Pixel 3a. The Snapdragon 670 has a dual-core Arm Cortex A75 cluster running at 2 GHz.
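For anyone benchmarking one of these boards themselves, a useful first sanity check is what the kernel reports for each hart. A small sketch, assuming the usual RISC-V `/proc/cpuinfo` layout (one `isa` line per hart); the sample text is illustrative, not captured from an EIC7700X:

```python
# Parse a RISC-V /proc/cpuinfo dump into one dict per hart.
# Field names follow the common RISC-V Linux layout; exact
# contents vary by kernel version and board.

def parse_cpuinfo(text: str) -> list[dict]:
    """Split /proc/cpuinfo-style text into one dict per hart block."""
    harts = []
    for block in text.strip().split("\n\n"):
        entry = {}
        for line in block.splitlines():
            key, _, value = line.partition(":")
            entry[key.strip()] = value.strip()
        harts.append(entry)
    return harts

# Illustrative sample, not real EIC7700X output.
sample = """processor\t: 0
hart\t: 0
isa\t: rv64imafdc
mmu\t: sv39"""

for hart in parse_cpuinfo(sample):
    print(hart.get("isa"))  # rv64imafdc
```

In practice you would read the real file with `open("/proc/cpuinfo").read()` on the board itself.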


Many people seem to be excited about Nvidia's new line of GPUs, which is reasonable, since at CES they really made it seem like these new bois are insane for their price.

Jensen (the CEO guy) said that with the power of AI, the 5070, at a sub-$600 price, is in the same class as the 4090, which sits above the $1500 price point.

Here's my idea: they talk a lot about upscaling, generating frames and pixels, and so on. I think what they mean by both having similar performance is that the 4090 with no AI upscaling and such achieves similar performance to the 5070 with DLSS and whatever else.

So yes, for pure "gaming" performance, in games that support it, the GPUs will deliver similar performance. But there will be artifacts.
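The arithmetic behind that reading of the claim is simple: frame generation multiplies displayed frames, not rendered ones. A rough sketch with made-up numbers (none of these are measured figures):

```python
# Back-of-envelope for the "5070 == 4090" claim: with multi frame
# generation, each rendered frame is followed by several AI-generated
# ones, so the displayed frame rate is a multiple of the rendered one.

def displayed_fps(rendered_fps: float, generated_per_rendered: int) -> float:
    """Frames shown per second when frame generation inserts extra frames."""
    return rendered_fps * (1 + generated_per_rendered)

raw_5070 = 40.0  # hypothetical rendered FPS without frame generation
print(displayed_fps(raw_5070, 3))  # 160.0 -- 4x mode: one rendered + three generated
```

On this reading, a 5070 rendering 40 FPS natively could show a counter matching a 4090 rendering 160 FPS natively, even though only a quarter of those frames were actually rendered.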

For ANYTHING besides these "gaming" usecases, it will probably be closer to the 4080 or whatever (idk GPU naming..).

So if you care about inference, blender or literally anything not-gaming: you probably shouldn't care about this.

i'm totally up for counter arguments. maybe i'm missing something here, maybe i'm being a dumdum <3

imma wait for amd to announce their stuffs and just get the top one, for the open drivers. not an nvidia person myself, but their research seems spicy. currently still slobbing along with a 1060 6GB

 
 

cross-posted from: https://lemmy.ml/post/24419043

cross-posted from: https://lemmy.ml/post/24419041

RISC-V laptops offer customizable and affordable personal computing with their open-source instruction set architecture. Early versions have demonstrated their potential, but lagged in performance. But in 2025, Framework and DeepComputing are partnering to make the best RISC-V laptop yet, promising an alternative to laptops powered by x86 and Arm.

 
 

cross-posted from: https://lemdro.id/post/16463131

I'm in the market for a new laptop and I've been checking out the Lenovo Slim 7i (14” Intel) Gen 9, which comes with an Intel Core Ultra 7 155H, 32GB of soldered RAM, and a 1920x1200 60Hz OLED display. This setup is priced just over a grand (USD).

More details can be found here: lenovo.com/ca

 
 

I just love working with computers and I have a bit of money. Problem is that I don't have a job, and this hobby is quite expensive. So, I thought of perhaps trying to make at least a little bit of money from it by buying old and broken laptops, repairing them, and then reselling them. Perhaps if I get a laptop that's compatible with Libreboot, such as the Lenovo ThinkPad T480, I could also flash Libreboot onto it.

Nevertheless, I'm also planning to get a real job soon.

submitted 2 months ago* (last edited 2 months ago) by overgrown@lemmings.world to c/hardware@lemmy.ml
 
 

I was looking for a new laptop for my personal use. I shortlisted the Lenovo Yoga Pro 7 with AMD's Ryzen AI 9 365. Then I was searching around and found they released a new Ryzen 9000 series lineup just a month after the AI 300 series launch.

I am confused here. So confused that I am debating whether to buy a processor with AI jargon in its name.

Will there be good Linux support for these NPU-enabled laptops, or should I go ahead and buy a ThinkPad P14s with a Ryzen 8840HS inside? Both are similar in price, and the only thing keeping me from buying the P14s is its 60Hz panel (no OLED 120Hz display where I live).

I use Gnome on EndeavourOS.

 
 

cross-posted from: https://lemmy.ml/post/22205865

Ahead of tomorrow's availability of the AMD Ryzen 7 9800X3D processor as the first Zen 5 CPU released with 3D V-Cache, today the review embargo lifts. Here is a look at how this 8-core / 16-thread Zen 5 CPU with 64MB of 3D V-Cache is performing under Ubuntu Linux compared to a variety of other Intel Core and AMD Ryzen desktop processors.

The AMD Ryzen 7 9800X3D as previously shared is AMD's first processor leveraging 2nd Gen 3D V-Cache. The 64MB of cache is now underneath the processor cores so that the CCD is positioned closer to the heatsink/cooler to help with more efficient cooling compared to earlier X3D models.

The AMD Ryzen 7 9800X3D boosts up to 5.2GHz, has a 4.7GHz base clock, and provides 104MB of cache in total. Like the prior 8-core Ryzen 7 7800X3D, all eight cores have access to the 64MB of 3D V-Cache. The Ryzen 7 9800X3D has a 120 Watt default TDP. AMD's suggested pricing on the Ryzen 7 9800X3D is $479 USD.
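The 104MB figure is just the per-core L2 added to the two L3 slices:

```python
# Where the 104 MB total cache figure comes from on the 9800X3D.
cores = 8
l2_per_core_mb = 1   # Zen 5 L2 per core
l3_on_die_mb = 32    # the CCD's native L3
v_cache_mb = 64      # stacked 3D V-Cache

total_mb = cores * l2_per_core_mb + l3_on_die_mb + v_cache_mb
print(total_mb)  # 104
```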

The AMD Ryzen 7 9800X3D will work with existing AMD AM5 motherboards after a simple BIOS update. For my testing I used the ASUS ROG STRIX X670E-E GAMING WIFI motherboard previously used for all Ryzen 9000 series testing. AMD also sent out an ASRock X870E Taichi motherboard as part of the review kit. For these 9800X3D benchmarks I first tested on the ASUS ROG STRIX X670E-E GAMING WIFI to match the previously tested Ryzen 9000 series processors, then repeated the run on the ASRock X870E Taichi for reference.
