this post was submitted on 10 Oct 2025
110 points (99.1% liked)

all 23 comments
[–] Butterbee@beehaw.org 83 points 1 week ago (1 children)

Now it's really in the cloud!

[–] doeknius_gloek@discuss.tchncs.de 41 points 1 week ago (3 children)

> Allegedly, backups simply couldn't be kept, due to the G-Drive system's massive capacity.

X Doubt. Things like S3 can also store massive amounts of data and still support backups, or at least geo-replication. It's probably just a matter of cost.
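
For illustration, a minimal boto3 sketch of what geo-replication looks like on S3; the bucket names and IAM role ARN are hypothetical placeholders, not anything from the article:

```python
# Minimal sketch: enabling S3 cross-region replication with boto3.
# Bucket names and the IAM role ARN are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="gov-archive-primary",  # hypothetical source bucket (versioning must be enabled)
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/replication-role",  # placeholder IAM role
        "Rules": [
            {
                "ID": "replicate-everything",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {},  # empty filter = replicate all objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    # hypothetical bucket in another region, i.e. another building
                    "Bucket": "arn:aws:s3:::gov-archive-replica"
                },
            }
        ],
    },
)
```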

> But it gets worse. It turns out that before the fire, the Ministry of the Interior and Safety had apparently instructed government employees to store everything in the G-Drive cloud and not on their office PCs.

Which is totally fine and reasonable? The problem isn't the order to use the centralized cloud system, but that the system wasn't sufficiently protected against data loss.

[–] GenderNeutralBro@lemmy.sdf.org 44 points 1 week ago (2 children)

If you can't afford backups, you can't afford storage. Anyone competent would factor that in from the early planning stages of a PB-scale storage system.

Going into production without backups? For YEARS? It's so mind-bogglingly incompetent that I wonder if the whole thing was a long-term conspiracy to destroy evidence or something.

[–] dalekcaan@feddit.nl 17 points 1 week ago

A conspiracy is always possible, of course, but people really do tend to put off what isn't an immediate problem until it becomes a disaster.

Fukushima springs to mind. The plant's operators had been warned more than a decade before the disaster that an earthquake in the wrong place would result in catastrophe, did nothing about it, and lo and behold...

[–] Dragonstaff@leminal.space 4 points 1 week ago (1 children)

I was just thinking that incompetence on this scale is likely deliberate.

Either some manager refused to pay for backups and they're too highly placed to hold accountable, or they deliberately wanted to lose some data, but I refuse to believe anyone built this system without even considering off-site backups.

[–] locuester@lemmy.zip 3 points 1 week ago

Pretty sure this guy was in charge. Feels like simple incompetence and bad luck.

[–] Damage@feddit.it 5 points 1 week ago

It would be useful to know what percentage of the total storage these 858 TB represent, because that is practically nothing nowadays.

[–] Clent@lemmy.dbzer0.com 3 points 1 week ago

Tape backup is still a thing.
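
For scale, a back-of-the-envelope sketch, assuming LTO-9's 18 TB native (uncompressed) capacity per cartridge:

```python
# Rough sketch: how many LTO-9 tapes would a full copy of 858 TB need?
# Assumes 18 TB native capacity per cartridge (more with compression).
import math

data_tb = 858
tape_tb = 18  # LTO-9 native capacity

tapes = math.ceil(data_tb / tape_tb)
print(f"{tapes} tapes")  # -> 48 tapes
```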

[–] XenGi@feddit.org 26 points 1 week ago (1 children)

Why didn't they have an off-site backup if the data was that valuable?

[–] theangriestbird@beehaw.org 3 points 1 week ago (2 children)

that's gonna be an expensive Backblaze account

[–] noxypaws@pawb.social 15 points 1 week ago (1 children)

yes it's expensive. it's also a basic requirement.

[–] theangriestbird@beehaw.org 3 points 1 week ago

maybe they can do what I do: run Windows on the server so you can tell Backblaze it's a personal computer. Then you only pay the flat rate for the one machine.

And now it's an expensive blaze lol

[–] belated_frog_pants@beehaw.org 24 points 1 week ago (1 children)

858 TB not backed up? That's paltry in data center terms. 86 x 10 TB hard drives? Two of those 45-drive rack mounts' worth of backup...

This was bad data architecture.
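
Sanity-checking that arithmetic (drive size and chassis capacity as assumed in the comment above):

```python
# Back-of-the-envelope check of the drive and chassis counts.
import math

data_tb = 858
drive_tb = 10        # assumed 10 TB drives, as in the comment
chassis_slots = 45   # e.g. a 45-bay top-loader chassis

drives = math.ceil(data_tb / drive_tb)       # -> 86 drives
chassis = math.ceil(drives / chassis_slots)  # -> 2 chassis
print(drives, chassis)
```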

[–] circuscritic@lemmy.ca 12 points 1 week ago* (last edited 1 week ago) (2 children)

There were backups!

Unfortunately, they were located three rows and two racks down from their source.

In their defense, they clearly never thought the building would burn down.

And let's be fair to them, who's even heard of lithium batteries catching fire?

This was a once-in-a-millennium accident, something you can't anticipate, and therefore can't plan for.

Unless you're talking about off-site backups. Then maybe they could have planned for that.

But who am I to judge?

[–] N0x0n@lemmy.ml 5 points 1 week ago (2 children)

> And let's be fair to them, who's even heard of lithium batteries catching fire?

Euhhhh... You sure about that? Maybe you're being sarcastic? But lithium batteries in buses catching fire is not something unheard of! Even a few Teslas that burned to the ground went viral in the news!

[–] Clent@lemmy.dbzer0.com 2 points 1 week ago

> Teslas burned to the ground

All user error. Musk said so. Trust.

Everyone knows lithium is a mood stabilizer, same for batteries.

[–] SteevyT@beehaw.org 4 points 1 week ago

At least the front didn't fall off.

[–] irotsoma@piefed.blahaj.zone 8 points 1 week ago

All valuable data should be backed up off-site in "cold storage" type services. It's not that expensive compared to the production storage.
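
As a rough ballpark, assuming about $1 per TB-month, in line with published archive-tier pricing (e.g. S3 Glacier Deep Archive at ~$0.00099/GB-month):

```python
# Ballpark sketch of off-site cold-storage cost for 858 TB.
# The $1/TB-month rate is an assumption; actual pricing varies
# by provider, region, and retrieval patterns.
data_tb = 858
price_per_tb_month = 1.0

monthly = data_tb * price_per_tb_month
print(f"~${monthly:,.0f}/month, ~${monthly * 12:,.0f}/year")
```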

[–] Trainguyrom@reddthat.com 3 points 1 week ago

Something something the cloud is not a backup