this post was submitted on 28 Aug 2025
618 points (99.8% liked)

Technology

[–] Zwrt@lemmy.sdf.org 46 points 4 days ago (2 children)

For what crime?

Describing the desire to commit a crime? Not illegal

Writing fantasy about a world beyond the restrictions of current day moral laws? Not illegal

Jokingly telling AI you did a crime to see how it would react? Not illegal.

Is this the same guy that said he wanted us all to have personal, unrestricted models at some point?

[–] Deflated0ne@lemmy.world 16 points 4 days ago

Where you been?

Thought crime is a thing now. Look into what Peter Thiel is doing.

[–] ArmchairAce1944@discuss.online 16 points 4 days ago (7 children)

They don't care. They want people to be in line. Remember that tech bastard who said he wants to make AI pervasive and to force people to be on their best behavior? You think he's talking about jaywalking and shoplifting? He is talking about political or economic protests and advocacy. They don't give a fuck if murderers and armed robbers get away with their shit.

Look at what is happening in the UK. There are hundreds of arrests under their new law, and basically all of them are activists or people sharing information about the genocide in Gaza. They then took fingerprints and DNA samples from them to enter into a database, and locked them up in jail for days.

Count Dankula, a fascist YouTuber in the UK, only got a fine when he taught his dog to do a Nazi salute and posted the video online. He never had to give fingerprints or DNA and never spent any time in a cell.

Let that sink in.

Count Dankula, a fascist YouTuber in the UK...

Wait until you see what the fascists will do once they're in power, because they ain't gonna abolish the OSA or delist Palestine Action from the proscribed organisations list, that's for sure. Reform UK are saying "we're against the OSA" while at the same time banning books from libraries in council areas they control.

We are fucked five ways from sideways, the previous fuckers fucked things up, the current fuckers are fucking things up, the next fuckers are gonna fuck things up even more.

[–] Zwrt@lemmy.sdf.org 4 points 4 days ago

Let that sink in.

Please no, i am already drowning in it.

[–] eve@evenyc.com 25 points 4 days ago* (last edited 4 days ago) (2 children)

There is no such thing as privacy online. Assume everything you type is seen or will be seen.

[–] chiliedogg@lemmy.world 11 points 4 days ago (1 children)

And, increasingly, there's no such thing as offline.

[–] yesman@lemmy.world 103 points 5 days ago (1 children)

"When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts," the blog post notes. "If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement."

See? Even the people who make AI don't trust it with important decisions. And the "trained" humans don't even see it if the AI doesn't flag it first. This is just a microcosm of why AI is always the weakest link in any workflow.

This is exactly the use-case for an LLM and even OpenAI can't make it work.

[–] Perspectivist@feddit.uk 36 points 5 days ago (13 children)

This is exactly the use-case for an LLM

I don't think it is. An LLM is a language-generating tool, not a language-understanding one.

[–] Blaster_M@lemmy.world 76 points 5 days ago (6 children)

Local models. Can't be surveilled if your AI isn't on the internet.

[–] CapnClenchJaw@lemmy.world 24 points 5 days ago

Exactly.

This is going to be the next Google-searches thing, isn't it. People being ignorant of, or forgetting, that corporations are saving everything they say or do. And then being all shocked when they get exploited for profit or reported to the authorities for doing shady things.

Rinse and repeat.

[–] FauxLiving@lemmy.world 65 points 5 days ago

Pretty much every corporately owned service on the Internet actively spies on you for the police.

An important thing to understand as authoritarians take control of governments and start using this comprehensive spying apparatus to target political opponents.

Learn to use your computer. Use open-source tools and software, invest in your own hardware, and host your own services. It doesn’t require years of learning or study; you can often get by with a video or two.

My Jellyfin server doesn’t call the police. My local language models don’t store everything I’ve ever written. Nobody is scanning my NextCloud server or mining my Signal/Matrix/Jami contacts to determine my social graph.

All of this is running on cheap leftover hardware (with some new hard drives) and I save over $100/mo on the equivalent services. And way more if you consider access to every streaming service with exclusive content.

Windows is spying on you, Meta is spying on you, Google is spying on you, Amazon is spying on you, OpenAI is spying on you.

They get away with this because their products are slightly easier to use, and so people give up every bit of privacy and autonomy for their entire lives just to avoid reading a wiki or learning a technical skill.

I don’t think that that is a good deal.

[–] crystalmerchant@lemmy.world 14 points 4 days ago

Lmao no fucking SHIT.

I eagerly await the day when a majority of people understand that all technology will eventually be used against them by the state.

[–] hotdogcharmer@lemmy.world 22 points 4 days ago (12 children)

Sam Altman belongs in prison. His machine encouraged and guided a child to kill themselves. His machine actively stopped that child seeking outside help. Sam Altman belongs in prison. Sam Altman does not need another $20,000,000,000,000. He needs to go through the legal system and be sentenced and sent to prison because his machine pushed a child to suicide.

[–] sunbytes@lemmy.world 6 points 4 days ago

He's pretty untouchable.

Every government thinks AI is the next gold/oil rush and whoever gets to be the "AI country" will become excruciatingly rich.

That's why they're being given IP exemptions, and why all sorts of legal loopholes are being attempted/set up for them.

[–] spirinolas@lemmy.world 19 points 4 days ago* (last edited 4 days ago) (3 children)

This scares the shit out of me. A hundred years ago we saw the rise of fascism. We saw freedom of expression being suppressed. But we had one thing going for us, the weakness of every dictatorship: snitches are never enough, and they can't be everywhere. You never know when they might be listening, and chances are most times they aren't.

Now we are seeing the birth of a new fascism, where AI can monitor ALL of us, ALL THE TIME. Not just our prompts. Everything. Everybody has experienced talking about something with a friend and, a few minutes later, receiving ads about that thing, which you never searched for before. Now imagine being monitored all the time for any kind of subversive opinion. You won't have a window to fight. The moment you give the smallest hint of dissent, you are efficiently removed from society.

And forget just giving up smartphones. More and more, all our services are tied to them. Very soon you won't be able to function in society without one.

AI won't rule us. AI will be the ultimate tool to help other humans rule us, and fighting back will be almost impossible. I feel this isn't being talked about enough, nor how imminent it is.

[–] korazail@lemmy.myserv.one 8 points 4 days ago* (last edited 4 days ago) (1 children)

It's almost like the privacy alarmists, who have been screaming for decades, were on to something.

Some people saw the beginning of Minority Report and thought, 'that sounds like a good idea.'

We used to be in a world where it was unfeasible to watch everyone, and you could get away with small 'crimes' like complaining about the president online, because it was impossible to get a warrant for surveillance without any evidence. Now we have systems like Flock cameras, ChatGPT and others that generate alerts to law enforcement on their own, sidestepping the need for a warrant in the first place. And more and more frequently, you both can't opt out and are unable to avoid them.

For now, the crime might be driving a car with a license plate flagged as stolen (or one where OCR mistakes a number), but all it takes is a tiny nudge further towards fascism before you can be auto-SWATted because the sentiment of your draft sms is determined to be negative towards the fuhrer.

Even now, I'm sacrificing myself to warn people. This message is on the internet and federated to multiple instances; there's no way I couldn't be identified from it with enough resources. Once it's too late, I'll already be on the list of people to remove.

[–] ArmchairAce1944@discuss.online 6 points 4 days ago (1 children)

I remember watching a video from the early 2000s that had a nightmare privacy scenario of someone trying to order a pizza. The caller said nothing other than what he wanted, yet the people on the other end already knew his address, his job history and his health records, and said that, due to his latest health checkup, he would need to pay extra if he wanted his double-meat pizza; otherwise he would have to go with the health-recommended veggie pizza.

The video was made at a time when smartphones were a rarity and most people ordered food the old-fashioned way: they called the place by phone, told them what they wanted and where they were, and paid in cash when it arrived, since portable credit card readers were very uncommon.

Now we ARE in that situation, except for syncing medical records so that any company can use them. But we are going to get there soon enough, and it will be 'for the children, you pedo! Also, if you have nothing to hide then you have nothing to fear!' Bullshit that for some reason everyone believes and will never question... even if they get caught up in the system themselves for some bullshit reason, they will never, ever connect the dots.

[–] Kyrgizion@lemmy.world 47 points 5 days ago (2 children)

"Hey ChatGPT, how many human corpses can 12 pigs who haven't been fed in a week process?"

[–] db2@lemmy.world 21 points 5 days ago (7 children)

They don't eat teeth. Just saying.

[–] fartographer@lemmy.world 15 points 5 days ago

Hence the phrase: as toothless as a pig

[–] Showroom7561@lemmy.ca 40 points 5 days ago

Funny, now you'll have the cops arresting you for prompts like "how to survive being homeless?", rather than social services when you prompt "how to avoid being homeless?".

And will authorities be called when someone prompts "how to shoot wild animals?" when asking about wildlife photography? 😆

[–] Electricd@lemmybefree.net 14 points 4 days ago (1 children)

I mean, I see this as a consequence of all these articles about people using ChatGPT for harmful things

[–] BreadstickNinja@lemmy.world 7 points 4 days ago (1 children)

Yeah, they're damned if they do, damned if they don't. Both they and Google are getting sued over kids who committed suicide, whose parents should have been monitoring them and getting them mental health treatment. If the courts decide that LLM companies bear legal and financial responsibility for user actions, then of course they're going to do this.

The only privacy is local. And actually, given Microsoft, local and Linux-based.

[–] Electricd@lemmybefree.net 4 points 4 days ago* (last edited 4 days ago)

Coming soon: McDonald's gets sued for selling burgers to a minor who ate 3 burgers every day and died! McDonald's must set thresholds per customer and collect IDs from minors!

I would be for holding companies responsible when they fuck up, like McDonald's clearly marketing burgers to minors and saying they're healthy, but we must not hold them more accountable than they actually are.

The only privacy is local. And actually, given Microsoft, local and Linux-based.

Soon: Hugging Face gets sued because hosted models are being used to get drug recipes, or because they did not actively prevent people from killing themselves through them.

By the way, I was recently testing https://nano-gpt.com/ which claims to have privacy through TEE models... but I don't see how it's private in any way. It just guarantees that the output went through some TEE; it doesn't guarantee that the input and output weren't leaked elsewhere or logged.

[–] Mr_Dr_Oink@lemmy.world 31 points 5 days ago

Of course they are. They are a literal data farm. People need to stop using it.

[–] granolabar@kbin.melroy.org 33 points 5 days ago (9 children)

Well, if idiots share their crime plans with a corpo, what do they expect?

However, police won't do anything.

But even if they do, prosecutors won't.

Just look at that incident in Nevada with the Israeli pedophile.

Cops busted him; the federal prosecutor refused to charge him.

Laws are enforced selectively.

[–] DeathsEmbrace@lemmy.world 28 points 5 days ago* (last edited 5 days ago) (1 children)

Are you seriously comparing a corrupt Israeli politician to an average Joe? Israel can get away with murdering Americans, and they would apologize that they didn't die earlier?

[–] abbiistabbii@lemmy.blahaj.zone 8 points 4 days ago (2 children)

All of it will be justified with that guy who killed himself after talking to ChatGPT.

[–] OsrsNeedsF2P@lemmy.ml 7 points 4 days ago (3 children)

Nah, not for suicide:

But in the post warning users that the company will call the authorities if they seem like they're going to hurt someone, OpenAI also acknowledged that it is "currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions."

[–] synae@lemmy.sdf.org 2 points 3 days ago

Oh, so only for discussing topics the authorities consider verboten

[–] Soup@lemmy.world 2 points 4 days ago

Considering how the US handles those cases, that may actually be a broken-clock good thing. If they sent the cops to a suicidal person’s house, said cops would probably kill them themselves.

[–] spankmonkey@lemmy.world 24 points 5 days ago (4 children)

Lazy authors of crime themed novels are sweating so heavily right now.

[–] WhatGodIsMadeOf@feddit.org 3 points 3 days ago

You're DAF if you think your entire digital life isn't stored and watched at will.

[–] mienshao@lemmy.world 16 points 5 days ago (1 children)

I’m cool with this! OpenAI ought to take those suicide instructions it created for that teen and send them to those pigs for some inspo :)

[–] unexposedhazard@discuss.tchncs.de 8 points 5 days ago* (last edited 5 days ago) (1 children)

Because US cops will totally do something about it lmao

[–] 11111one11111@lemmy.world 14 points 5 days ago (2 children)

Woooooosh

I'm pretty sure the OP you replied to is hoping for the five-oh to start taking GPT's advice to commit suicide... meaning they want dead cops, not for cops to intervene. Not the classiest comment you replied to, which is why I think it woooooshed right over your head.

[–] nutsack@lemmy.dbzer0.com 5 points 4 days ago

Do you mean to tell me that a service provider is cooperating with the authorities? Holy garbage crab.

[–] UnderpantsWeevil@lemmy.world 10 points 5 days ago

I gotta say... imagine being the police department on the receiving end of that firehose.

[–] veni_vedi_veni@lemmy.world 9 points 5 days ago (2 children)

Yo, I was just joking about making a gallon of PCP

[–] FireWire400@lemmy.world 9 points 5 days ago

Self-harm doesn't count apparently

[–] Thedogdrinkscoffee@lemmy.ca 7 points 5 days ago* (last edited 5 days ago)

This sounds like PR deflection from egging that kid on to suicide. But I still don't doubt it.
