this post was submitted on 27 Jun 2025
21 points (86.2% liked)
Technology
you are viewing a single comment's thread
I mean, theoretically things could be anonymized for the AI, with identifiers present only on the charts, assuming the AI itself stays locked inside the EHR system and isn't, say, outsourced to one of the big AI firms. Under those conditions it's roughly the same privacy risk as the existence of EHRs in general.
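A minimal sketch of what that anonymization layer could look like: strip direct identifiers from a record before it leaves the EHR for the model, keeping the identified chart only inside the EHR. The field names here are hypothetical for illustration, not from any real EHR schema or de-identification standard.

```python
# Hypothetical sketch: remove direct identifiers from a patient record
# before handing it to an AI model. Field names are invented examples,
# not a real EHR schema.

IDENTIFIER_FIELDS = {"name", "dob", "ssn", "mrn", "address", "phone"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifier fields removed."""
    return {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}

# Example chart: identifiers stay in the EHR, only the rest goes to the AI.
chart = {
    "mrn": "12345",
    "name": "Jane Doe",
    "symptoms": ["cough", "fever"],
    "labs": {"wbc": 11.2},
}

safe = deidentify(chart)
print(safe)  # identifiers ("mrn", "name") are gone; clinical data remains
```

Real de-identification is much harder than dropping a fixed field list (free-text notes leak identifiers, for one), but the point stands: the linkage between identity and data can live only inside the EHR.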
As far as the practical/legal/ethical side, that comes down to how they market it to doctors. Personally I think it could be a useful tool for a doctor to "second opinion" himself: reach his own conclusion first, then hit the AI and see if it noticed something he missed. The obvious fear, though, is lazy or rushed doctors, working in a hospital that's pushing them to see as many patients as possible in an hour, rewarding the ones who walk in, hand the patient their AI diagnosis, and move on to the next room. Which... well, in modern America we all know that's what's going to be pushed.
The tools have amazing potential when used appropriately... but for-profit healthcare has all the wrong incentives, and they will see this as a tool to magnify them.
Think of all the data breaches of big insurance and clearinghouses. That's definitely going to be an AI nightmare.
Well yeah, exactly why I said "the same risk". Ideally it's going to live in the same systems, and assuming no one is stupid enough (or the laws don't let them) to attach it to the publicly accessible versions of existing AIs, it's not a new additional risk, just the same one. (Though those assumptions are largely their own risks.)
I wasn't contradicting, just adding.