this post was submitted on 03 Jul 2025

Technology

The Basque Country is implementing Quantus Skin in its health clinics after an investment of 1.6 million euros. Specialists criticise the artificial intelligence developed by the Asisa subsidiary due to its "poor” and “dangerous" results. The algorithm has been trained only with data from white patients.

[–] BassTurd@lemmy.world 9 points 19 hours ago* (last edited 19 hours ago) (3 children)

My only real counter to this is: who created the dataset, and did the people creating the app have any power to affect that? To me, saying something is racist implies intent. This situation could be that, but it could also be a case where the data just isn't racially diverse, which doesn't necessarily imply racism.

There's a plethora of reasons the dataset may be mostly fair-skinned. To rattle off a couple that come to mind (all of this may be known, idk; these are ignorant possibilities on my side): perhaps fair-skinned people are more susceptible, so there's more data; like you mentioned, dark-skinned individuals may have fewer options to get medical help; or maybe the dataset came from a region without many dark-skinned patients. Again, all ignorant speculation on my part, but I would say that none of those options inherently makes the model racist, just not a good model. Maybe racist actions led to a bad dataset, but if that's out of the devs' control, then I wouldn't personally put that negative on the model.

Also, my interpretation of what racist means may differ, so there's that too. Or it could all have been done intentionally, in which case, yeah, racist 100%.

Edit: I actually read the article. It sounds like they used public datasets that were mostly Caucasian. They also acknowledged that fair-skinned people are significantly more likely to get melanoma, which does give some credence to the unbalanced dataset. It's still not ideal, and I would also say that maybe nobody should put all of their eggs in an AI screening tool, especially for something like cancer.
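(As an aside: the kind of imbalance being discussed is easy to surface before training. Here's a minimal, purely illustrative sketch; the `records` data and the `fitzpatrick_type` field are hypothetical stand-ins for a dermatology dataset's metadata, not anything from the article.)

```python
from collections import Counter

# Hypothetical metadata for a skin-lesion dataset. Real datasets often
# tag images with a Fitzpatrick skin-type estimate; the values here are
# made up to show the skew the thread is talking about.
records = [
    {"fitzpatrick_type": "I-II"},   # lighter skin tones
    {"fitzpatrick_type": "I-II"},
    {"fitzpatrick_type": "I-II"},
    {"fitzpatrick_type": "V-VI"},   # darker skin tones
]

# Count how many samples fall into each skin-tone group.
counts = Counter(r["fitzpatrick_type"] for r in records)
total = sum(counts.values())

# Report the share of each group; a heavily lopsided split is a red flag
# that the model may underperform on the underrepresented group.
for tone, n in sorted(counts.items()):
    print(f"{tone}: {n} ({n / total:.0%})")
```

An audit like this doesn't fix anything by itself, but it makes the "unbalanced dataset" claim a number you can point at instead of a vibe.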

[–] xorollo@leminal.space 10 points 16 hours ago

There is a more specific word for it: Institutional racism.

Institutional racism, also known as systemic racism, is a form of institutional discrimination based on race or ethnic group and can include policies and practices that exist throughout a whole society or organization that result in and support a continued unfair advantage to some people and unfair or harmful treatment of others. It manifests as discrimination in areas such as criminal justice, employment, housing, healthcare, education and political representation.[1]

[–] WanderingThoughts@europe.pub 5 points 18 hours ago* (last edited 18 hours ago) (1 children)

My only real counter to this is who created the dataset and did the people that were creating the app have any power to affect that?

A lot of early AI research was done by largely Caucasian students, so the datasets they used skewed that way, and later projects very often started from those initial datasets. The historical reason there are more students of that skin tone is that they generally have the most money to finance schooling. That in turn is because past racism held African-American families back from accumulating wealth and accessing education, which still affects their finances and chances today, even assuming there is no racism in scholarships and admissions these days.

Not saying this is specifically happening for this project, just a lot of AI projects in general. It causes issues with facial recognition in lots of apps for example.

[–] BassTurd@lemmy.world 1 points 18 hours ago

They did touch on the facial recognition aspect as well. My main thing is: does that make the model racist if the source data isn't diverse? I'd argue that it doesn't, although racist decisions may have led to a poor dataset.

[–] AbidanYre@lemmy.world 2 points 18 hours ago (2 children)

Seems more like a byproduct of racism than racist in and of itself.

[–] SreudianFlip@sh.itjust.works 7 points 17 hours ago

Yes, we call that "structural racism".

[–] BassTurd@lemmy.world 4 points 18 hours ago

I think that's a very likely possibility, but as with most things, there are other factors that could have affected the dataset as well.