Test scores across OECD countries peaked around 2012 and have declined since. IQ scores in many developed countries appear to be falling after rising throughout the twentieth century.

Nataliya Kosmyna at MIT's Media Lab began noticing changes around two years ago when strangers started emailing her to ask if using ChatGPT could alter their brains. She posted a study in June tracking brain activity in 54 students writing essays. Those using ChatGPT showed significantly less activity in networks tied to cognitive processing and attention compared to students who wrote without digital help or used only internet search engines. Almost none could recall what they had written immediately after submitting their work. She received more than 4,000 emails afterward. Many came from teachers who reported students producing passable assignments without understanding the material.

A British survey found that 92% of university students now use AI and roughly 20% have used it to write all or part of an assignment. Independent research has found that more screen time in schools correlates with worse results. Technology companies have designed products to be frictionless, removing the cognitive challenges brains need to learn. AI now allows users to outsource thinking itself.

[–] batmaniam@lemmy.world 3 points 1 week ago* (last edited 1 week ago)

This is a great conversation, because I'm one of those people who's terrible at arithmetic but quite good at math. As in: I can look at a function, visualize it in 3D space, and see which terms dominate the different maxima, minima, and surfaces, but don't ask me to tally a meal check. I'd be useless at applying any math without a calculator.
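To make that "which term dominates where" reading concrete, here's a toy function of my own choosing (not one from the article or the thread), sketched in LaTeX:

```latex
% Toy surface: which term dominates in which region
\[
  f(x, y) = x^{4} + y^{2} - 4xy
\]
% Near the origin the cross term -4xy dominates: the Hessian at (0,0) is
% [[0, -4], [-4, 2]] with determinant -16 < 0, so the origin is a saddle.
% Far from the origin the x^4 term takes over and the surface climbs steeply,
% leaving two minima at (x, y) = (+sqrt(2), +2 sqrt(2)) and (-sqrt(2), -2 sqrt(2)),
% where f = -4.
```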

Similarly, there are a lot of engineers out there who use CAD extensively and who probably wouldn't be engineers if they had to do their drafting by hand.

The Oatmeal did a comic that distilled this for me, about why they don't like AI "art". They made the point that in making a drawing, there are a million little choices reconciling what's in your head with what you can do on the page. Whether those choices come from the medium, from what you're good at drawing, or from whatever else, they're what give the work "soul". Same thing for writing. Those choices are where learning, development, and style happen, and they're exactly what generative AI takes away.

That helped crystallize for me the difference between a tool and autocomplete on steroids.

Edit, to add: your statement "I claim to understand but don't" hits it on the head, and it's similar to why you have to be careful about plagiarism when citing academic review papers. If you write YOUR paper in a way that agrees with the review, but you discuss the paper the review was referencing and, even accidentally, skip over the fact that the conclusion you're putting forward comes from the review rather than the paper you're both citing, that's plagiarism. The notion being that you misrepresented their thoughts as your own. That is basically ALL generative AI.