this post was submitted on 17 Jul 2024
355 points (99.2% liked)

Open Source

[–] Suoko@feddit.it 3 points 1 year ago (5 children)
[–] sramder@lemmy.world 3 points 1 year ago (4 children)

It was struggling harder than I was ;-)

[–] Chewy7324@discuss.tchncs.de 8 points 1 year ago* (last edited 1 year ago) (3 children)

I've noticed that these language models don't work well on articles with dense information and complex sentence structure. Sometimes they miss the most important point.

They're useful as a TL;DR but shouldn't be taken as fact, at least not yet, and probably not for the foreseeable future.

A bit off topic, but I read a comment in another community where someone asked ChatGPT something and confidently posted the answer. The problem: the answer was wrong. That's why it's so important to mark ~~AI~~ LLM-generated text (which the TL;DR bots do).

[–] quiteStraightEdge@lemmy.ml 4 points 1 year ago

Not calling ML models and LLMs "AI" would also help. (Going even more off topic.)
