HedyL

joined 2 years ago
[–] HedyL@awful.systems 5 points 2 days ago

What purpose is this tool even supposed to serve? The most obvious use case that comes to mind is employee monitoring.

[–] HedyL@awful.systems 5 points 5 days ago (1 children)

It's also very difficult to get search results in English when this isn't set as your first language in Google, even if your entire search term is in English. Even "Advanced Search" doesn't seem to work reliably here, and of course, it always brings up the AI overview first, even if you clicked advanced search from the "Web" tab.

[–] HedyL@awful.systems 7 points 6 days ago (4 children)

I guess the question here really boils down to: Can (less-than-perfect) capitalism solve this problem somehow (by allowing better solutions to prevail), or is it bound to fail due to the now-insurmountable market power of existing players?

[–] HedyL@awful.systems 16 points 6 days ago

Somehow makes me think of the times before modern food safety regulations, when adulterations with substances such as formaldehyde or arsenic were common, apparently: https://pmc.ncbi.nlm.nih.gov/articles/PMC7323515/ We may be in a similar age regarding information now. Of course, this has always been a problem with the internet, but I would argue that AI (and the way oligopolistic companies are shoving it into everything) is making it infinitely worse.

[–] HedyL@awful.systems 8 points 1 week ago (1 children)

Or like the radium craze of the early 20th century (even if radium may have far more legitimate use cases than current-day LLMs).

[–] HedyL@awful.systems 44 points 1 week ago

New reality at work: Pretending to use AI while having to clean up after all the people who actually do.

[–] HedyL@awful.systems 10 points 1 week ago (1 children)

If I'm not mistaken, even in pre-LLM days, Google had some kind of automated summaries which were sometimes wrong. Those bothered me less. The AI hallucinations appear to be on a whole new level of wrong (or is this just my personal belief - are there any statistics about this?).

[–] HedyL@awful.systems 22 points 1 week ago (6 children)

Most searchers don’t click on anything else if there’s an AI overview — only 8% click on any other search result. It’s 15% if there isn’t an AI summary.

I can't get over that. An oligopolistic company imposes a source on its users that is very likely either hallucinating or plagiarizing or both, and most people seem to eat it up (out of convenience or naiveté, I assume).

[–] HedyL@awful.systems 6 points 1 week ago

Maybe we humans possess a somewhat hardwired tendency to "bond" with a counterpart that acts like this. In the past, this was not a huge problem because only other humans were capable of interacting in this way, but this is now changing. However, I suppose this needs to be researched more systematically (beyond what is already known about the ELIZA effect etc.).

[–] HedyL@awful.systems 8 points 1 week ago (2 children)

Turns out that being a proficient liar might be the key to success in this attention economy (see also: chatbots).

[–] HedyL@awful.systems 11 points 1 week ago

Of course, there are also the usual comments saying artists shouldn't complain about getting replaced by AI etc. Reminds me why I am not on Twitter anymore.

It also strikes me that in this case, the artist didn't even expect to get paid. Apparently, the AI bros even crave the unpaid "exposure" real artists get, without wanting to put in any of the work and while (in most cases) generating results that are no better than spam.

It is a sickening display of narcissism IMHO.

[–] HedyL@awful.systems 6 points 2 weeks ago

With LLMs not only do we see massive increases in overhead costs due to the training process necessary to build a usable model, each request that gets sent has a higher cost. This changes the scaling logic in ways that don’t appear to be getting priced in or planned for in discussions of the glorious AI technocapital future

This is a very important point, I believe. I find it particularly ironic that the "traditional" Internet was fairly efficient precisely because many people were shown more or less the same content, which also made it easier to carry out a certain degree of quality assurance. With chatbots, all of this is being thrown overboard and extreme inefficiencies are being created, and apparently, the AI hypemongers are largely ignoring that.
