swlabr

joined 2 years ago
[–] swlabr@awful.systems 9 points 1 week ago* (last edited 1 week ago)
Rupi Kaur

should sue
[–] swlabr@awful.systems 13 points 1 week ago (1 children)

I don’t know that we can offer you a good world, or even one that will be around for all that much longer. But I hope we can offer you a good childhood. […]

When “The world is gonna end soon so let’s just rawdog from now on” gets real

[–] swlabr@awful.systems 10 points 1 week ago (1 children)

How much of this is the AI bubble collapsing vs. Ohiophobia?

[–] swlabr@awful.systems 5 points 1 week ago

JFC I click on the rocket alignment link, it's a yud dialogue between "alfonso" and "beth". I am not dexy'ed up enough to read this shit.

[–] swlabr@awful.systems 6 points 1 week ago

Spooks as a service

[–] swlabr@awful.systems 16 points 1 week ago (11 children)

Utterly rancid linkedin post:

text inside image: Why can planes "fly" but AI cannot "think"?

An airplane does not flap its wings. And an autopilot is not the same as a pilot. Still, everybody is ok with saying that a plane "flies" and an autopilot "pilots" a plane.

This is the difference between the same system and a system that performs the same function.

When it comes to flight, we focus on function, not mechanism. A plane achieves the same outcome as birds (staying airborne) through entirely different means, yet we comfortably use the word "fly" for both.

With Generative AI, something strange happens. We insist that only biological brains can "think" or "understand" language. In contrast to planes, we focus on the system, not the function. When AI strings together words (which it does, among other things), we try to create new terms to avoid admitting similarity of function.

When we use a verb to describe an AI function that resembles human cognition, we are immediately accused of "anthropomorphizing." In some way, popular opinion dictates that no system other than the human brain can think.

I wonder: why?

[–] swlabr@awful.systems 15 points 2 weeks ago
[–] swlabr@awful.systems 9 points 2 weeks ago

It's an anti-fun version of listening to Dark Side of the Moon while watching The Wizard of Oz.

[–] swlabr@awful.systems 26 points 2 weeks ago* (last edited 2 weeks ago) (11 children)

You didn't link to the study; you linked to the PR release for the study. This and this are the papers linked in the blog post.

Note that the papers haven't been published anywhere other than on Anthropic's online journal. Also, what the papers are doing is essentially tea leaf reading. They take a look at the swill of tokens, point at some clusters, and say, "there's a dog!" or "that's a bird!" or "bitcoin is going up this year!". It's all rubbish, dawg.

[–] swlabr@awful.systems 7 points 2 weeks ago (1 children)

This needed a TW jfc (jk, uh, sorta)

[–] swlabr@awful.systems 34 points 2 weeks ago* (last edited 2 weeks ago) (7 children)

“Notably, O3-MINI, despite being one of the best reasoning models, frequently skipped essential proof steps by labeling them as “trivial”, even when their validity was crucial.”

LLMs achieve reasoning level of average rationalist
