swlabr
I don’t know that we can offer you a good world, or even one that will be around for all that much longer. But I hope we can offer you a good childhood. […]
When “The world is gonna end soon so let’s just rawdog from now on” gets real
How much of this is the AI bubble collapsing vs. Ohiophobia
JFC I click on the rocket alignment link, it's a yud dialogue between "alfonso" and "beth". I am not dexy'ed up enough to read this shit.
Spooks as a service
Utterly rancid linkedin post:
text inside image:
Why can planes "fly" but AI cannot "think"?
An airplane does not flap its wings. And an autopilot is not the same as a pilot. Still, everybody is ok with saying that a plane "flies" and an autopilot "pilots" a plane.
This is the difference between the same system and a system that performs the same function.
When it comes to flight, we focus on function, not mechanism. A plane achieves the same outcome as birds (staying airborne) through entirely different means, yet we comfortably use the word "fly" for both.
With Generative AI, something strange happens. We insist that only biological brains can "think" or "understand" language. In contrast to planes, we focus on the system, not the function. When AI strings together words (which it does, among other things), we try to create new terms to avoid admitting similarity of function.
When we use a verb to describe an AI function that resembles human cognition, we are immediately accused of "anthropomorphizing." In some way, popular opinion dictates that no system other than the human brain can think.
I wonder: why?
It's an anti-fun version of listening to Dark Side of the Moon while watching The Wizard of Oz.
You didn't link to the study; you linked to the press release for the study. This and this are the papers linked in the blog post.
Note that the papers haven't been published anywhere other than on Anthropic's own online journal. Also, what the papers are doing is essentially tea leaf reading. They take a look at the swill of tokens, point at some clusters, and say, "there's a dog!" or "that's a bird!" or "bitcoin is going up this year!". It's all rubbish, dawg.
This needed a TW jfc (jk, uh, sorta)
“Notably, O3-MINI, despite being one of the best reasoning models, frequently skipped essential proof steps by labeling them as “trivial”, even when their validity was crucial.”
LLMs achieve reasoning level of average rationalist