terrific

joined 2 months ago
[–] terrific@lemmy.ml 1 points 3 weeks ago

I'm not sure I can give a satisfying answer. There are a lot of moving parts, and a big issue is definitions, which you also touch upon with your reference to Searle.

I agree with the sentiment that there must be some objective measure of reasoning ability. To me, reasoning is more than following logical rules. It's also about interpreting the intent of the task. The reasoning models are very sensitive to initial conditions and tend to drift when the question is not super precise or if they don't have sufficient context.

AI models are, in a sense, very fragile to their input. Organic intelligence, on the other hand, is resilient and also heuristic. I don't have a specific test in mind, but it should test the ability to solve a very ill-posed problem.

[–] terrific@lemmy.ml 1 points 3 weeks ago

I'm not saying that we can't ever build a machine that can think. You can do some remarkable things with math. I personally don't think our brains have baked in gradient descent, and I don't think neural networks are a lot like brains at all.

The stochastic parrot is a useful vehicle for criticism, and I think there is some truth to it. But I also think LLMs display some super impressive emergent features. Still, they are really far from AGI.

[–] terrific@lemmy.ml 3 points 3 weeks ago (2 children)

I definitely think that's remarkable. But I don't think scoring high on an external measure like a test is enough to prove the ability to reason. For reasoning, the process matters, IMO.

Reasoning models work by Chain-of-Thought, which has been shown to give false reassurances about their actual process: https://arxiv.org/abs/2305.04388

Maybe passing some math test is enough evidence for you, but I think what's inside the box matters. To me it only proves that tests are a poor measure of the ability to reason.

[–] terrific@lemmy.ml 6 points 3 weeks ago

This is a very good point since tritium is a very limited resource.

The hope is that it will be generated by the fusion reactor itself using tritium breeder blankets https://www.iter.org/machine/supporting-systems/tritium-breeding

Whether that will work remains to be seen.

[–] terrific@lemmy.ml 8 points 3 weeks ago (7 children)

Do you have any expertise on the issue?

I hold a PhD in probabilistic machine learning and advise businesses on how to use AI effectively for a living so yes.

IMHO, there is simply nothing indicating that it's close. Sure, LLMs can do some incredibly clever-sounding word extrapolation, but the current "reasoning models" still don't actually reason. They are just LLMs with some extra steps.

There is lots of information out there on the topic so I'm not going to write a long justification here. Gary Marcus has some good points if you want to learn more about what the skeptics say.

[–] terrific@lemmy.ml 48 points 3 weeks ago (23 children)

We're not even remotely close. The promise of AGI is part of the AI hype machine and taking it seriously is playing into their hands.

Irrelevant at best, harmful at worst 🤷

[–] terrific@lemmy.ml 6 points 3 weeks ago (1 children)
[–] terrific@lemmy.ml 7 points 3 weeks ago

Wow. It sounds like something out of PKD's VALIS. Which, if my memory serves me right, is thought to have been an outlet for the author's paranoid delusions.

ChatGPT is probably one of the most dangerous catalysts to psychosis, and almost completely unchecked at this point.

[–] terrific@lemmy.ml 5 points 3 weeks ago

If you run the model on your own device instead of in the cloud, you at least don't have to share your data. It also shifts the electricity bill onto the consumers. So that's at least one possible future. That's not to say there won't be scummy tech companies that still "steal" your data.

There are a lot of empty promises from the AI industry right now. I don't necessarily think capitalism solves everything automatically, but I think there has to come a point where investors want to see results and the focus shifts to what actually works.

I also think it's insane how much waste the hype creates. Mind-bogglingly stupid. Fortunately, I think it's a phase. That's why I still think it makes good sense to look around for alternatives. For example, I've moved my whole "tech suite" to Proton to get away from Google.

[–] terrific@lemmy.ml 13 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

Gen-AI is not yet profitable, and consumers are typically not willing to pay for it https://www.forbes.com/sites/antonyleather/2024/07/19/the-biggest-problem-with-ai-getting-consumers-to-buy-it/

The environmental mess you're talking about actually also costs the search engines on their electricity bill. I don't think it will be long before we see it replaced with cheaper models, e.g. distilled models, or disappear entirely from places where it's fundamentally superfluous.

I think we're at the peak of the hype, and the whole story will end up looking like the dotcom bubble. But we'll see 🤷

Edit: IMO not a silly topic at all. It's super important!

[–] terrific@lemmy.ml 64 points 3 weeks ago (1 children)

Stonetoss is a nazi.

[–] terrific@lemmy.ml 3 points 1 month ago

Neural networks are about as much a model of a brain as a stick man is a model of human anatomy.

I don't think anybody knows how we actually, really learn. I'm not a neuroscientist (I'm a computer scientist specialised in AI), but I don't think the mechanism of learning is that well understood.

AI hype people will say that it's "like a neural network", but I really doubt that. There is no loss function in reality and certainly no way for the brain to perform gradient descent.
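To make that point concrete, here's a minimal toy sketch (plain Python, hypothetical example) of what gradient descent actually is: an explicit update rule that needs a well-defined loss and its derivative at every step, which is exactly the machinery brains have no known analogue of.

```python
# Toy gradient descent: minimise the loss f(w) = (w - 3)^2
# by repeatedly stepping against its derivative f'(w) = 2 * (w - 3).

def gradient_descent(w=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        grad = 2 * (w - 3)  # derivative of the loss at the current w
        w -= lr * grad      # step downhill, scaled by the learning rate
    return w

print(gradient_descent())  # converges towards the minimum at w = 3
```

Training a neural network is this same loop, just over millions of weights, and it only works because the loss and its gradients are explicitly computable.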
