Nobody thought it would do very well. This was a software dev's little diversion.
We should praise attempts to make the public aware of the limitations of LLMs, not laugh at the guy who did this.
LLMs suck at maths, suck at chess, suck at remembering stuff and being consistent... They suck at everything a computer is usually good at.
Yes, LLMs are designed to emulate how a human would respond to a prompt by digesting a huge amount of human-generated content. They can do that fairly well except when they can't.
It's a very specialized program intended to get a computer to do something that computers are generally very, very bad at - write sensible language about a wide variety of topics. Trying to then get that one specialized program to turn around and do things that computers are good at, and expect to do it well, is very silly.
You could train an AI just to play chess. Sites like chess.com have tens, hundreds of millions of games to use as training data. But the AI isn't "thinking"; it's just being asked, given this input, what the most likely outputs are, and picking one based on its settings. Then the other player moves, the context updates, rinse, repeat. Such an AI would likely whoop most people's asses, but experienced players might figure out how to lead it down a path where it doesn't have sufficient training data to play strongly.
But it's not a generalized LLM like ChatGPT, which is picking up a handful of chess games from god knows where, without knowing or enforcing the rules or anything else.
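The "given this input, pick the most likely output" loop described above can be sketched in plain Python. A toy frequency table stands in for a trained model here; the games and the `predict` helper are purely illustrative, not anything chess.com or a real engine actually does:

```python
import random
from collections import Counter, defaultdict

# Toy sketch of "given this input, what's the most likely output": a
# frequency table over (hypothetical) training games stands in for a
# trained model. The "position" is just the move sequence so far.
training_games = [
    ["e4", "e5", "Nf3", "Nc6"],
    ["e4", "e5", "Nf3", "Nf6"],
    ["e4", "c5", "Nf3", "d6"],
]

next_move_counts = defaultdict(Counter)
for game in training_games:
    for i in range(len(game)):
        next_move_counts[tuple(game[:i])][game[i]] += 1

def predict(moves_so_far, temperature=0.0):
    """Most common continuation; None when off-book (no training data)."""
    counts = next_move_counts.get(tuple(moves_so_far))
    if not counts:
        return None  # exactly the hole experienced players would steer into
    if temperature == 0.0:
        return counts.most_common(1)[0][0]
    # the "settings" part: sample proportionally instead of taking the max
    moves, weights = zip(*counts.items())
    return random.choices(moves, weights=weights)[0]

print(predict(["e4"]))  # e5 (seen twice, vs c5 once)
print(predict(["d4"]))  # None: position never seen in training
```

The opponent moves, you append to `moves_so_far`, rinse, repeat; note the model never checks legality, it only echoes its data.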
Likewise I bet we'll see AIs for poker and other lucrative online games. I bet a lot of online casinos have amassed huge stores of data to produce AIs, as have players using scraping or logs. I could even see online casinos running AIs in games, because it's a way of taking money from players beyond the normal rake.
Casinos are such a scam. The few skill-based games where you can actually manage to get an edge, they just stop you from playing if you're doing well. We'll never see much use of poker/blackjack bots because if they catch on they'll just limit your winnings.
Asking ChatGPT to play chess is like asking someone who's never played chess to play well, and then documenting how poorly they played. Like no shit the hammer did a bad job as a saw. You wanted it cut; you should have used the right tool for the job.
ChatGPT isn't Deep Blue. It's not made for that. You're asking a word processor to calculate pi.
That's a very good move! To counter, you should follow these three principles:
- Prepare a response move that will prevent a future good move.
- Defend your own pieces and try to attack theirs.
- Don't be too eager to sacrifice pieces in order to make short term gains.
- Be prepared to sacrifice an unimportant piece to make a good gain.
If you want to make a good move, try Rook H8 -> G7.
Alt text:
Me: Knight to C3
ChatGPT: You've played Knight to C3 — a classic developing move! That suggests we’re likely in the early phase of the game, possibly playing something like:
or perhaps you're playing the English or Reti and going for flexibility.
Let me know the full board state or moves so far if you'd like to continue the game or get commentary!
Queen to queen's bishop three, Captain.
Chess engines don't have real difficulties. Every level of the chess engine is designed to make more blunders as the Elo gets lower.
In other words, it is programmed to make bad moves at regular intervals. What that means is that even on beginner modes, when the engine isn't blundering it is playing perfect chess. This is why it isn't good to play against chess bots: at best you will learn some pattern recognition, but chess puzzles are better for that.
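The blunders-at-intervals mechanism described above can be sketched as follows. This assumes, hypothetically, that the engine always knows the best move and that lower levels just substitute a scripted bad move at some rate; `pick_move` and the 1-10 level scale are made up for illustration:

```python
import random

def pick_move(best_move, bad_moves, level, rng=random.random):
    """level 1..10: lower levels inject deliberate blunders more often.

    Between blunders the engine still returns best_move, i.e. it plays
    "perfect" chess, which is the point being made above.
    """
    blunder_rate = (10 - level) / 10  # level 10 never blunders
    if bad_moves and rng() < blunder_rate:
        return random.choice(bad_moves)  # scripted blunder
    return best_move

pick_move("Nf3", ["a3", "h3"], level=10)  # always "Nf3"
```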
In CoD MW2 (or maybe Black Ops) the multiplayer AI bots were like this. Obviously all bots are, but the kill cams were illuminating. They didn't even try to make it look human. They'd even use a light machine gun. They'd walk around; once they saw you they'd turn towards you. The only thing the difficulty changed was how fast they turned. Then they'd shoot a single shot at your head. For something like a sniper rifle it looked mostly believable, but that's not how people use machine guns lol. The single shot with the most inaccurate weapon is just dirty lmao.
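The turn-speed-only difficulty described above can be sketched like this (every name and number here is invented for illustration; this is not actual game code):

```python
# Difficulty changes nothing about aim: the bot always lands the single
# headshot. It only changes how fast the bot turns toward the player.
TURN_SPEED_DEG_PER_SEC = {"easy": 30.0, "normal": 90.0, "hard": 180.0}

def frames_to_aim(angle_to_player_deg, difficulty, fps=60):
    """Frames spent turning before the bot fires its one perfect shot."""
    seconds = angle_to_player_deg / TURN_SPEED_DEG_PER_SEC[difficulty]
    return round(seconds * fps)

print(frames_to_aim(90, "easy"))  # 180 frames: three full seconds to react
print(frames_to_aim(90, "hard"))  # 30 frames: half a second
```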
I had a dedicated electronic chess game - a board with LEDs on it showing where the game wanted to move. You had to move physical pieces around and press membrane switches under the squares to tell it where you moved. I don't remember if it was described as "AI" back then or not. I thought of it as a chess expert system on a chip. As a total novice player I could rarely beat it on its lowest skill level. Was never interested enough in chess to get the game for my 2600. But I still have both of those things in a box.
Using an LLM to play chess is like using autocorrect to write a novel.
And that's the big problem with AI right now. People don't understand what it is, they just want the label slapped on to as many things as possible.
AI is the new IoT, it will be integrated into everything, less than useless for 99.9% of consumers, and yet, still wildly successful.
Given how much it costs, it will need to be ten times more successful than web search to even hope to break even. It's the biggest dot-com bubble yet.
It's because the venture capitalists who are sinking BILLION$ into these things are calling it AI even though it's not and literally never will be. And unfortunately, too many people are too stupid to understand that these aren't AI but Generative Adversarial Networks, or GANs for short. Which doesn't sound as sexy and "take my money please"-ish as Artificial Intelligence or ✨AI✨ does.
These will never be HAL 9000 or Jarvis or even Roko's Basilisk. The stuff needed for that kind of "intelligence" doesn't exist in these things. And the sooner people come to realize that this is all just digital snake oil the sooner we can collectively get on with our lives.
Too many people are failing to understand that fqughds are actually woplels, and until they understand that, they are just going to keep wasting their money on 💫woplels💫.
Even though woplels have proven to be useful for some things, they're not as good as some people want them to be, so they're useless.
The brain dead morons who defend it and accuse me of just being a hater for understanding any part of it are the worst.
I literally no longer believe personhood is a thing because of how stupid and oblivious they're capable of being.
> Using an LLM to play chess is like using autocorrect to write a novel.
They could probably have done better by training a crow to play chess.
It also demonstrates how much AI companies mislead the public on what their products can do. If a guy is selling lawnmowers that actually just generate grass clippings without mowing the lawn, you’re not an idiot for thinking it was going to mow grass.
But then someone explains it to you, shows you the unmowed grass, and you still insist it's great for mowing lawns.
And also you're in the desert where you shouldn't even have a fucking lawn, and you plant more lawns because they're so easy to mow now
What do you call that? Because it's a bit past 'idiot'.
Furthermore, companies mislead journalists, investors, philosophers, influencers, etc., most of whom don't have a technical background but do have a lot of reach. They then carry their misunderstanding into the general public.
All these public "academic" panel debates at conferences about AGI being the next nuclear weapon and the singularity. They lead to highbrow publications, opinion pieces, books and blog articles, which then lead to tweets, memes and pop-cultural references.
- Hundreds of billions of dollars spent
- No profitable product
- No consistently usable product other than beginner code tasks
- Massive environmental harms
- Tens of thousands of (useful!) careers terminated
- Destroyed Internet search, arguably the one necessary service on the Internet
- No chance it's going to get better
Atari 2600 beating it at chess is a perfect metaphor. People who want to complain about it can bite its plastic woodgrain printed ass.
> Massive environmental harms
I find this questionable; people forget that a locally-hosted LLM is no more taxing than a video game.
> No chance it's going to get better
Why do you believe this? It has continued to get dramatically better over the past 5 years. Look at where GPT2 was in 2019.
> No consistently usable product other than beginner code tasks
It is not consistently usable for coding. If you are hoping this slop-producing machine is consistently useful for anything then you are sorely mistaken. These things are most suitable for applications where unreliability is acceptable.
> No profitable product [...] Tens of thousands of (useful!) careers terminated
Do you not see the obvious contradiction here? If you are sure that this is not going to get better and it's not profitable, then you have nothing to worry about in the long-term about careers being replaced by AIs.
> Destroyed Internet search, arguably the one necessary service on the Internet
Google did this intentionally as part of enshittification.
> Massive environmental harms
I find this questionable; people forget that a locally-hosted LLM is no more taxing than a video game.
> No chance it's going to get better
Why do you believe this? It has continued to get dramatically better over the past 5 years. Look at where GPT2 was in 2019.
Fair enough. It's not going to get better because the fundamental problem is AI as represented by, say, ChatGPT doesn't know anything. It has no understanding of anything it's "saying". Therefore, any results derived from ChatGPT or equivalent, will need to be double-checked in any serious endeavor. So, yes it can poop out a legal brief in two seconds but it still has to be revised, refined, and inevitably fixed when it hallucinates precedent citations and just about anything else. That, the core of it, will never get better. It might get faster. It might "sound" "more human". But it won't get better.
> No profitable product [...] Tens of thousands of (useful!) careers terminated
Do you not see the obvious contradiction here? If you are sure that this is not going to get better and it's not profitable, then you have nothing to worry about in the long-term about careers being replaced by AIs.
Well tell that to the half a million people laid off in the last couple of years. Damage is done. Also, the bubble is still growing, and if you haven't noticed what AI has done to the HR industry, let me summarize it thusly: it has destroyed it.
> Destroyed Internet search, arguably the one necessary service on the Internet
Google did this intentionally as part of enshittification.
Well, yes. Every company which has chosen to promote and focus on AI has done so intentionally. That doesn't mean it's good. If AI wasn't the all-hype vaporware it is, this wouldn't have been an option. If OpenAI had been honest about it and said "it's very interesting and we're still working on it" instead of "it's absolutely going to change the world in six months" this wouldn't be the unusable shitpile it is.
I mean, you literally have whole videos on YouTube by GothamChess showing how LLMs play chess. They literally spawn pieces out of thin air, play moves that are illegal, etc.
I'm quite sure the guy understood pretty well what LLMs can do. He just wanted to deflate all the bullshit promises by techbros.
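The harness problem behind those videos: an LLM emits a plausible-looking move string with no guarantee it's legal, so anything letting it "play" must validate every move. A real harness would use a full chess library such as python-chess; the minimal sketch below checks only queen-move geometry, ignores blocking pieces, and the move list is a hypothetical LLM transcript:

```python
def queen_move_geometry_ok(src, dst):
    """Squares like 'd1': a queen moves along a rank, file, or diagonal."""
    df = abs(ord(src[0]) - ord(dst[0]))
    dr = abs(int(src[1]) - int(dst[1]))
    return src != dst and (df == 0 or dr == 0 or df == dr)

# Hypothetical LLM output: the second "move" isn't a queen move at all.
for src, dst in [("d1", "h5"), ("d1", "e5")]:
    if not queen_move_geometry_ok(src, dst):
        print(f"illegal geometry from the model: Q{src}-{dst}")
```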
Man that sucks