this post was submitted on 07 Apr 2025
248 points (97.3% liked)

tumblr

[–] jjjalljs@ttrpg.network 17 points 3 hours ago (1 children)

I feel like it's an unpopular take, but people are like "I used ChatGPT to write this email!" and I'm like: you should be able to write an email.

I think a lot of people are too quick to neglect core skills and let them atrophy. You should know how to communicate. It's a skill that needs practice.

[–] Denvil@lemmy.one 1 point 3 hours ago (1 children)

I think it is a good learning tool if you use it as such. I use it for help with Google Sheets functions (not my job or anything important, just something I'm doing), and while it rarely produces a working function outright, it can set me on the right track with functions I didn't even know existed.
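
(A concrete illustration of the kind of discovery being described, with made-up ranges and labels: QUERY is a real but easy-to-miss Google Sheets function that filters and aggregates a range using SQL-like syntax.)

    =QUERY(A1:C100, "select A, sum(C) where B = 'done' group by A", 1)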

[–] jjjalljs@ttrpg.network 5 points 3 hours ago

We used to have web forums for that, and they worked pretty okay without the costs of LLMs.

This is a little off topic but we really should, as a species, invest more heavily in public education. People should know how to read and follow instructions, like the docs that come with Google sheets.

[–] Whats_your_reasoning@lemmy.world 4 points 3 hours ago (1 children)

Oh hey it's me! I like using my brain, I like using my own words, I can't imagine wanting to outsource that stuff to a machine.

Meanwhile, I have a friend who's skeptical about the practical uses of LLMs, but who insists that they're "good for porn." I can't help but see modern AI as a massive waste of electricity and water, furthering the destruction of the climate with every use. I don't even like it being a default on search engines, so the idea of using it just to regularly masturbate feels ... extremely selfish. I can see trying it as a novelty, but for a regular occurrence? It's an incredibly wasteful use of resources just so your dick can feel nice for a few minutes.

[–] Foxfire@pawb.social 2 points 2 hours ago (1 children)

Using it for porn sounds funny to me given the whole concept of "rule 34" being pretty ubiquitous. If it exists, there's porn of it! Even from a completely pragmatic perspective, it sounds like generating pictures of cats. Surely there is a never-ending ocean of cat pictures which you can search and refine, so do you really need to bring a hallucination machine into the mix? Maybe your friend has an extremely specific fetish list that nothing else will scratch? That's all I can think of.

[–] Whats_your_reasoning@lemmy.world 1 point 11 minutes ago* (last edited 7 minutes ago)

He says he uses it to do sexual roleplay chats, treats it kinda like a make-your-own-adventure porn story. I don't know if he's used it for images.

[–] Kolanaki@pawb.social 3 points 3 hours ago* (last edited 3 hours ago)

I've tried a few GenAI things, and didn't find them to be any different than CleverBot back in the day. A bit better at generating a response that seems normal, but asking it serious questions always generated questionably accurate responses.

If you just had a discussion with it about what your favorite super hero is, it might sound like an actual average person (including any and all errors about the subject it might spew), but if you try to use it as a knowledge base, it's going to be bad because it is not intelligent. It does not think. And it's not trained well enough to only give 100% factual answers, even if it only had 100% factual data entered into it to train on. It can mix two different subjects together and create an entirely new, bogus response.

[–] nelly_man@lemmy.world 5 points 4 hours ago* (last edited 3 hours ago)

I finally played around with it for some coding stuff. At first, I tried building the beginnings of a chess engine, and it did OK for a quick-and-dirty implementation. It was cool that it could create a zip file with the project files it was generating, but it couldn't populate that zip with the code from some of the earlier prompts. Overall, it didn't seem that worthwhile for me (as an experienced software engineer who doesn't have issues starting projects).

I then uploaded a file from a chess engine that I had already implemented and asked for a code review, and that went better. It identified two minor bugs and was able to explain what the code did. It was also able to generate some other code to make use of this class. When I asked if there were some existing projects that I could have referenced instead of writing this myself, it pointed out a couple others and explained the ways they differed. For code review, it seemed like a useful tool.

I then asked it for help with a math problem that I had been working on related to a different project. It came up with a way to solve it using dynamic programming, and then I asked it to work through a few examples. At one point, it returned numbers that were far too large, so I asked about how many cases were excluded by the rules. In the response, it showed a realization that something was incorrect, so it gave a new version of the code that corrected the issue. For this one, it was interesting to see it correct its mistake, but it ultimately still relied on me catching it.

[–] TabbsTheBat@pawb.social 65 points 9 hours ago (4 children)

The number of times I've seen a question answered with "I asked ChatGPT and blah blah blah," where the answer is complete bullshit, makes me wonder who thinks asking the bullshit machine™ questions with a concrete answer is a good idea.

[–] Tar_alcaran@sh.itjust.works 17 points 6 hours ago

This is your reminder that LLMs are associative models. They produce things that look like other things. If you ask a question, it will produce something that looks like the right answer. It might even BE the right answer, but LLMs care only about looks, not facts.

[–] CarbonatedPastaSauce@lemmy.world 9 points 6 hours ago (1 children)

A lot of people really hate uncertainty and just want an answer. They do not care much if the answer is right or not. Being certain is more important than being correct.

[–] scintilla@lemm.ee 2 points 3 hours ago (1 children)

Why not just read the first part of a Wikipedia article if that's what they want, though? It's not the be-all-end-all source, but it's better than putting the same question to a machine known to make things up.

Because the AI propaganda machine is not exactly advertising the limitations, and the general public sees LLMs as a beefed up search engine. You and I know that’s laughable, but they don’t. And OpenAI sure doesn’t want to educate people - that would cost them revenue.

[–] can@sh.itjust.works 3 points 6 hours ago

I don't see the point either if you're just going to copy verbatim. OP could always just ask AI themselves if that's what they wanted.

[–] Diplomjodler3@lemmy.world 12 points 9 hours ago

The stupid and the lazy.

[–] adam_y@lemmy.world 11 points 8 hours ago (1 children)

Spent this morning reading a thread where someone was following ChatGPT instructions to install "Linux" and couldn't understand why it was failing.

[–] Tar_alcaran@sh.itjust.works 7 points 6 hours ago (1 children)

Hmm, I find ChatGPT is pretty decent at very basic tech support when asked with the correct jargon, like "How do I add a custom string to cell formatting in Excel?"

It absolutely sucks for anything specific, or asked with the wrong jargon.
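
(For what it's worth, that particular question has a short, checkable answer: in Excel's Format Cells > Custom dialog, quoted text inside a number format code is displayed verbatim alongside the number. The "kg" unit here is just an example.)

    0.00" kg"    (formats 3.5 as 3.50 kg)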

[–] adam_y@lemmy.world 10 points 6 hours ago* (last edited 6 hours ago) (1 children)

Good for you buddy.

Edit: sorry that was harsh. I'm just dealing with "every comment is a contrarian comment" day.

Sure, GPT is good at basic search functionality for obvious things, but why choose that when there are infinitely better and more reliable sources of information?

There's a false sense of security coupled to the notion of "asking" an entity.

Why not engage in a community that can support answers? I've found the Linux community (in general) to be really supportive and asking questions is one way of becoming part of that community.

The forums of the older internet were great at this... creating community out of commonality. Plus, they were largely self-correcting in a way in which LLMs are not.

So not only are folk being fed gibberish, it is robbing them of the potential to connect with similar humans.

And sure, it works for some cases, but they seem to be suboptimal, infrequent or very basic.

[–] Tar_alcaran@sh.itjust.works 1 points 5 hours ago

Oh, I fully agree with you. One of the main things about asking super basic things is that when it inevitably gets them wrong, at least you won't waste that much time. And it's inherently parasitic: basic questions are mostly right with LLMs because thousands of people have answered the basic questions thousands of times.

[–] Kecessa@sh.itjust.works 9 points 9 hours ago

Used it once to ask it silly questions to see what the fuss is all about, never used it again and hopefully never will.

[–] Xerxos@lemmy.ml 12 points 9 hours ago (10 children)

I don't get how so many people carry their computer illiteracy as a badge of honor.

ChatGPT is useful.

Is it as useful as Tech Evangelists praise it to be? No. Not yet - and perhaps never will be.

But I sure do love to let it write my emails to people I don't care for but don't want to anger by sending my default three-word replies.

It's a tool to save time. Use it or pay with your time if you willfully ignore it.

[–] rtxn@lemmy.world 54 points 8 hours ago* (last edited 8 hours ago) (8 children)

Tech illiteracy. Strong words.

I'm a sysadmin at the IT faculty of a university. I have a front-row seat to witness the pervasive mental decline that is the result of chatbots. I have remote access to all lab computers. I see students copy-paste the exercise questions into a chatbot and paste the output back. Some are unwilling to write a single line of code by themselves. One of the network/cybersecurity teachers is a friend; he saw attendance drop by half when he revealed he'd block access to chatbots during exams. Even the dean, who was elected because of his progressive views on machine learning, laments new students' unwillingness to learn. It's actual tech illiteracy.

I've sworn off all things AI because I strongly believe that its current state is a detriment to society at large. If a person, especially a kid, is not forced to learn and think, and is allowed to defer to the output of a black box of bias and bad data, it will damage them irreversibly. I will learn every skill that I need, without depending on AI. If you think that makes me an old man yelling at clouds, I have no kind words in response.

[–] grrgyle@slrpnk.net 4 points 5 hours ago

Speaking of being old: just as there are noticeable differences between people who grew up before and after ready internet access, I think there will be a similar divide between people who did their learning before and after LLMs.

Even if you don't use them directly, there's so much more useless slop than there used to be online. I'll make it five minutes into a how-to article before realizing it doesn't actually make any sense when you look at the whole thing, let alone have anything interesting or useful to say.

[–] Neuromancer49@midwest.social 20 points 7 hours ago* (last edited 5 hours ago)

x 1000. ChatGPT came out between the time I started and finished grad school. The difference in the students I TA'd at the beginning and end of my career is mind-melting. Some of this has to do with COVID losses, though.

But we shouldn't just call out the students. There are professors who are writing fucking grants and papers with it. Can it be done well? Yes. But the number of grants talking about "vegetative electron microscopy," or introductions whose first sentence reads "As a language model, I do not have opinions about the history of particle models," or completely nonsensical graphics generated by spicy Photoshop, is baffling.

Some days it feels like LLMs are going to burn down the world. I have a hard time being optimistic about them, but even the ancient Greeks complained about writing. It just feels different this time, ya know?

ETA: Just as much of the onus is on grant reviewers and journal editors for uncritically accepting slop into their publications and awarding money to poorly written grants.

[–] Tar_alcaran@sh.itjust.works 4 points 6 hours ago

"If a person, especially a kid, is not forced to learn and think, and is allowed to defer to the output of a black box of bias and bad data, it will damage them irreversibly."

I grew up, mostly, in the time of digital search, but far enough back that they still resembled the old card-catalog system. Looking for information was a process that you had to follow, and the mere act of doing that process was educational and helped order your thoughts and memory. When it's physically impossible to look for two keywords at the same time, you need to use your brain or you won't get an answer.

And while it's absolutely amazing that I can now just type in a random question and get an answer, or at least a link to some place that might have the answer, this is a real problem in how people learn to mentally process information.

A true expert can explain things in simple terms, not because they learned them in simple terms or think about them in simple terms, but because they have the ability to rephrase and reorder information on the fly to fit into a simplified model of the complex system they have in their mind. That's an extremely important skill, and it's getting more and more rare.

If you want to test this, ask people for an analogy. If you can't make an analogy, you don't truly understand the subject (or the subject involves subatomic particles, relativity, or topology, and using words to talk about it is already basically an analogy).

[–] essell@lemmy.world 6 points 9 hours ago (2 children)

As an older techy I'm with you on this, having seen this ridiculous fight so many times.

Whenever a new tech comes out and gets big attention, you have the tech companies overhyping it, saying everyone has to have it.

And you have the proud luddites who talk like everyone else is dumb and they're the only ones capable of seeing the downsides of tech.

"Buy an iPhone, it'll Change your life!"

"Why do I need to do anything except phone people and the battery only lasts one day! It'll never catch on"

"Buy a Satnav, it'll get you anywhere!"

"That satnav drove a woman into a lake!"

"Our AI is smart enough to run the world!"

"This is just a way to steal my words like that guy who invented cameras to steal people's souls!"

🫤

Tech was never meant to do your thinking for you. It's a tool. Learn how to use it or don't, but if you use tools right, 10,000 years of human history says that's helpful.

[–] Tar_alcaran@sh.itjust.works 6 points 6 hours ago (1 children)

The thing is, some "tech" is just fucking dumb, and should have never been done. Here are just a few small examples:

"Get connected to the city gas factory, you can have gaslamps indoors and never have to bother with oil again!"
"Lets bulldoze those houses to let people drive through the middle of our city"
"In the future we'll all have vacuum tubes in our homes to send and deliver mail"
"Airships are the future of transatlantic travel"
"Blockchain will revolutionize everything!"
"People can use our rockets to travel across the ocean"
"Roads are a great place to put solar panels" "LLMs are a great way of making things"

[–] essell@lemmy.world 2 points 5 hours ago (1 children)

There are two kinds of scientific progress: the methodical experimentation and categorization which gradually extend the boundaries of knowledge, and the revolutionary leap of genius which redefines and transcends those boundaries.

Acknowledging our debt to the former, we yearn nonetheless for the latter.

 -- Academician Prokhor Zakharov,
[–] Tar_alcaran@sh.itjust.works 4 points 5 hours ago* (last edited 5 hours ago) (1 children)

Always upvote Alpha Centauri!

EDIT: and as a slightly more content-related answer: I picked those examples because there's a range of reasons why these things were stupid. Some turned out to be stupid afterwards, like building highly polluting gasworks in the middle of cities, or airships. Some were always stupid, even in their very principles, like using rockets for air travel, solar-panel roads, or blockchain.

LLMs are definitely in the latter category. Like solar roadways, blockchain, or commute-by-rocket, the "solution" just doesn't have a problem to solve or a market.

[–] essell@lemmy.world 2 points 5 hours ago* (last edited 1 hour ago)

I agree. People are often dumb, especially the smart ones.

When you go through life seeing the world differently it's easy to assume that other people just don't get it, that they're the problem as always, when they say your invention is useless, misguided, inappropriate or harmful.

No matter how smart these people are, reality always catches up in the end, hopefully with as few casualties as possible.

[–] CarbonatedPastaSauce@lemmy.world 6 points 6 hours ago (1 children)

Not all tools are worthy of the way they are being used. Would you use a hammer that had a 15% chance of smashing you in the face when you swung it at a nail? That's the problem a lot of us see with LLMs.

[–] essell@lemmy.world 3 points 6 hours ago (1 children)

No, but I do use hammers despite the risks.

Because I'm aware of the risks and so I use hammers safely, despite the occasional bruised thumb.

[–] CarbonatedPastaSauce@lemmy.world 3 points 6 hours ago (2 children)

You missed my point. The hammers you're using aren't 'wrong', i.e. smacking you in the face 15% of the time.

Said another way, if other tools were as unreliable as ChatGPT, nobody would use them.

[–] essell@lemmy.world 5 points 6 hours ago (1 children)

You've missed my point.

ChatGPT can be wrong, but it can't hurt you unless you assume it's always right.

[–] CarbonatedPastaSauce@lemmy.world 3 points 5 hours ago (1 children)

And assuming it's always right is what the general public is doing.

[–] essell@lemmy.world 3 points 5 hours ago

Like the lady who drove into the lake because the satnav told her to.

[–] xor@lemmy.blahaj.zone 2 points 5 hours ago (1 children)

Hammers are unreliable.

You can hit your thumb if you use the tool wrong, and it can break, doing damage, if e.g. it is not stored properly. When you use a hammer, you accept these risks, and can choose to take steps to mitigate them by storing it properly, taking care when using it and checking it's not loose before using it.

In the same regard, if you use LLMs for what they're good at, and verify their outputs, they can be useful tools.

"LLMs pointless because I can write a shopping list myself" is like saying "hammers are pointless because I can just use this plank instead". Sure, you can do that, but there's other scenarios where a hammer would be kinda handy.

[–] CarbonatedPastaSauce@lemmy.world 2 points 5 hours ago (1 children)

"if you use LLMs for what they're good at, and verify their outputs"

This is the part the general public is not prepared for, and why the whole house of cards falls apart.

[–] xor@lemmy.blahaj.zone 2 points 3 hours ago

I agree - but that's user error, not a bad tool

[–] hperrin@lemmy.ca 11 points 9 hours ago

I don’t know how to feel about this. I need to ask ChatGPT.

[–] Lembot_0001@lemm.ee 7 points 10 hours ago (3 children)

You can always ask ChatGPT how to proceed with such trivial tasks. It is too dumb to write code but it can suggest how to get access to ChatGPT.
