this post was submitted on 07 Apr 2025
502 points (95.6% liked)

tumblr

4067 readers
648 users here now

Welcome to /c/tumblr, a place for all your tumblr screenshots and news.

Our Rules:

  1. Keep it civil. We're all people here. Be respectful to one another.

  2. No sexism, racism, homophobia, transphobia or any other flavor of bigotry. I should not need to explain this one.

  3. Must be tumblr related. This one is kind of a given.

  4. Try not to repost anything posted within the past month. Beyond that, go for it. Not everyone is on every site all the time.

  5. No unnecessary negativity. Just because you don't like a thing doesn't mean that you need to spend the entire comment section complaining about said thing. Just downvote and move on.


founded 2 years ago
(page 2) 50 comments
[–] Kusimulkku@lemm.ee 2 points 10 hours ago (10 children)

Some people are very proud of not knowing things

[–] glitchdx@lemmy.world 17 points 1 day ago (6 children)

Wait, people actually try to use gpt for regular everyday shit?

I do lorebuilding shit (in which gpt's "hallucinations" are a feature not a bug), or I'll just ramble at it while drunk off my ass about whatever my autistic brain is hyperfixated on. I've given up on trying to do coding projects, because gpt is even worse at it than I am.

[–] jjjalljs@ttrpg.network 72 points 1 day ago (7 children)

I feel like it's an unpopular take but people are like "I used chat gpt to write this email!" and I'm like you should be able to write email.

I think a lot of people are too excited to neglect core skills and let them atrophy. You should know how to communicate. It's a skill that needs practice.

[–] minorkeys@lemmy.world 23 points 1 day ago* (last edited 1 day ago) (1 children)

This is a reality as most people will abandon those skills, and many more will never learn them to begin with. I'm actually very worried about children who will grow up learning to communicate with AI and being dependent on it to effectively communicate with people and navigate the world, potentially needing AI as a communication assistant/translator.

AI is patient, always available, predicts desires and effectively assumes intent. If I type a sentence with spelling mistakes, ChatGPT knows what I meant 99% of the time. This will mean children don't need to spell or structure sentences correctly to effectively communicate with AI, which means they don't need to think in a way other human beings can understand, as long as an AI does. The more time kids spend with AI, the less developed their communication skills will be with people. GenZ and GenA already exhibit these issues without AI. Most people experience this when communicating across generations, as language and cultural context change. This will emphasize those differences to a problematic degree.

Kids will learn to communicate with people and with AI, but those two styles will be radically different. AI communication will be lazy, saying only enough for the AI to understand. With communication history, which is inevitable tbh, and AI improving every day, it can develop a unique communication style for each child, what amounts to a personal language only the child and AI can understand. AI may learn to understand a child better than their parents do and make the child dependent on AI to effectively communicate, creating a corporate filter on communication between human beings. The implications of this kind of dependency are terrifying. Your own kid talks to you through an AI translator; their teachers, friends, all their relationships could be impacted.

I have absolutely zero belief that the private interests of these technology owners will benefit anyone other than themselves, and at the expense of human freedom.

[–] Soup@lemmy.world 8 points 1 day ago (1 children)

I know someone who very likely had ChatGPT write an apology for them once. Blew my mind.

[–] Lemminary@lemmy.world 8 points 23 hours ago (1 children)

I use it to communicate with my landlord sometimes. I can tell ChatGPT all the explicit shit exactly as I mean it and it'll shower it and comb it all nice and pretty for me. It's not an apology, but I guess my point is that some people deserve it.

[–] Soup@lemmy.world 6 points 12 hours ago (7 children)

You don’t think being able to communicate properly and control your language, even/especially for people you don’t like, is a skill you should probably have? It’s not that much more effort.

[–] TabbsTheBat@pawb.social 110 points 1 day ago (9 children)

The amount of times I've seen a question answered by "I asked chatgpt and blah blah blah" and the answer being completely bullshit makes me wonder who thinks asking the bullshit machine™ questions with a concrete answer is a good idea

[–] LarmyOfLone@lemm.ee -4 points 11 hours ago

We're in a post-truth world where most web searches about important topics give you bullshit answers. But LLMs have read basically all the articles already and have at least the potential to make deductions and associations about them - like "this belongs to propaganda network 4335", or "the source of this claim is someone who has engaged in deception before". Something like a complex fact-check machine.

This is sci-fi currently because it's an ocean wide but shallow: it can't think deeply or analyze well. But if you press GPT about something, it can give you different "perspectives". The next generations might become more useful at filtering out fake propaganda. So you might get answers that are sourced and referenced, and which can also reference or dispute wrong answers / talking points and their motivation. And possibly what emotional manipulation and logical fallacies they use to deceive you.

[–] Tar_alcaran@sh.itjust.works 51 points 1 day ago (1 children)

This is your reminder that LLMs are associative models. They produce things that look like other things. If you ask a question, it will produce something that looks like the right answer. It might even BE the right answer, but LLMs care only about looks, not facts.
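A toy sketch of that associative idea (this is not how real LLMs work internally, just the flavor of "continue with whatever usually follows"): a bigram model only records which word followed which in its training text, then samples continuations from those counts. Every output *looks like* the training data, but the model has no concept of whether any of it is true.

```python
import random
from collections import defaultdict

def train(text):
    # Record, for each word, every word that followed it in training.
    follows = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, length=5, seed=0):
    # Walk forward, always picking a word that plausibly follows the
    # previous one. Plausible-looking, truth-indifferent.
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

model = train("the cat sat on the mat the dog sat on the rug")
print(generate(model, "the", length=8))
```

Every adjacent word pair in the output occurred somewhere in the training text, which is the whole criterion; whether the sentence describes anything real never enters into it.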

[–] CarbonatedPastaSauce@lemmy.world 15 points 1 day ago (2 children)

A lot of people really hate uncertainty and just want an answer. They do not care much if the answer is right or not. Being certain is more important than being correct.

[–] Lucky_777@lemmy.world 5 points 1 day ago (1 children)

Using AI is helpful, but by no means does it replace your brain. Sure, it can write emails and really helps with code, but for anything beyond basic troubleshooting and "short" code snippets, it's an assistant, not an answer.

[–] Lemminary@lemmy.world 3 points 23 hours ago

Yeah, I don't get the people who think it'll replace your brain. I find it useful for learning even if it's not always entirely correct but that's why I use my brain too. Even if it gets me 60% of the way there, that's useful.

[–] ArchmageAzor@lemmy.world 5 points 1 day ago (1 children)

I use ChatGPT mainly for recipes, because I'm bad at that. And it works great, I can tell it "I have this and this and this in my fridge and that and that in my pantry, what can I make?" and it will give me a recipe that I never would have come up with. And it's always been good stuff.

And I do learn from it. People say you can't learn from using AI, but I've gotten better at cooking thanks to ChatGPT. Just a while ago I learned about deglazing.

[–] turnip@lemm.ee 2 points 1 day ago

You should try this thing, it's pretty neat, just press Maya or Miles. Though it requires a microphone, so you may have to open it on your phone.

https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice#demo

[–] inclementimmigrant@lemmy.world 1 points 20 hours ago

I use it somewhat regularly to send snarky emails to coworkers: professional, buzzword-overloaded responses to mundane inquiries.

I use it every so often to help craft a professional go fuck yourself email too.

[–] Whats_your_reasoning@lemmy.world 11 points 1 day ago (4 children)

Oh hey it's me! I like using my brain, I like using my own words, I can't imagine wanting to outsource that stuff to a machine.

Meanwhile, I have a friend who's skeptical about the practical uses of LLMs, but who insists that they're "good for porn." I can't help but see modern AI as a massive waste of electricity and water, furthering the destruction of the climate with every use. I don't even like it being a default on search engines, so the idea of using it just to regularly masturbate feels ... extremely selfish. I can see trying it as a novelty, but as a regular occurrence? It's an incredibly wasteful use of resources just so your dick can feel nice for a few minutes.

[–] Foxfire@pawb.social 8 points 1 day ago (2 children)

Using it for porn sounds funny to me given the whole concept of "rule 34" being pretty ubiquitous. If it exists, there's porn of it! Even from a completely pragmatic perspective, it sounds like generating pictures of cats. Surely there is a never-ending ocean of cat pictures which you can search and refine; do you really need to bring a hallucination machine into the mix? Maybe your friend has an extremely specific fetish list that nothing else will scratch? That's all I can think of.

[–] Kolanaki@pawb.social 9 points 1 day ago* (last edited 1 day ago) (3 children)

I've tried a few GenAI things, and didn't find them to be any different than CleverBot back in the day. A bit better at generating a response that seems normal, but asking it serious questions always generated questionably accurate responses.

If you just had a discussion with it about what your favorite super hero is, it might sound like an actual average person (including any and all errors about the subject it might spew), but if you try to use it as a knowledge base, it's going to be bad because it is not intelligent. It does not think. And it's not trained well enough to only give 100% factual answers, even if it only had 100% factual data entered into it to train on. It can mix two different subjects together and create an entirely new, bogus response.

[–] adam_y@lemmy.world 18 points 1 day ago (2 children)

Spent this morning reading a thread where someone was following chatGPT instructions to install "Linux" and couldn't understand why it was failing.

[–] Tar_alcaran@sh.itjust.works 12 points 1 day ago (1 children)

Hmm, I find ChatGPT is pretty decent at very basic tech support when asked with the correct jargon. Like "How do I add a custom string to cell formatting in Excel?"

It absolutely sucks for anything specific, or asked with the wrong jargon.
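For the record, that particular question does have a compact answer in Excel itself (Format Cells → Number → Custom): text wrapped in double quotes inside a custom number format code is displayed literally alongside the number. These are standard Excel custom-format codes:

```
0.00" kg"      12.5 displays as 12.50 kg
0" items"      7 displays as 7 items
```

Well-documented, frequently-asked questions like this are exactly the class of thing the comment above says ChatGPT tends to get right.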

[–] adam_y@lemmy.world 14 points 1 day ago* (last edited 1 day ago) (1 children)

Good for you buddy.

Edit: sorry that was harsh. I'm just dealing with "every comment is a contrarian comment" day.

Sure, GPT is good at basic search functionality for obvious things, but why choose that when there are infinitely better and more reliable sources of information?

There's a false sense of security coupled to the notion of "asking" an entity.

Why not engage in a community that can support answers? I've found the Linux community (in general) to be really supportive and asking questions is one way of becoming part of that community.

The forums of the older internet were great at this... creating community out of commonality. Plus, they were largely self-correcting in a way in which LLMs are not.

So not only are folk being fed gibberish, it is robbing them of the potential to connect with similar humans.

And sure, it works for some cases, but they seem to be suboptimal, infrequent or very basic.

[–] SynopsisTantilize@lemm.ee 2 points 1 day ago

I'm using it to learn to code! If anyone wants to try my game let me know I'll figure out a way to send it.
