this post was submitted on 12 Oct 2025
1215 points (99.2% liked)

Programmer Humor

Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code, there's also Programming Horror.

[–] MonkderVierte@lemmy.zip 256 points 2 weeks ago (3 children)
[–] WorldsDumbestMan@lemmy.today 38 points 2 weeks ago

LMAO! Well, at least I can cry to Claude about it...

[–] Aneb@lemmy.world 11 points 2 weeks ago (1 children)

This is why I'm on Lemmy. You guys get me

[–] buttnugget@lemmy.world 5 points 2 weeks ago

This is 100% every normal person’s experience on there at one time or another. Having it be a dupe is perfect.

[–] Petter1@discuss.tchncs.de 6 points 2 weeks ago (2 children)

My boiled egg has to be just a little bit liquid on the inside 🌚

[–] MonkderVierte@lemmy.zip 8 points 2 weeks ago* (last edited 2 weeks ago) (1 children)
[–] Petter1@discuss.tchncs.de 2 points 2 weeks ago (1 children)

Sticky =/= Liquid

🧐😜💦

[–] MonkderVierte@lemmy.zip 4 points 2 weeks ago (1 children)
[–] Petter1@discuss.tchncs.de 3 points 1 week ago

❤️‍🔥

[–] BossDj@piefed.social 4 points 2 weeks ago

Downvote harder damnit! I think I broke my screen

[–] magic_lobster_party@fedia.io 117 points 2 weeks ago (2 children)

That’s a stupid question; it doesn’t deserve an answer. You should be ashamed you even thought about it.

[–] Sonotsugipaa@lemmy.dbzer0.com 83 points 2 weeks ago (2 children)

Your description of the problem has words I've heard before, like "a" and "even"; marked as duplicate.

[–] jessica@beehaw.org 22 points 2 weeks ago

Closing as selecting the correct tool for the job is opinion-based

Whenever I see this, I know the answers will contain useful curated facts about each tool...

[–] ronigami@lemmy.world 6 points 2 weeks ago

This has been asked before. The thread was closed with zero comments but suck it.

[–] fibojoly@sh.itjust.works 4 points 2 weeks ago

Now, that's the real SO experience.

[–] Wispy2891@lemmy.world 39 points 2 weeks ago (2 children)

I found a workaround for this:

I start with "a buggy LLM wrote this piece of code...", then I paste my code for review so they can shit on it and bash it: "You're absolutely right: that LLM made a disaster. This is a mess. Look how inefficient this function is; here is how it can be improved..."

[–] melfie@lemy.lol 2 points 2 weeks ago

I shall try that. 🤔

[–] pinchy@lemmy.world 38 points 2 weeks ago (2 children)

SO: “that’s a stupid question!” GPT: “that’s a great question!”

[–] jumping_redditor@sh.itjust.works 18 points 2 weeks ago

stack overflow at least is polite enough to call you a moron for asking

[–] Wynnstan@lemmy.world 38 points 2 weeks ago

ChatGPT is a narcissist's dream app.

[–] anton@lemmy.blahaj.zone 35 points 2 weeks ago

Well, if I'm asking for help, it's probably because I am wrong about something. So I know who to trust.

[–] finitebanjo@piefed.world 26 points 2 weeks ago

Unfortunately, this aged like milk: StackOverflow was an early adopter of the LLM fad.

[–] cupcakezealot@piefed.blahaj.zone 25 points 2 weeks ago (1 children)

PieFed is working on solution and answer features, and I can't wait for Stack Overflow-like communities without the AI "enhancements"

[–] MonkeMischief@lemmy.today 2 points 2 weeks ago

This sounds pretty exciting and I keep hearing more and more about piefed lately. I'm kinda excited for this new burgeoning era of the federated 'Net!

[–] goatinspace@feddit.org 16 points 2 weeks ago
[–] Kolanaki@pawb.social 16 points 2 weeks ago (1 children)

People on Social Media: "You absolutely are an asshole."

[–] mstrk@lemmy.world 14 points 2 weeks ago (5 children)

I usually combine both to unblock myself. Lately, SO, repository issues, or just going straight to the documentation of the package/crate seem to give me faster outcomes.

People have suggested that my prompts might not be optimal for the LLM. One even recommended I take a prompt engineering boot camp. I'm starting to think I’m too dumb to use LLMs to narrow my research sometimes. I’m fine with navigating SO toxicity, though it’s not much different from social media in general. It’s just how people are. You either take the best you can from it or let other people’s bad days affect yours.

[–] Sonotsugipaa@lemmy.dbzer0.com 34 points 2 weeks ago

One even recommended I take a prompt engineering boot camp

[–] Darkcoffee@sh.itjust.works 18 points 2 weeks ago (1 children)

They always accuse the user of being the problem when using glorified if-else machines.

[–] xthexder@l.sw0.com 16 points 2 weeks ago (1 children)

LLMs have a bit of RNG sprinkled in with the if-else to spice things up.

[–] Darkcoffee@sh.itjust.works 9 points 2 weeks ago

Have some RNG in your garbage AI, as a treat.

[–] marcos@lemmy.world 10 points 2 weeks ago

If SO doesn't have the answer to your question, LLMs won't either. You can't improve that by prompting "better".

They are just an easier way to search for it. They don't make answers up (or rather, they do, but when they do that, they are always wrong).

[–] ArsonButCute@lemmy.dbzer0.com 8 points 2 weeks ago (2 children)

If you're planning on using LLMs for coding advice, may I recommend self-hosting a model and adding the documentation and repositories as context?

I use a 1.5B Qwen model (mega dumb), but with no context limit I can attach the documentation for the language I'm using, plus the files from the repo I'm working in (always a local repo in my case). I can usually explain to the LLM what I'm doing, what I'm trying to accomplish, and what I've tried, and it will generate snippets that at the very least point me in the right direction, and more often than not solve the problem (after minor tweaks, because the dumb model is not so good at coding). A rough sketch of the workflow is below.
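For anyone who wants to try it, here's a minimal sketch of that "docs and repo files as context" workflow. It assumes a local Ollama server on the default port; the model tag, file paths, and question are placeholders, so adapt them to your own setup:

```python
# Minimal sketch: stuff docs + repo files into a small local model's context.
# Assumes an Ollama server at localhost:11434 with a small model already
# pulled (e.g. `ollama pull qwen2.5-coder:1.5b`). All paths are placeholders.
from pathlib import Path
import requests

MODEL = "qwen2.5-coder:1.5b"  # placeholder tag; any small local model works

def read_files(paths):
    """Concatenate files into one context blob, labelled by filename."""
    return "\n\n".join(f"=== {p} ===\n{Path(p).read_text()}" for p in paths)

# Attach the language docs and the repo files you're working on.
context = read_files(["docs/language_reference.md", "src/main.rs"])

question = (
    "Here is the relevant documentation and my repo files:\n"
    f"{context}\n\n"
    "Explain what I'm doing wrong parsing the config at startup, "
    "and suggest a snippet that points me in the right direction."
)

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": question}],
        "stream": False,
        "options": {"num_ctx": 32768},  # widen the context window for big docs
    },
    timeout=300,
)
print(resp.json()["message"]["content"])
```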

[–] MonkeMischief@lemmy.today 6 points 2 weeks ago* (last edited 2 weeks ago)

That's a really cool idea actually. I never considered that you could use such a crazy low quant to, it sounds like, temporarily "train" it for the task at hand instead of having to use up countless watt hours training the model itself!

That's how I use these things, too. Not to "help me code", but as a fancy search engine that can generally nudge me towards a solution I can work out myself.

[–] mstrk@lemmy.world 2 points 2 weeks ago (1 children)

I do use the 1.5B of whatever the latest Ollama model is, with Open WebUI as the frontend, for my personal use. Although I can upload files and search the web, it's too slow on my machine.

[–] ArsonButCute@lemmy.dbzer0.com 3 points 2 weeks ago (1 children)

If you've got a decent Nvidia GPU and are hopping on Linux, look into the Kobold-cpp Vulkan backend; in my experience it works far better than the CUDA backend and is astronomically faster than the CPU-only backend.
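In case it saves someone a search, this is roughly what launching it looks like. A sketch only: the model path is a placeholder, and the flag names are from memory, so double-check them against `koboldcpp.py --help` before relying on them:

```python
# Rough sketch: launch KoboldCpp with the Vulkan backend.
# Assumes koboldcpp is cloned/installed and a GGUF model sits at MODEL_PATH.
import subprocess

MODEL_PATH = "models/qwen2.5-coder-1.5b.gguf"  # placeholder path

subprocess.run([
    "python", "koboldcpp.py", MODEL_PATH,
    "--usevulkan",              # Vulkan backend (works on NVIDIA and AMD)
    "--gpulayers", "99",        # offload as many layers as fit in VRAM
    "--contextsize", "32768",   # big window for docs + repo files
    "--port", "5001",           # then point your frontend at localhost:5001
])
```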

[–] mstrk@lemmy.world 3 points 1 week ago (1 children)

Will look into that when I have some money to invest. Thank you 💪

[–] ArsonButCute@lemmy.dbzer0.com 3 points 1 week ago

When/if you do, an RTX 3070 LHR (about $300 new) is just about the BARE MINIMUM for GPU inferencing. It's what I use; it gets the job done, but I often find context limits too small to be usable with larger models.

If you wanna go team red, Vulkan should still work for inferencing, and you have access to options with significantly more VRAM, allowing you to use larger models more effectively. I'm not sure about speed, though; I haven't personally used AMD's GPUs since around 2015.

[–] smh@slrpnk.net 2 points 2 weeks ago

I've been having good luck with Kimi K2 for CSS/bootstrap stuff, and boilerplate API calls (example: update x to y, pulling x and y from this .csv). I appreciate that it cites its sources because then I can go read more and hopefully become more self-reliant when looking up documentation.

[–] tomiant@programming.dev 8 points 2 weeks ago* (last edited 2 weeks ago)

User: "Can I ask a q..."

Stackoverflow: "NO!"

[–] dumbass@aussie.zone 8 points 2 weeks ago (1 children)
[–] Sonotsugipaa@lemmy.dbzer0.com 5 points 2 weeks ago (1 children)

That's just the average stackoverflow comment

[–] copd@lemmy.world 8 points 2 weeks ago (1 children)

Answer: Why don't you try searching for the question first?

Me (confused face): How tf do you think I found this page?


Answer: Why are you doing it that way?

Me (sighs): Because I'm an idiot and don't know what I'm doing, is that what you wanted to hear?

[–] Sonotsugipaa@lemmy.dbzer0.com 14 points 2 weeks ago

Answer: Why don’t you try searching for the question first?

Me (confused face): How tf do you think I found this page?

[–] rozodru@piefed.social 4 points 2 weeks ago

It's getting worse, too.

Just this morning I asked Claude a very basic question, and it hallucinated the answer three times in a row. Zero correct solutions. The first answer hallucinated what a certain CLI daemon does; the second offered an alternative that was itself hallucinated, since the thing didn't exist at all; the third hallucinated how another application works, as well as the git repo for said application (A: the application doesn't even do the thing Claude described, and B: the repo it provided had NOTHING to do with the application it described). I just gave up, went to my searx, and found the answer myself. I shouldn't have been so lazy.

ChatGPT isn't much better anymore.

[–] b0ber@lemmy.world 3 points 1 week ago

Every second, the most ridiculous question gets positive feedback, so yeah, who knows where we'll end up from here.

[–] stupidcasey@lemmy.world 3 points 2 weeks ago

I knew you write a function like this: f(x). Stack Overflow has no idea what they are talking about.

[–] Itdidnttrickledown@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

And neither can get it right over half the time.

[–] BilSabab@lemmy.world 2 points 1 week ago

Man, I miss those batshit crazy threads about esolangs!

[–] _cryptagion@anarchist.nexus 2 points 2 weeks ago

two of the worst places to copy code from.
