this post was submitted on 22 Jun 2025
52 points (94.8% liked)

Programming

all 13 comments
[–] mcv@lemm.ee 1 points 8 hours ago

I've always taken it to mean: put your effort into making it work first, before worrying about optimization. Once it works, you can decide whether it's worth putting effort into optimization. But if you do that first, you might optimize for the wrong thing. Or get distracted by optimization so you never get it to work.

[–] thenextguy@lemmy.world 13 points 1 day ago (3 children)

Usually people say “premature optimization is the root of all evil” to say “small optimizations are not worth it”

No, that’s not what people mean. They mean measure first, then optimize. Small optimizations may or may not be worth it. You don’t know until you measure using real data.
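For instance, a minimal sketch of what measuring with real data might look like (Rust, with made-up function names and sizes; a proper benchmark harness would be more rigorous):

```rust
use std::time::Instant;

// Two hypothetical implementations of the same operation.
fn sum_naive(data: &[u64]) -> u64 {
    data.iter().copied().sum()
}

fn sum_chunked(data: &[u64]) -> u64 {
    // "Optimized" variant; whether it actually wins is exactly
    // the question the measurement is supposed to answer.
    data.chunks(8).map(|c| c.iter().copied().sum::<u64>()).sum()
}

fn main() {
    // Use data shaped like real production input, not a toy case.
    let data: Vec<u64> = (0..10_000_000).collect();

    let t = Instant::now();
    let a = sum_naive(&data);
    println!("naive:   {:?} (result {a})", t.elapsed());

    let t = Instant::now();
    let b = sum_chunked(&data);
    println!("chunked: {:?} (result {b})", t.elapsed());
}
```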

[–] boonhet@sopuli.xyz 7 points 1 day ago

Exactly. A 10% decrease in run time for a method is a small optimization most of the time, but whether or not it's premature depends on whether the optimization has other consequences. Maybe you lose functionality in some edge cases, or maybe it's actually 10x slower in some edge case. Maybe what you thought was a bit faster is actually slower in most cases. That's why you measure when you're optimizing.

Maybe you spent 3 hours profiling and made a loop 10% faster, when you could have trivially rewritten it to run log n times instead of n times...
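To make that concrete, a hypothetical Rust sketch: the first function is where a profiler-guided 10% win might go, the second is the trivial rewrite that changes the complexity class (names and data are illustrative):

```rust
// Hypothetical example: looking up an id in a large sorted Vec.

// The "10% faster loop" kind of win: still O(n) per lookup.
fn contains_linear(sorted_ids: &[u32], id: u32) -> bool {
    sorted_ids.iter().any(|&x| x == id)
}

// The "rewrite it" kind of win: O(log n) per lookup,
// because the data is already sorted.
fn contains_binary(sorted_ids: &[u32], id: u32) -> bool {
    sorted_ids.binary_search(&id).is_ok()
}

fn main() {
    let sorted_ids: Vec<u32> = (0..1_000_000).map(|i| i * 2).collect();
    assert!(contains_linear(&sorted_ids, 1_000));
    assert!(contains_binary(&sorted_ids, 1_000));
    assert!(!contains_binary(&sorted_ids, 1_001)); // odd numbers are absent
}
```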

[–] ugo@feddit.it 3 points 1 day ago* (last edited 1 day ago)

No, that’s what good programmers say (measure first, then optimize). Bad programmers use it to mean it’s perfectly fine to prematurely pessimize code because they can’t be bothered to spend 10 minutes coming up with something that isn’t O(n^2): their set is only 5 elements long, so it’s “not a big deal” and “it’s fine”, even though under real load the set will be hundreds or thousands of elements long.

It would be almost funny if it didn’t happen every single time.
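A hedged illustration of the kind of pessimization described above (hypothetical Rust; the names and sizes aren't from the thread): a nested-loop duplicate check versus a HashSet, which takes roughly the same ten minutes to write:

```rust
use std::collections::HashSet;

// O(n^2): "fine" for 5 elements, painful for thousands.
fn has_duplicates_quadratic(items: &[&str]) -> bool {
    for (i, a) in items.iter().enumerate() {
        for b in &items[i + 1..] {
            if a == b {
                return true;
            }
        }
    }
    false
}

// Roughly the same amount of code, O(n) expected time.
fn has_duplicates_hashed(items: &[&str]) -> bool {
    let mut seen = HashSet::new();
    // insert returns false if the value was already present
    items.iter().any(|item| !seen.insert(*item))
}

fn main() {
    let items = vec!["apple", "banana", "cherry", "apple"];
    assert!(has_duplicates_quadratic(&items));
    assert!(has_duplicates_hashed(&items));
    println!("duplicates found in {items:?}");
}
```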

[–] FizzyOrange@programming.dev 1 points 23 hours ago

They mean measure first, then optimize.

This is also bad advice. In fact I would bet money that nobody who says that actually always follows it.

Really there are two things that can happen:

  1. You are trying to optimise performance. In this case you obviously measure using a profiler because that's by far the easiest way to find places that are slow in a program. It's not the only way though! This only really works for micro optimisations - you can't profile your way to architectural improvements. Nicholas Nethercote's posts about speeding up the Rust compiler are a great example of this.

  2. Writing new code. Almost nobody measures code while they're writing it. At best you'll have a CI benchmark (the Rust compiler has this). But while you're actually writing the code it's mostly fine just to use your intuition. Preallocate vectors. Don't write O(N^2) code. Use HashSet etc. There are plenty of things that good programmers can be sure enough are the right way to do it that you don't need to constantly second-guess yourself.
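A minimal Rust sketch of those intuition-level defaults (the numbers and names are illustrative, not from the post):

```rust
use std::collections::HashSet;

fn main() {
    let incoming: Vec<u32> = (0..100_000).collect();

    // Preallocate when the size is known: avoids repeated regrowth.
    let mut doubled = Vec::with_capacity(incoming.len());
    for x in &incoming {
        doubled.push(x * 2);
    }

    // Reach for a HashSet for membership tests instead of scanning
    // a Vec inside a loop (which would be O(N^2) overall).
    let allowed: HashSet<u32> = incoming.iter().copied().filter(|x| x % 3 == 0).collect();
    let hits = doubled.iter().filter(|&&x| allowed.contains(&x)).count();

    println!("{hits} of {} values are allowed", doubled.len());
}
```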

[–] zerofk@lemmy.zip 23 points 1 day ago* (last edited 1 day ago) (1 children)

A nice post, and certainly worth a read. One thing I want to add is that some programmers - good and experienced programmers - often put too much stock in the output of profiling tools. These tools can give a lot of details, but lack a bird’s eye view.

As an example, I’ve seen programmers attempt to optimise memory allocations again and again (custom allocators etc.), or optimise a hashing function, when a broader view of the program showed that many of those allocations or hashes could be avoided entirely.
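As a hedged sketch of that broader view (hypothetical Rust, not taken from the blog): instead of making the per-iteration allocation faster, hoist it out of the loop so it happens once, or avoid it altogether:

```rust
// Narrow view: this allocates a fresh String on every iteration,
// and a profiler will show time spent in the allocator.
fn render_lines_naive(rows: &[(u32, u32)]) -> Vec<String> {
    rows.iter().map(|(a, b)| format!("{a},{b}")).collect()
}

// Broader view: many of those allocations can be avoided entirely
// by reusing one buffer (or by not materializing the strings at all).
fn render_report(rows: &[(u32, u32)]) -> String {
    use std::fmt::Write;
    let mut out = String::with_capacity(rows.len() * 12); // rough size hint
    for (a, b) in rows {
        // writeln! appends into the existing buffer, no new String per row
        let _ = writeln!(out, "{a},{b}");
    }
    out
}

fn main() {
    let rows = vec![(1, 2), (3, 4)];
    assert_eq!(render_lines_naive(&rows).len(), 2);
    print!("{}", render_report(&rows));
}
```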

In the context of the blog: do you really need a multiset, or would a simpler collection do? Why are you even keeping the data in that set - would a different algorithm work without it?

When you see that some internal loop is taking a lot of your program’s time, first ask yourself: why is this loop running so many times? Only after that should you start to think about how to make a single loop faster.

[–] ViatorOmnium@piefed.social 13 points 1 day ago (1 children)

You don't even need to go down to a low level. Lots of programmers forget that their applications aren't running on a piece of paper; they run in a real environment.

My team at work once had an app running on Kubernetes, and it had a memory leak, so its pod would get terminated every few hours. Since there were multiple pods, this had effectively no effect on the clients.

The app in question was otherwise "done", there were no new features needed, and we hadn't seen another bug in years.

When we transferred ownership of the app to another team, they insisted on finding and fixing the memory leak. They spent almost a month finding the leak and refactoring the app. The practical effect was nil; in fact, because of normal pod scheduling, they didn't even buy each individual pod that much extra lifetime.

[–] furrowsofar@beehaw.org 9 points 1 day ago (1 children)

I get your point, but I don't think you should justify releasing crap code because you think it has minimal impact on the customer. A memory leak is a bug and just shouldn't be there.

[–] wise_pancake@lemmy.ca 4 points 1 day ago

Fun read, I’ve never heard the full quote before.