this post was submitted on 11 Jun 2025
Programming
Ah, this ancient nonsense. TypeScript and JavaScript get different results!
It's all based on https://en.wikipedia.org/wiki/The_Computer_Language_Benchmarks_Game: microbenchmarks which are heavily gamed. Though in fairness the overall results are fairly reasonable.
Still, I don't think this "energy efficiency" result is worth talking about. Faster languages are more energy efficient. Who knew?
Edit: this also has some hilarious visualisation WTFs - using dendrograms for performance figures (figures 4-6)! Why on earth do figures 7-12 include line graphs?
Which benchmarks aren't?
Private or obscure ones I guess.
Real-world (macro) benchmarks are at least harder to game, e.g. how long does it take to launch Chrome and open Gmail? That's actually a useful task, so if you speed it up, great!
Also these benchmarks are particularly easy to game because it's the actual benchmark itself that gets gamed (i.e. the code for each language), not the thing you are trying to measure with the benchmark (the compilers). Usually the benchmark is fixed and it's the targets that contort themselves to it, which is at least a little harder.
For example some of the benchmarks for language X literally just call into C libraries to do the work.
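A hypothetical sketch of what that kind of gaming looks like (the task and both function names are invented for illustration, not taken from the actual benchmark suite): two "JavaScript" entries for the same substring-counting microbenchmark, where the gamed one pushes essentially all of the work into the engine's native (C++) regex machinery, so the result says little about JavaScript itself.

```javascript
// Naive entry: the counting loop actually runs as JavaScript.
function countNaive(haystack, needle) {
  let count = 0;
  for (let i = 0; (i = haystack.indexOf(needle, i)) !== -1; i += needle.length) {
    count++;
  }
  return count;
}

// "Gamed" entry: a global regex match runs almost entirely inside the
// engine's native regex implementation, so very little JS is measured.
// (Assumes the needle contains no regex metacharacters.)
function countGamed(haystack, needle) {
  const hits = haystack.match(new RegExp(needle, 'g'));
  return hits ? hits.length : 0;
}
```

Both count non-overlapping occurrences and return the same answer; only the naive one exercises the language being benchmarked.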
Private and obscure benchmarks are very often gamed by the benchmarkers themselves. It's very difficult to design a fair benchmark (e.g. Chrome can be optimized to load Gmail for obvious reasons; maybe we should choose a fairer website when comparing browsers? But which? How can we know that neither browser has optimizations specific to page X?). Obscure benchmarks are useless because we don't know if they measure the same thing. Private benchmarks are definitely fun but only useful to the author.
If a benchmark is well established you can be sure everyone is trying to game it.
It does make sense, if you skim through the research paper (page 11). They aren't using `performance.now()` or whatever the state of the art in JS currently is. Their measurements include invocation of the interpreter, and parsing TS involves bigger overhead than parsing JS. I assume (didn't read the whole paper, honestly DGAF) they don't do that with the compiled languages, because there's no way the gap between compiling C and Rust or C++ is that small.
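For contrast, this is what in-process timing looks like in Node (a sketch; `fib` is just a stand-in workload). Timing the whole process from the shell instead, e.g. `time node bench.js`, additionally counts engine startup and source parsing, which is the overhead the comment above is pointing at.

```javascript
// A toy workload to time. performance.now() is a global in Node and
// browsers; it measures only the code between the two calls, excluding
// interpreter startup and parse time.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

const t0 = performance.now();
const result = fib(25);
const elapsedMs = performance.now() - t0;

console.log(`fib(25) = ${result}, in-process time: ${elapsedMs.toFixed(2)} ms`);
```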
But TS is compiled to JS, so it's the same interpreter in both cases. If they're including the time for `tsc` in their benchmark then that's an even bigger WTF.
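To illustrate that point: `tsc` essentially just erases type annotations, so the emitted JavaScript is what the engine actually runs. A minimal hand-written sketch of typical output (not taken from the paper's benchmarks):

```javascript
// TypeScript source:
//   function add(a: number, b: number): number {
//     return a + b;
//   }
//
// What tsc emits is the same function with the annotations stripped.
// Runtime behaviour, and the engine executing it, are identical to
// hand-written JS; only the one-off compile step differs.
function add(a, b) {
  return a + b;
}

console.log(add(2, 3)); // 5
```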