Greentext
This is a place to share greentexts and witness the confounding life of Anon. If you're new to the Greentext community, think of it as a sort of zoo with Anon as the main attraction.
Be warned:
- Anon is often crazy.
- Anon is often depressed.
- Anon frequently shares thoughts that are immature, offensive, or incomprehensible.
If you find yourself getting angry (or god forbid, agreeing) with something Anon has said, you might be doing it wrong.
In the 90s, a lot of programmers spent their time carefully optimizing everything, on the theory that every CPU cycle counted. In the decades since, it's gotten easier than ever to write software, but the craft of writing great software has stalled compared to the ease of writing mediocre software. "Why shouldn't we block on a call to a remote service? Computers are so fast these days."
The flip side of that is that entire classes of bugs have been removed from modern software.
The difference is primarily the languages. A GUI in the 90s was most likely programmed in C/C++. Increasingly, it's now done in languages with complex runtime environments, like .NET, or as what is effectively a browser tab written in browser languages.
Those C/C++ programs almost always had buffer overflows, which were taken off the OWASP Top 10 back in 2007, meaning the industry no longer considers them a primary threat. This should be considered a huge success. Related issues, like dynamic memory mismanagement, are also almost gone.
There are also ways to take care of buffer overflows without a complex managed runtime, which is what Go and Rust do: the compiler emits assembly that bounds-checks every array access while being only a smidge slower than C/C++. With SSDs all but removing the excuse that disk I/O is the limiting factor, this is increasingly the way to go.
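As a minimal sketch of what that looks like from the programmer's side in Rust (purely illustrative, the buffer and index here are made up), an out-of-range access either returns `None` or panics, instead of silently reading past the end of the buffer the way an unchecked C access can:

```rust
fn main() {
    let buf = vec![10u8, 20, 30, 40];

    // Index supplied at runtime, so the compiler can't prove it's in range
    // and a bounds check is emitted for the accesses below.
    let i: usize = std::env::args()
        .nth(1)
        .and_then(|s| s.parse().ok())
        .unwrap_or(9);

    // Explicitly checked access: returns None instead of reading past the end.
    match buf.get(i) {
        Some(v) => println!("buf[{i}] = {v}"),
        None => println!("index {i} is out of bounds (len {})", buf.len()),
    }

    // Plain indexing is bounds-checked too: with i = 9 this panics with
    // "index out of bounds" instead of reading adjacent memory the way an
    // unchecked C array access would.
    println!("{}", buf[i]);
}
```

In practice the optimizer can often prove an index is in range and drop the check entirely, which is a big part of why the overhead stays small.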
The industry had good reasons to use complex runtimes, though some of the reasons are now changing.
Oh, and look at what old games did to optimize things, too. The Minus World glitch in Super Mario Bros, rooted in reading uninitialized values from a data structure that needed to stay a consistent shape, would be unlikely to happen if the game were written in Python, and almost certainly wouldn't happen in Rust. Optimizations tend to create bugs all their own.
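On the Rust point specifically, that's a compile-time guarantee rather than a runtime check. A toy sketch (nothing to do with the actual SMB code, the variable name is just for illustration):

```rust
fn main() {
    // Declared but not yet initialized; safe Rust has no way to read it yet.
    let world_number: u8;

    // Uncommenting the next line fails to compile with error E0381,
    // because `world_number` isn't initialized on this path:
    // println!("warping to world {}", world_number);

    // Only once every path has assigned a value can it be read.
    world_number = 1;
    println!("warping to world {world_number}");
}
```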
While there's an overhead to safer runtime environments, I wouldn't put much blame there. I feel like "back in the day", when something was inefficient, you noticed it sooner because it had a much larger impact: windows would stop updating, the mouse would get laggy, music would start stuttering. These days you can take up 99% of the CPU time and the system will still chug along without any of those issues showing.
I remember early Twitter had a "famous" performance issue where the sticky header bar would slow systems down, because the code re-scanned the entire page DOM on every scroll event to find and adjust the header, rather than just caching a reference to it. Meanwhile, yesterday I read an article about the evolution of the preferences UI in Apple OSes that showed them off by running each individual version of said OS in a VM embedded within the page. It wasn't snappy, but it didn't have the "entire system slows down and stops responding" issues you saw a decade or so ago.
Basically, devs aren't being punished (by tooling) for being inefficient, so they don't notice when they are, and newer devs never realise they need to.