I once spent two days implementing a caching layer for an API endpoint that got twelve requests per hour. Twelve. The cache hit rate was 98%, and it saved approximately 0.3 seconds per day of total compute time. I could have shipped a feature instead.
The premature optimization trap
New developers worry about the wrong things. "Should I use a HashMap or a TreeMap?" "Is this O(n) or O(n log n)?" "Should I add Redis for caching?" These questions matter at scale. They don't matter when you have fifty users.
The fastest code is the code that ships. An imperfect solution in production beats a perfect solution in development. Every time.
When optimization actually matters
Optimization matters when you can measure a problem. Not when you imagine one. Not when you read a blog post about how Company X handles millions of requests. When your actual monitoring shows an actual bottleneck.
Here's my rule: build the simplest version that works. Deploy it. Monitor it. When something is actually slow — when real users are waiting, when your server CPU is at 80%, when your database is struggling — then optimize. And optimize the measured bottleneck, not the thing you assume is slow.
The things that are always worth optimizing
Some things should be fast from the start, not as premature optimization but because they're architectural decisions that are hard to change later:
- Database indexes. Add them for any column you filter or sort by. This is nearly free performance.
- Image sizes. Serve appropriately sized images. A 4MB hero image on a mobile site is never acceptable.
- N+1 queries. If you're making 100 database calls in a loop, fix it now. Use joins or batch queries (there's a sketch after this list).
- Bundle size. Don't ship 2MB of JavaScript to render a landing page.
These aren't optimizations — they're avoiding obviously bad patterns. There's a difference between "make it faster" and "don't make it needlessly slow."
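For concreteness, here's roughly what the N+1 fix looks like, sketched with Python's built-in sqlite3 module and an in-memory database. The authors/posts schema is invented for illustration, and the index on author_id covers the first point above as well.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    -- Index the column we filter by: cheap to add now, painful to retrofit
    -- once the table is large and the query patterns have ossified.
    CREATE INDEX idx_posts_author_id ON posts (author_id);
""")

# The N+1 pattern: one query for the list, then one more query per row.
def titles_by_author_n_plus_one():
    result = {}
    for author_id, name in conn.execute("SELECT id, name FROM authors"):
        titles = [row[0] for row in conn.execute(
            "SELECT title FROM posts WHERE author_id = ?", (author_id,))]
        result[name] = titles
    return result

# The fix: one JOIN, one round trip, same data.
def titles_by_author_joined():
    result = {}
    rows = conn.execute("""
        SELECT a.name, p.title
        FROM authors a
        LEFT JOIN posts p ON p.author_id = a.id
    """)
    for name, title in rows:
        result.setdefault(name, [])
        if title is not None:
            result[name].append(title)
    return result
```

The joined version makes one round trip no matter how many authors there are; the loop makes one per author, which is exactly the pattern that looks fine in development and falls over in production.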
Profile, don't guess
When you do need to optimize, profile first. Use your browser's DevTools. Use your database's query analyzer. Use htop on your server. Find the actual bottleneck.
Developers are notoriously bad at guessing where performance problems are. The function you think is slow usually isn't. The one you never considered usually is. Data beats intuition.
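When the slow part is your own application code rather than the database or the browser, the same principle applies: measure it. Here's a minimal sketch using Python's standard-library cProfile; the request handler and its helpers are placeholders for whatever your endpoint actually does.

```python
import cProfile
import pstats

def handle_request():
    # Stand-ins for the real work your endpoint does; replace with your own calls.
    data = load_from_database()
    return render_response(data)

def load_from_database():
    return [n * n for n in range(50_000)]

def render_response(data):
    return ",".join(str(n) for n in data)

# Profile the whole request path, then print the functions that actually
# consumed the time, sorted by cumulative cost.
profiler = cProfile.Profile()
profiler.enable()
handle_request()
profiler.disable()

pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```

The report lists functions by cumulative time, which is usually enough to point you at the one you never considered.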
Ship the thing
Your users care about features. They care about reliability. They care about the product doing what it's supposed to do. They don't care that your API responds in 12 milliseconds instead of 15.
Ship first. Measure. Optimize what matters. Repeat.