It’s 4 pm on a Friday. I’ve approved my last PR of the week. I’ve reviewed more code today than I wrote. I got a lot done, or at least, the dashboard says I did. But I’m sitting here, and I can’t think straight. Not in the dramatic, existential-crisis way. Just… tired. A kind of tired I didn’t use to feel from this job.
I use AI every day. I’m not performing some Luddite rebellion from my standing desk. Claude is open. I reach for it constantly, and it helps. That’s not the issue.
The issue is what happened to the rest of the job while nobody was looking.
The Shift Nobody Named
Writing code is a generative act. You’re building something. You’re in flow, that state where two hours vanish, and you look up, and the thing works, and you feel good. That used to be most of the day.
Now most of the day is evaluation. Reviewing AI-generated output. Reverse-engineering logic that came from somewhere outside your own head, except the “somewhere” isn’t even a colleague you can ask questions to. It’s a model that was very confident and very fast and may or may not have understood what it was doing.
These are two different cognitive gears. Generation gives you energy. Evaluation takes it. And the human brain isn’t built for what we’re asking it to do right now.
Working memory holds about four to five items at once. That’s it. Every PR you review asks you to load up a new context, different feature, different assumptions, different patterns, and hold it all in your head long enough to decide if it’s right. Do that a few times, and you’re fine. Do it dozens of times a day and you’re not reviewing anymore. You’re reacting.
There’s a name for this: decision fatigue. Your ability to make good judgments degrades with every decision you make. By afternoon, fatigued reviewers are approving things with an “LGTM” that their morning selves would have flagged. Not because they stopped caring. Because the brain ran out of capacity to care with precision.
We’re stuck in this mode for hours at a time now, context-switching across PRs that all look plausible but need real attention to verify. Developers on high AI-adoption teams merge 98% more pull requests, and PR review time is up 91%. The volume doubled. The brain didn’t.
You can’t maintain the same standard of care at twice the volume. Something has to give. And right now, what’s giving is us.
The Dashboards Look Great
Here’s the part that makes it worse: from the outside, everything looks like it’s working. More PRs merged. More tickets closed. Velocity is up. Leadership sees acceleration and assumes the team is thriving.
What they don’t see is that the humans behind those numbers are absorbing the cognitive cost of all that speed. Fortune reported that AI gave workers roughly six extra hours a week. Nobody got to keep them. The workload expanded to fill every minute that got freed up. That’s not productivity. That’s a treadmill that got faster.
And I get it. If you’re a VP staring at a Jira board that’s never looked greener, why would you question it? The metrics are saying exactly what you want to hear. But the metrics don’t measure the developer who stopped reading PRs line by line because there are just too many. They don’t measure the senior engineer who used to catch subtle architectural issues but now has to triage so much volume that she’s pattern-matching instead of thinking. They don’t measure the mass-approval that happens at 4:47 pm on a Friday because everyone’s cooked.
Addy Osmani called this “comprehension debt”: teams shipping code faster than they can understand it. I’d call it something simpler.
We’re tired.
Still Here, Still Caring
This isn’t a post about quitting. It’s not about checking out or going back to writing everything by hand. It’s about the exhaustion that hits people who are still trying to do good work inside a system that keeps raising the tempo without asking if anyone can keep up.
The developers who still read every line of a PR. Who still flag the thing that “probably works fine” but doesn’t sit right. Who are tired because caring takes more effort than it used to.
TechCrunch ran a piece in February with a headline that stuck with me: “The first signs of burnout are coming from the people who embrace AI the most.” Read that again. The ones burning out aren’t the skeptics. They’re the ones who leaned in. The ones who did the thing everyone told them to do.
38% of developers say reviewing AI-generated code is harder than reviewing code from a colleague. Honestly? That tracks. When a teammate writes something, there’s a conversation behind it. A shared context. A Slack thread. When AI writes something, you’re on your own with a diff and a gut feeling.
It’s Friday. We did the work. We reviewed the code. We caught the things that needed catching. We’re tired. And on Monday, we’ll do it again, because that’s what giving a damn looks like.
But someone should probably notice.