Technical Debt in the Age of AI-Assisted Coding
AI coding assistants have made us faster. That part's undeniable. But faster at what, exactly? I've watched teams ship features in half the time while quietly doubling their maintenance burden. The trade isn't always worth it, and I'm not sure we're being honest enough about that as an industry.
This isn't an anti-AI position. I use coding assistants every day. But I've started being deliberate about when and how, because I've seen what happens when you're not.
The Speed-Understanding Tradeoff
When I write code by hand, I understand every line. I know why each function exists, what edge cases it handles, what assumptions are baked in. The code takes longer to produce but it's cheap to maintain because it lives in my head as much as in the file.
When I accept AI-generated code, I often understand the shape of it but not the details. And the costs are sneaky. When AI-generated code breaks — and it does break — I have to reverse-engineer what it was trying to do before I can even start debugging. That adds real latency to incident response, and at 2 AM with an outage, you feel every minute of it. Changing the code is friction too: if I didn't write it, I have to understand it first, and AI-generated code often does things slightly differently than I would have, which increases the cognitive load even when the code is technically fine.
The worst trap is test coverage. I catch myself writing fewer tests for code I accepted rather than authored, because it "looks right." That's a mistake I keep almost making. I caught myself doing it yesterday, actually — accepted a function, glanced at it, thought "yeah that handles it," and almost moved on. Almost.
Then there's architectural drift. AI assistants optimize for local correctness. They don't see the broader system. They don't know that you're using a particular abstraction pattern throughout the codebase or that you've made specific decisions about how errors should propagate. Over time, enough accepted suggestions accumulate into an architecture that nobody actually designed — it just grew.
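A small, made-up illustration of what that drift looks like in practice. Suppose the codebase convention is to return errors as values, and an accepted suggestion raises instead; every name here is hypothetical, and the in-memory dict stands in for a real data store:

```python
# Hypothetical illustration of architectural drift.
USERS = {"u1": {"name": "Ada"}}  # in-memory stand-in for the real database

# The codebase convention (say): fallible lookups return (value, error).
def load_user(user_id: str):
    record = USERS.get(user_id)
    if record is None:
        return None, f"no such user: {user_id}"
    return record, None

# A locally correct accepted suggestion that raises instead. Nothing is wrong
# with it in isolation; it just propagates errors differently from everything
# around it, and a few dozen of these add up to an architecture nobody chose.
def load_account(account_id: str):
    record = USERS.get(account_id)
    if record is None:
        raise KeyError(account_id)
    return record
```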
Vibe Coding
I've seen a pattern emerge in teams that go all-in on AI assistance. I've started calling it vibe coding, and I mean that mostly as a warning.
Here's how it works: the developer describes what they want in natural language, accepts the generated code, runs it, sees if it works, iterates until it seems right. The feedback loop is fast. The output often functions. But the developer never actually reads the code. They're not writing software — they're curating it, which sounds fine until something goes wrong.
Vibe coding is genuinely fine for throwaway scripts, quick prototypes, personal tools you'll use once. It's dangerous for production systems. The developer has outsourced not just the typing but the thinking. And when the system breaks — not if — they don't have the mental model to diagnose it. They go back to the AI and ask it to fix what the AI built, which sometimes works and sometimes makes things worse in new and creative ways.
The tell is simple: ask a vibe coder to explain how their code handles a specific edge case. They often can't, because they never thought about it. The AI might have handled it. Or it might not have. Nobody knows until it fails.
Where AI Actually Helps
I use AI assistance liberally for boilerplate and scaffolding — generating the skeleton of a file, the structure of a class, the shape of a module. I fill in the actual logic manually. Boilerplate is pure time savings with low risk.
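To make "scaffolding" concrete, here's a sketch of the kind of skeleton I'm happy to accept. The class, method, and field names are made up; the point is that the structure comes from the assistant and the logic stays mine:

```python
# Hypothetical example: the kind of skeleton I'll let an assistant generate.
# Structure, types, and docstrings are cheap; the logic is not.
from dataclasses import dataclass


@dataclass
class ExportJob:
    """One batch export request."""
    job_id: str
    destination: str


class ExportService:
    """Coordinates batch exports (scaffolding only)."""

    def __init__(self, client):
        self._client = client

    def run(self, job: ExportJob) -> None:
        """Execute a single export job."""
        # Everything above this line is generated scaffolding.
        # The actual logic (retries, partial failures, auditing)
        # is the part I write by hand.
        raise NotImplementedError
```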
Test generation is the other place where AI earns its keep. Given a function signature and some context, it generates reasonable test cases faster than I can, and I review them to make sure coverage is real rather than just high. Documentation too — docstrings, README sections generated from code — I'd rather spend that time on something that requires judgment.
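For a sense of what "coverage is real" means in practice, here's a deliberately small, made-up example. The first two tests are the kind an assistant produces; the last two are the ones I add after actually reading the code:

```python
# Hypothetical function and tests, purely to illustrate the review step.
import pytest


def parse_price(text: str) -> int:
    """Parse a price like '$1,299.50' into cents."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    dollars, _, cents = cleaned.partition(".")
    return int(dollars) * 100 + int(cents.ljust(2, "0")[:2])


def test_parse_price_basic():          # the kind of test an assistant writes
    assert parse_price("$12.50") == 1250


def test_parse_price_thousands():      # still generated, still useful
    assert parse_price("$1,299.50") == 129950


def test_parse_price_no_cents():       # the case I add after reading the code:
    assert parse_price("$12") == 1200  # what happens when there's no '.'?


def test_parse_price_garbage():        # and the failure mode nobody generated
    with pytest.raises(ValueError):
        parse_price("twelve dollars")
```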
Regex and one-liners are another obvious win. The syntax is tricky, the logic is simple, and honestly I don't want to spend 20 minutes on Stack Overflow trying to remember how lookaheads work.
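Something like this, say: a lookahead-heavy check where the syntax is fiddly but the intent is easy to verify by eye. The pattern and test strings are purely illustrative:

```python
# Purely illustrative: the kind of one-liner I'll happily take from an assistant.
import re

# At least 8 characters, with at least one digit and one uppercase letter.
PASSWORD_RE = re.compile(r"^(?=.*\d)(?=.*[A-Z]).{8,}$")

assert PASSWORD_RE.match("Hunter2hunter")
assert not PASSWORD_RE.match("alllowercase1")   # no uppercase
assert not PASSWORD_RE.match("Short1")          # too short
```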
Where I'm cautious: business logic. The core of what the system does. I need to understand this, and I need to be the one who thought through the edge cases. Security-sensitive code too — authentication, authorization, cryptography — I review every line and I don't accept suggestions without understanding them. Performance-critical paths need profiling regardless of where the code came from. Error handling is where AI assistants are most optimistic; they generate the happy path beautifully and handle failure modes as an afterthought, which is exactly the wrong priority.
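A made-up but representative sketch of that optimism, contrasting the version an assistant tends to hand me with the version the failure modes actually require. The URL, config endpoint, and defaults are all hypothetical:

```python
# Hypothetical sketch of the happy-path pattern vs. explicit failure handling.
import json
import logging
from urllib.request import urlopen
from urllib.error import URLError

log = logging.getLogger(__name__)


def fetch_config_optimistic(url: str) -> dict:
    # Happy path only: assumes the network is up, the response is JSON,
    # and the JSON is a dict. Any of those failing becomes someone's 2 AM.
    return json.loads(urlopen(url).read())


def fetch_config(url: str, default: dict | None = None) -> dict:
    # The version I actually want in production: explicit failure behavior.
    try:
        with urlopen(url, timeout=5) as resp:
            data = json.loads(resp.read())
    except (URLError, TimeoutError, json.JSONDecodeError) as exc:
        log.warning("config fetch failed (%s), using default", exc)
        return default or {}
    if not isinstance(data, dict):
        log.warning("config endpoint returned %s, using default", type(data).__name__)
        return default or {}
    return data
```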
The Maintenance Math
Code is written once and read many times. If AI helps me write something 3x faster but makes it 2x harder to read, I haven't saved time — I've borrowed it. The interest payments come due during every code review, every bug fix, every time a new engineer tries to understand how the system works.
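A rough back-of-envelope, with numbers I'm inventing purely to make the shape of the argument visible:

```python
# Made-up numbers, just to make the claim concrete.
# If code is read ~10 times for every time it's written, a 3x faster write
# that makes each read 2x slower is a net loss over the code's lifetime.
write_hours = 4.0
reads_over_lifetime = 10
hours_per_read = 0.5

by_hand   = write_hours     + reads_over_lifetime * hours_per_read       # 4.0 + 5.0  = 9.0
ai_assist = write_hours / 3 + reads_over_lifetime * hours_per_read * 2   # 1.3 + 10.0 = 11.3

print(f"hand-written: {by_hand:.1f}h, AI-assisted: {ai_assist:.1f}h")
```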
I've started rewriting AI-generated code to match the style and patterns of the existing codebase even when the generated code is technically correct. Consistency reduces cognitive load for the whole team. A technically correct function that reads like it came from a different codebase is still a maintenance burden.
What I Actually Do
Before I touch the keyboard, I sketch the approach. What are the functions? What data flows where? What are the error cases? AI can't do this part for me, and I've learned not to let it try. Then I use AI to generate scaffolding — the file structures, the boilerplate, the function signatures. Then I write the core logic myself. Then I use AI to generate tests, which I review. Then I read every line before committing, as if a junior engineer wrote it. Which, in a real sense, is what happened.
The junior engineer analogy is one I find useful. AI-generated code has the same profile as junior code: it often works, it's usually missing some judgment about edge cases, it doesn't necessarily fit the existing patterns, and it needs review before it goes anywhere.
The Team Problem
AI-assisted coding changes team dynamics in ways we're still figuring out. Senior engineers benefit most — they have the judgment to know what to accept and what to reject, and they use AI as a genuine force multiplier. Junior engineers face a harder challenge. If they're accepting suggestions without understanding them, they're not developing the judgment that makes senior engineers valuable. The fast feedback loop of vibe coding is actively bad for learning to think through problems.
I've started requiring juniors to explain any AI-generated code before it gets merged. Not to be punitive — to make sure they actually read it. Code reviews also take longer now. You can't assume the author understands what they submitted, which means the reviewer has to ask different questions.
AI-assisted coding isn't universally good or bad. It's a tool with real tradeoffs, and the teams pretending there are no tradeoffs are accumulating debt invisibly. Speed without understanding isn't productivity. It's debt with a variable interest rate, and eventually the rate goes up.