
Taming Your Powerful Coding Agents

Working with AI coding agents feels like managing a talented contractor who starts fresh each day—fast but lacking context. They expose API keys, break existing features, and reinvent the wheel. Here's how adapting basic software engineering practices can guide their power in the right direction.
"Oh wow, it all works in minutes. Thanks."
"Wait… are you exposing the API keys in every request?"
"These data should be persisted, not hard-coded. It's a live stats graph, you know?"
"The issue is fixed, but now X and Y are broken… OK, now the issue is still there, and X and Y are still broken."
"Gahhhhhh... stop deleting random stuff!"

Sound familiar?

Claude, Amp, you name it. I've found that working with all the mainstream coding agents is generally the same: a talented contractor who can type at warp speed, but starts fresh each day.

"Fresh" means the context limitation of a model is very real. We wouldn't expect a new hire to still be new after three months on a project, but we should expect, and prepare for, exactly that with an AI agent. Their behavior can also change from one day to the next, due to unseen tuning behind the scenes and the enormous amount of new data being pumped into them daily. They need to learn the context before doing each assigned task. Not "relearn": for them, every day is new.

"Contractor" means they tend not to go deep into architectural thinking, since it's not their business unless asked; they prefer the most obvious solution over the optimal one, and they don't question the task spec even when it conflicts with itself, though they are surely capable of doing better. They will remove random blocks of code that happen to be in their way, without realizing those blocks are there for a purpose. From what I've seen, this is also where vibe-coding often hits its ceiling: the project gets stuck on seemingly simple issues that the coding agent can't fix without breaking other things, or simply can't fix at all.

"Typing at warp speed" means their throughput is unparalleled. However, the flip side is that they don't care about DRY, since duplicating is too easy. They'd prefer code addition to reduction. Using an existing library is never their first option; reinventing one is.

All that said, AI agents are truly capable of writing great code; we just need to guide their power in the right direction. How? The same way we onboard new developers and safeguard our codebase: by adapting the good old software development best practices.

Design for maintainability. Sound architecture matters. Although the actual architecture varies case by case, at the very least we should separate the persistence layer, application state, UI, and business logic. That modularity makes it easier for both humans and agents to reason about the system, and prevents destructive "fixes."
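As a toy sketch of that separation (the names here are hypothetical, not from any real project), notice how each layer can be swapped or tested without touching the others:

```python
# Persistence layer: the only place that knows how data is stored.
class StatsRepository:
    def __init__(self):
        self._points = []  # swap for a real database without touching callers

    def save(self, point):
        self._points.append(point)

    def load_all(self):
        return list(self._points)

# Business logic: pure computation, no storage or UI concerns.
def moving_average(points, window=3):
    if len(points) < window:
        return None
    return sum(points[-window:]) / window

# UI layer: formats state for display, never touches storage directly.
def render_stats(repo: StatsRepository) -> str:
    points = repo.load_all()
    return f"{len(points)} points, avg={moving_average(points)}"
```

An agent asked to "fix the stats graph" can now be pointed at one layer, which makes it far less likely to hard-code data into the UI or delete persistence code that happens to be in the way.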

Once the architecture is clear, insist on unit and integration test coverage, as close to 100% as possible. This is where the power of warp typing speed shines. Maintaining that level of coverage used to take real effort and discipline, but it's a breeze for AI agents.

Keep your specs alive. Ask the same question we ask when onboarding a new developer: from the documented materials alone, can they understand what the project is about, what the coding standards are, and where we currently stand? Maintain clear goals, API references, and expected behaviors, ideally somewhere always visible and current.

Configure static analysis tools early on. Enforce a consistent coding style and eliminate discouraged patterns from the very beginning, whether the code is yours or agent-generated.
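Established linters do this at scale, but as a toy illustration of what "eliminating a discouraged pattern" means mechanically, a few lines of Python's standard `ast` module can flag, say, bare `except:` clauses (a hypothetical house rule; real projects should reach for an off-the-shelf linter instead):

```python
import ast

def find_bare_excepts(source: str) -> list[int]:
    """Return line numbers of bare `except:` handlers in the given source."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        # An ExceptHandler with no exception type is a bare `except:`.
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]

code = """
try:
    risky()
except:
    pass
"""
```

Wired into CI, checks like this give the agent immediate, impersonal feedback, which is far cheaper than catching the same pattern in review.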

Review the code. In my experience, code review still catches a good number of defects, just like in the good old days. Yes, a vibe coder might frown at this, but it's such a joyful, interactive way to learn.

Manage the development cycle more deliberately. Design a cycle that fits the project so it can move forward like a well-oiled machine. Break features into clear, testable tasks. Group tasks into shippable milestones. Watch for and resist feature creep. Iterate on realistic timelines and ship incrementally. With AI agents doing the heavy lifting on the implementation side, we can pour the freed-up capacity into project management, which rarely gets enough of it.

Last but not least, encourage alternatives. Make it clear that proposing alternatives is welcome. It might sound controversial, but no one can do more harm than a diligent team member who always does exactly what you say, at all costs. Unfortunately, in my experience an AI agent still does whatever I say maybe 99% of the time, even with the encouragement, but that remaining 1% is where the breakthrough resides.

AI coding agents are powerful, but not magical. Until the day everything is generated on the fly without human-readable code, software engineering best practices will remain relevant.