Working Effectively With AI Coding Agents

Provide as much context as possible

Describe what you want in clear, concrete detail. Include examples. Link to relevant files, standards, and conventions the agent should follow.

Before you let the agent write code, ask it to restate your requirements. Read the response. If it captured your intent, proceed. If not, clarify what it missed.
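
For example, a context-rich task brief might look like the sketch below. The file paths, endpoint, and limits are illustrative placeholders, not a required format.

```python
# A hypothetical, context-rich task brief for a coding agent.
# All paths, endpoints, and numbers below are illustrative.
TASK_BRIEF = """
Goal: Add rate limiting to the public REST API.

Context:
- Follow the conventions in docs/CODING_STANDARDS.md.
- API handlers live in src/api/handlers.py.
- We already run Redis (see src/infra/redis.py) for caching.

Example: GET /v1/search should return HTTP 429 after 100 requests
per minute from the same API key.

Acceptance criteria:
- Limits are configurable per endpoint.
- Existing tests in tests/test_api.py still pass.

Before writing any code, restate these requirements in your own
words and list any open questions.
"""
```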

Tell it what to do—not what not to do

“Don’t do X” rarely produces good results. Positive instructions are far more reliable. Specify the desired outcome, constraints, and acceptance criteria.
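
A sketch of the difference, with hypothetical prompts:

```python
# Negative phrasing: the agent must guess what you actually want.
AVOID = "Don't use global state and don't make it slow."

# Positive phrasing: outcome, constraints, acceptance criteria.
# The function name and targets below are made-up examples.
PREFER = """
Implement parse_log_line(line: str) -> LogRecord | None.

Constraints:
- Pure function: no global state, no I/O.
- Should handle at least 10,000 lines per second on one core.

Acceptance criteria:
- Returns None for malformed lines instead of raising.
- Passes the cases in tests/test_parser.py.
"""
```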

Get the architecture right

Tuning a function is easy; reworking an architecture is not. Invest in the design up front and verify that the agent is actually building to it. How it writes small helpers matters less than whether the system boundaries and data flows are correct.
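
One practical way to do that is to hand the agent the interfaces it must implement against, so drift from the intended design shows up immediately in review. A minimal sketch using Python's typing.Protocol; the domain types are made up:

```python
# Hypothetical boundary definitions handed to the agent up front.
# Helpers may vary freely; these signatures may not.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Order:
    order_id: str
    amount_cents: int


class OrderRepository(Protocol):
    def get(self, order_id: str) -> Order: ...
    def save(self, order: Order) -> None: ...


class PaymentGateway(Protocol):
    def charge(self, order: Order) -> bool: ...
```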

Ask questions to gather context

If you’re not yet sure what you want, use the agent as a thinking partner. Discuss the project, the stack, the docs, even Reddit threads. This both sharpens your own understanding and gives the agent richer context.

Ground the model in facts

Training cutoffs and hallucinations are real. LLMs aren’t omniscient—and they do make mistakes. Counteract this by grounding the agent with up-to-date, verifiable facts: documentation, specs, tickets, code comments, scientific papers, web search results, etc.
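
One simple pattern is to paste the authoritative source straight into the prompt instead of letting the model recall it from training data. A sketch with placeholder content:

```python
# Hypothetical helper: anchor the agent to a verifiable source.
def grounded_prompt(task: str, source_url: str, doc_excerpt: str) -> str:
    return f"""{task}

Use ONLY the documentation below as the source of truth.
If it doesn't answer something, say so instead of guessing.

Source: {source_url}
---
{doc_excerpt}
---"""


prompt = grounded_prompt(
    task="Migrate our client code to the v2 pagination API.",
    source_url="https://example.com/docs/pagination",  # placeholder
    doc_excerpt="(paste the relevant section of the official docs here)",
)
```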

Keep task scope tight

As prompts and context grow, signal can get diluted. Break work into small, well-bounded tasks so the context contains only what’s relevant. Smaller tasks make it easier to review, test, and course-correct.
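
For illustration, here is a hypothetical breakdown of "add password reset" into bounded prompts, each run in its own context and reviewed before moving on to the next:

```python
# Made-up decomposition: one well-bounded prompt per task,
# instead of a single sprawling "build password reset" request.
TASKS = [
    "Add a password_reset_tokens table and migration. Schema only.",
    "Implement create_reset_token(user_id) with a 1-hour expiry, plus unit tests.",
    "Add POST /password-reset: look up the user and email the token link.",
    "Add POST /password-reset/confirm: validate the token, update the password hash.",
]
```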

Refresh the context regularly

Think “one task, one context.” If the current thread gets noisy or off track, shelve the partial changes and start a fresh context with a narrower scope and cleaner prompt.

Sync state with git diff

If you like the current code but the conversation has drifted, open a new context and have the agent run git diff (or paste the diff) to catch up without reintroducing the old noise.
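
A minimal sketch of that workflow; git diff itself is standard, the prompt wording is just an example:

```python
# Capture uncommitted changes so a fresh session can catch up on
# the code's state without the old conversation's noise.
import subprocess

diff = subprocess.run(
    ["git", "diff"],  # add "--staged" to include staged changes
    capture_output=True,
    text=True,
    check=True,
).stdout

# Hypothetical opener for the new, clean context:
catch_up_prompt = (
    "Here is the current working-tree diff. Summarize what changed, "
    "then wait for my next instruction before editing anything.\n\n" + diff
)
```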

Let LLMs review each other’s code

Create a “review mode” prompt describing what you care about: correctness, readability, performance, security, edge cases, etc. Run reviews with a second model—or the same model in review mode—and ask targeted questions like:

“What risks or failure modes do you see?”

“Which assumptions should be documented?”

“What important details might be missing?”
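
These questions are easy to script. Below is a sketch assuming the OpenAI Python SDK, though any chat-style API works the same way; the model name and prompt wording are placeholders:

```python
# Hypothetical "review mode" pass with a second model.
from openai import OpenAI

REVIEW_PROMPT = """You are a strict code reviewer. Focus on
correctness, readability, performance, security, and edge cases.
For the code below, answer:
1. What risks or failure modes do you see?
2. Which assumptions should be documented?
3. What important details might be missing?"""


def review(code: str, model: str = "gpt-4o") -> str:  # model is a placeholder
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": code},
        ],
    )
    return response.choices[0].message.content
```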

Use TDD

Once the architecture is solid, start with tests. Have the agent draft end-to-end or unit tests, review them yourself, then let the agent implement the feature and iterate against the tests. This keeps development anchored to real behavior.
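
As a sketch, the test-first step might look like this; pytest is assumed, and slugify is a hypothetical function the agent has not written yet:

```python
# Draft tests first (pytest), review them, then let the agent
# implement slugify() and iterate until they pass.
import pytest

from myapp.text import slugify  # module path is a placeholder


def test_lowercases_and_joins_with_hyphens():
    assert slugify("Hello World") == "hello-world"


def test_strips_punctuation():
    assert slugify("Rock & Roll!") == "rock-roll"


def test_rejects_empty_input():
    with pytest.raises(ValueError):
        slugify("")
```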

Summary

AI coding agents aren’t replacing software engineers anytime soon—and that’s a good thing. They still need thoughtful direction, review, and course correction to produce great work.

What they do offer is speed on well-defined tasks. Offload the mechanical, boilerplate, and repetitive work so you can spend more time on design, problem-solving, and asking better questions. Used this way, LLMs and agents make development faster and often higher quality—because you can iterate more, learn faster, and keep your focus on the hard parts.

Adnan Mujkanovic

Full Stack Overflow Developer / YAML Indentation Specialist / Yak Shaving Expert
Gotham City