As a veteran software engineer, I was genuinely excited by AI-assisted coding. Autocomplete felt like magic at first - type a few words, hit tab, and the AI completed the function or paragraph with surprising accuracy. Over time the underlying models matured, and what many call vibe coding became real. With careful use, my throughput increased drastically. What once took two weeks I could now finish in an afternoon: well architected, fully tested, and neatly documented.
Yet every day, my LinkedIn feed is full of complaints from developers who feel burned. They describe their AI buddy forgetting promises, deleting chunks of code for no reason, fixing one bug while creating ten others, or looping endlessly and racking up costs without progress. Many walked away, vowing to return to hand-coding. That is a pity. As technologists, developers already possess the skills to drive AI better than most. Walking away now is like holding the key to a treasure vault but leaving empty-handed. We are so close. All that’s needed is to apply the same intuition and engineering discipline we already use for systems design to tame the beast.
Through trial and error, I have learned how to vibe code effectively. This post shares those lessons: the pitfalls to avoid and the practical habits that make AI your strongest coding ally.
The key is remembering that AI excels at bounded, well-defined tasks - writing a function, fixing a small bug, or generating tests. Humans should make architectural decisions, steer the process, trim context, and orchestrate the project end to end. At least for now, vibe coding works best at the task level - many people get this the other way around.
Effective Prompting
1. Structure complex prompts as XML
This is especially relevant when vibe coding with Claude models such as Sonnet. Because Claude was trained with XML-tagged prompts (ChatGPT leans towards Markdown), it handles XML structure much better. The Claude team officially recommends this approach, and AWS reinforces it in their Bedrock prompting guide.
Using XML tags like `<instruction>`, `<context>` and `<verification>`, you can clearly communicate your intent and even influence the problem-solving pathway. This significantly improves accuracy and first-pass success. It has been a game-changer for me.
Example elements you can use:
- `<instruction>` - the single task to perform
- `<context>` - files, constraints, domain facts
- `<verification>` - tests, acceptance checks, failure modes to avoid
Sample prompt
```xml
<instruction>
Create a React component for user profile display.
</instruction>
<context>
- User data includes name, email, avatar.
- The component must be responsive.
- Use Tailwind CSS for styling.
- Place the component under src/components/profile/UserProfile.tsx.
</context>
<verification>
- Component renders without runtime errors.
- All user data is displayed correctly with sensible fallbacks.
- Layout adapts to mobile and desktop.
- Add unit tests with React Testing Library.
</verification>
```
2. Keep refining the original prompt
Instead of chasing the AI with a trail of clarifications, refine the master prompt directly. Here’s a sample workflow:
- Ask the question with a tentative prompt.
- Observe the response and note misconceptions or mistakes.
- Rather than sending multiple corrective prompts, go back to the original prompt (edit with the pencil icon) and refine it by adding a do-not-do list.
- Build the do-not-do list by asking the AI at the end of each failed attempt: “Summarise what you tried, and why you think it didn’t work.” Feed this back into the master prompt.
After a few iterations, the master prompt evolves into a clear logical flow, stripped of incorrect paths and far more likely to succeed. This approach keeps your history clean, your intent sharp, and makes it easier to backtrack or refine changes later.
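As an illustration, here is a sketch of how a master prompt for a hypothetical pagination task might look after two failed attempts. The task, file path, and tag names (including `<do_not>`) are illustrative, not prescribed:
```xml
<instruction>
Add pagination to the user list.
</instruction>
<context>
- The list lives in src/components/users/UserList.tsx.
- Page size is 20.
</context>
<do_not>
- Do not fetch all users and paginate client-side (attempt 1: froze on large datasets).
- Do not introduce a new data-fetching library; reuse the existing API client (attempt 2).
</do_not>
```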
3. When the AI seems to ignore instructions
Sometimes the AI appears to ignore a clear instruction. For example, I once asked it to use an in-house utility function, but it went ahead and pulled in an external library anyway. Rather than simply telling it to correct the mistake, I found a better approach:
- Ask why it happened. This helps reveal how the AI interpreted and prioritised my instructions.
- Have it suggest improvements to the original prompt, phrased in a style it is more likely to follow in future.
- Revise the original prompt accordingly, and apply the same refinement to all future prompts.
This has been a useful exercise. I often discover a gap between how I phrase a request and how the AI interprets it. Many of its suggestions highlight the need for stronger and more directive language - for example, explicitly marking a section as `### CRITICAL ###`.
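For instance, revisiting the in-house utility example above, a directive block might look like this (the file path is hypothetical):
```
### CRITICAL ###
Use the in-house utility in src/utils/formatDate.ts for all date formatting.
Do NOT pull in external date libraries for this task.
```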
4. Guard that precious context window
Many people treat the AI chat window like a casual chit-chat box, reusing the same thread for one task after another. That is a mistake. The context window is one of your most valuable resources, and you should guard it jealously. Provide only the context that is essential. Extend it deliberately, with clean intent and precise language. Most importantly, watch like a hawk for the first sign of overflow - when the AI becomes confused, forgetful, or inconsistent.
A simple trick for early detection is to add a sentinel at the end of your master prompt, for example:
'Prefix all responses with 🐟 so I know the rules are being applied.'
Then, monitor each response. The moment the 🐟 disappears, you know the AI is struggling to retain the full context.
Equally important is the habit of starting new threads proactively. Whenever you close out a major problem or reach a milestone, begin a new thread. This narrows the scope, eliminates drift, clears forgotten assumptions, and removes stale inferences - keeping both you and the AI sharply aligned on the task at hand.
Quality Control and Testing
5. Escape the fix-one-break-many hell
Use automatic regression testing. Think of it as having a tireless tester with lightning-fast reflexes, clicking every button and checking every screen to ensure nothing breaks. While 100% coverage is unrealistic, you can maintain a concise but meaningful regression suite that exercises the critical paths of your system in minutes. This gives development a solid foundation to move forward.
When to run tests:
- After every new feature is built
- After every bug fix or refactor
- On every commit to your CI/CD pipeline
For end-to-end coverage, Cypress is a strong choice. It drives the frontend while also exercising the backend and database, ensuring realistic and cross-layer testing.
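As a minimal sketch, a Cypress spec for the profile component from the earlier sample prompt might look like this; the route and `data-testid` selectors are hypothetical, so adapt them to your app:
```ts
// cypress/e2e/user-profile.cy.ts
// A small regression spec: the route and selectors are hypothetical examples.
describe('User profile', () => {
  it('renders profile data without errors', () => {
    cy.visit('/profile'); // drives the real FE, which in turn exercises the BE
    cy.get('[data-testid="user-name"]').should('be.visible');
    cy.get('[data-testid="user-email"]').should('contain', '@');
  });

  it('adapts to a mobile viewport', () => {
    cy.viewport('iphone-6'); // built-in mobile preset
    cy.visit('/profile');
    cy.get('[data-testid="user-avatar"]').should('be.visible');
  });
});
```
Running the suite with `cypress run` executes it headlessly by default, which matters for the next point.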
Alternatively, have the AI run selected tests and iteratively apply fixes until all pass. This works especially well when the AI is “caught in the spotlight” - immediately seeing the failing tests it caused. This requires running tests in headless mode so that errors are reported directly in the terminal, where the AI can read them. The trade-off is that you, as the developer, might miss subtle details about what happened. In rare cases, you may also hit the AI’s auto-iteration limit.
Full-stack Workflow & Project Management
6. Separate concerns between frontend and backend
Initial development
- Begin by separating the frontend (FE) and backend (BE) into two repositories and develop them independently.
- Focus on FE first - it is often more interesting and engaging for the app creator and forms the foundation for all end-to-end testing.
- While working on the standalone FE, simulate the backend using mock datasets, browser local storage, and session cache. Use static test data and tools such as Mockaroo to generate realistic API responses.
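As an illustration, here is a minimal sketch of such a mock layer in TypeScript. The `User` shape, the seed data, and the `fetchUsers` signature are hypothetical stand-ins for your real API:
```ts
// src/mocks/api.ts - a minimal sketch of a mocked backend layer.
export interface User {
  id: string;
  name: string;
  email: string;
  avatar: string;
}

// Static test data; tools like Mockaroo can generate larger, realistic sets.
const SEED_USERS: User[] = [
  { id: '1', name: 'Ada Lovelace', email: 'ada@example.com', avatar: '/avatars/ada.png' },
  { id: '2', name: 'Alan Turing', email: 'alan@example.com', avatar: '/avatars/alan.png' },
];

const STORAGE_KEY = 'mock_users';

// Persist the seed in browser local storage so the fake backend
// survives page reloads during standalone FE development.
function ensureSeeded(): void {
  if (!localStorage.getItem(STORAGE_KEY)) {
    localStorage.setItem(STORAGE_KEY, JSON.stringify(SEED_USERS));
  }
}

// Mirrors the shape of the future real API call, so swapping in the
// live backend later is a change at a single import site.
export async function fetchUsers(): Promise<User[]> {
  ensureSeeded();
  return JSON.parse(localStorage.getItem(STORAGE_KEY) ?? '[]') as User[];
}
```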
Techniques for merging repositories
Once both parts are mature, merge them into one. Before merging, however, request a mindmap that explains how the joining repository (typically the BE) will integrate into the receiving repository (typically the FE). Provide this mindmap to the AI session working in the receiving repo during the merge to guide integration.
Mindmaps are invaluable when dealing with complex new repositories. Start with a high-level overview, then progressively drill down into specific areas by requesting detailed breakdowns of individual components.
7. Keep the README active
Treat the project’s README as a living document and a hand-off guide for the next developer. Key sections may include:
- The app's primary features
- An overview of the repo structure (roughly where everything lives)
- The component hierarchy (common components and where they are used)
- Data flow and core functionalities (you may use a few user scenarios to illustrate these)
- Design systems, if applicable (e.g. colour schemes, typography definitions, reusable styles, etc.)
Whenever a significant design decision is finalised, or a new feature is implemented, prompt the AI to update the README. Point it to the relevant section so the changes are captured in context. This practice maintains a single source of truth and prevents the AI from drifting off course throughout the development cycle.
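A hypothetical prompt might read: “We have just finalised the dark-mode colour scheme - update the Design systems section of the README to match, and leave all other sections untouched.”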
8. Version control your (hard-earned) progression
Use git stash for quick, temporary checkpoints and git commit for clear, permanent records of your code progression. This makes it straightforward to revert changes or run A/B tests on specific features or design decisions.
For added clarity, request a short changelog from the AI after each significant update, and tag releases at key milestones. This ensures your work remains traceable, auditable, and easy to revisit later.
Common pitfalls and recovery patterns
| Pitfall | Result | Recovery |
| --- | --- | --- |
| Dumping too much context at once | Derailing or hallucination | Reduce to only relevant files and facts |
| Asking for vague outcomes | Generic code | Specify inputs, outputs, constraints, and tests |
| Letting the assistant iterate unseen | Hidden regressions | Run tests on every tranche and inspect diffs |
| Chasing micro-fixes in a bloated thread | Context rot | Consolidate into a fresh prompt with a clear do-not-do list |
| Over-trusting generated refactors | Subtle behavioural drift | Insist on behaviour-preserving tests and incremental diffs |
Key Takeaways
Provide solid context. Test vigilantly. Iterate with intent. Keep it simple.
The real skill is not vibe coding itself - it is engineering AI the way we engineer software: with structure, clear processes, and disciplined feedback loops. Developers already hold the keys. The challenge is choosing to use them with strategy and consistency.
Happy vibing!