When you first use an AI agent for coding, it feels like magic. You test it with a simple request, and it executes perfectly. Soon, you're off to the races, building that SaaS idea you've always dreamed about. And then it happens. The AI fucks something up. Your code is broken, bro, and you don't know how to fix it.
AI tech has not yet reached the point where you can let an agent run wild in a complex system and expect good results. Pure "vibe coding" invariably results in a convoluted mess of a codebase, and neither you nor the AI will be able to figure out what the heck is going on.
However, if you maintain high-level oversight of the WHAT, HOW, and WHY of your codebase, you can indeed use AI to orchestrate a working piece of high-quality software.
This thread is for discussing how to effectively leverage AI for coding as the technology continues to advance.
Here are some of my strategies:
1. Document the codebase religiously.
Use the AI to update the documentation every time you complete a change, or do it manually if you can. ALWAYS keep the documentation in sync with development, and always provide it as context to the model when making requests.
The different providers each have their own way of doing this. I've found .md files to be universally effective. Cursor Rules are also very powerful for preserving context. In Claude Code, you can keep CLAUDE.md files in critical directories, and Claude will always find them.
I always keep forward-looking Known Issues and Feature Ideas sections in my documentation, and keep them up to date. For complex features, I have the model write its plan down before making any code changes.
Use your custom instructions to tell the model exactly how to document your codebase according to your preferences. I've found it helps to use words like "specific", "detailed", "developer-facing", etc.
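For example, one line from my instructions reads something like: "When you finish a change, update the relevant .md files with specific, developer-facing notes: name the files you touched, the functions you changed, and why." Adjust the wording to whatever you actually care about.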
Anything you find yourself repeating to the model in prompt after prompt? Add it to the documentation!
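To tie this together, here is roughly the skeleton I aim for in a CLAUDE.md or project doc (the section names are just my preference, not anything official):
- Project Overview: what the app does, the tech stack, how to run it
- Architecture: the main modules, how they interact, where the entry points are
- Conventions: naming, error handling, testing, anything you keep repeating in prompts
- Known Issues: bugs and rough edges we know about but haven't fixed yet
- Feature Ideas: forward-looking plans, so the model knows where things are heading
- Current State / Next Steps: what just changed and what to do next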
2. Protect your context window
One pitfall common to every AI tool is the context window. Once a chat runs long enough, the model simply forgets the original request and starts going off the rails. Yet it will tell you with full confidence that it knows exactly what it is supposed to be doing.
The most basic way to mitigate this is to open a new chat. To ensure the model doesn't get lost in the fresh chat, this is where your documentation comes in handy. Before opening a new chat, request a documentation update with specific actions to take to get back on track. Then feed that into the next chat and keep going.
There are more complex ways to do this, such as using MCP servers or delegating tasks to subagents. But for the average user, opening a new chat and keeping detailed documentation is good enough.
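The handoff request can be as simple as something like this (the wording is just what has worked for me):
Update the project documentation to reflect everything we changed in this session. Add a "Current State / Next Steps" section that lists what is finished, what is half-finished, and the specific next actions needed to continue. I will use this to start a fresh chat.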
3. Plan first, code second
It is always tempting to just let the AI do whatever it wants from the opening request. But more often than not, this causes breaking changes. To mitigate this, I always start in "plan mode" or "architect mode" or whatever your provider calls its discussion mode.
Start with questions aimed at getting the AI to understand what you want to accomplish and to gather the context it needs. Carefully read the plan it gives you and PUSH BACK against anything that doesn't seem right, or sounds confusing, overengineered, or just plain wrong. This WILL happen and is to be expected.
Keep the back-and-forth going until you agree 100% with the plan, and then approve the execution. Let the model work until it is done.
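To give a made-up example of what pushing back looks like: if the plan proposes adding a caching layer for a value that is only read once at startup, reply with something like, "Why does this need a cache? Either justify it or drop it from the plan and reuse the existing config loading."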
4. Code Reviews
Oftentimes, when making many changes, the model will stop before actually reaching the end of the plan but claim it has completed it. This is frustrating, but it's a limitation of the technology right now.
To mitigate this, I like to tell the model to check its work when it's done. When it comes back at you with "Excellent, the feature is now 100% implemented! Your codebase is now yada yada yada," don't blindly believe it.
Say, "Are you sure about that? Check your work for true completion and anything you missed along the way."
It will usually go back and correct itself. Skip this step and you end up with bugs from incomplete refactors that you'll be fixing later when they surface.
5. USE GIT
If you fuck around with this technology for long enough, you are going to end up in a situation where you accidentally let the AI permanently delete something that you really needed.
GitHub is there to save your ass. Every time you update the documentation, commit and push. Make it part of your natural workflow. You will thank me later.
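In practice this can be as simple as three commands at every checkpoint (the commit message here is just an example):
git add -A
git commit -m "Add password-reset flow and update docs"
git push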
-------------
REUSABLE PROMPTS
Here are a few reusable prompts that I have saved. They are effective because they force the model to gather context and consider the implications.
1. Full Codebase Audit After Making Changes
Are there redundant, deprecated, or legacy methods that remain in this codebase? Is there duplication or scattered logic? Is there confusion, mismatches, or misconfiguration? Overengineering, dead code, or unnecessary features? Is the system completely streamlined and elegant, with clear separation of concerns, and no security vulnerabilities?
Audit DEEPLY and provide a clear, actionable response, and a highly specific plan to resolve any issues you discover.
Do not guess or make assumptions. Review the codebase directly. Keep the big picture in mind always, tracking how files interact and the overall structure of the system. You will need supporting evidence and data-flow traces from the codebase before the plan can be approved and we move forward.
If the intended functionality is muddy or difficult to understand, STOP and ask the user to clarify. The intended functionality must be CRYSTAL CLEAR.
2. Adding a new feature
We would like to add [FEATURE] to this system. This feature should [DESCRIBE]. It must align with our existing system of [EXPLAIN].
Create a detailed implementation plan that outlines each file that must be touched, and specific changes that must be made.
We are looking for a clean, seamless implementation strategy. You must conduct thorough research during this planning phase. Your plan should not contain any analysis or code review. I expect analysis and code review to be completed by the time you present your plan.
Prepare a detailed action plan for my review. Together we will finalize and refine the plan for execution.
3. Single feature audit
Audit the [FEATURE] system for completeness, security standards, and correct wiring, specifically related to [REQUIREMENT].
Identify dead or redundant code, scattered or overly complex logic, and areas where things can be simplified without losing functionality. Also note gaps in implementation, incomplete refactors, loose ends, and sources of confusion.
Present a focused audit with a step-by-step action plan that outlines the current implementation, discovered errors, opportunities for improvement, and potential for optimization.
Prepare to enhance, debug, or refactor the system as needed according to user feedback, in order to ensure robust and reliable operation that is flexible, extensible, easy to maintain, and crafted with precision.
--------
The above is just the basics. It goes a LOT deeper than what I laid out here, and as I keep following this tech and using it extensively, I plan to make use of more of the advanced features. I have been experimenting with sub-agents and such, but haven't fully gotten them working the way I want yet.
Would love to hear about some things that other people have discovered.