Famous first words
I work extensively with AI, and since it is a new tool that introduces an entirely different way of working, adapting to it takes time. In this context, I am still exploring the best way to integrate AI or agentic chatbots into my development workflow. Recently, I reached another milestone that I’d like to share.
Preliminary notes: You may think that “the plAIbook helps you to vibe code everything”. You are wrong. You still need a profound understanding of architecture and design patterns. The more complex a project gets, the harder it is to maintain with AI alone. This is not a fire-and-forget solution.
Also, since this is the main (though not the only) focus of my development work, the guide leans slightly toward creating little PoCs using Python and/or Flask. But I believe it’s quite easy to adapt the process to other languages, frameworks or platforms.
What are the landmarks here?
- a meta prompt that returns the prompts you actually need for your coding Agent
- a fully fledged pre-prompt instruction
- an environment that makes it easy to refactor or dive into the code on your own
- an MCP server (yeah, buzzwords are required here) that helps the Agent to supervise its own work
You will find a more technical summary, all prompts and all scripts here: click me
That said, without further ado, let’s dive into the details.
The Process
The development process is separated into three phases:
Phase 1 - Prepping
First we need to prep our environment, meaning: set up Visual Studio Code. I assume that you have a Copilot subscription, though the whole process will work without it, and it should also work with one of the derivatives, like Windsurf or Cursor.
1. Add pre-prompt instructions
Assuming you have created an empty folder, declared it as a workspace and initialized a git repo there, this is how you proceed:
Take the pre-prompt instructions (from here) and put them into .github/instructions/global.instructions.md.
Hint #1: If you click the cog wheel at the top of Copilot’s chat window and select Instructions, you can add even more specific instructions. They will be placed in the very same folder.
Hint #2: As these are global instructions, you may also want to place them in copilot-instructions.md instead - which is the native VSCode way. I prefer the first approach, as it allows you to maintain multiple instruction sets.
As you can see, the pre-prompt uses a lot of abbreviations and doesn’t read very naturally. That’s because it is sufficient for the machine to understand its task. However, let me guide you through some benefits and features I built in (a short sketch after this list illustrates a few of them):
- functions are not bigger than 20 lines, files don’t exceed 200 lines
- I am pursuing a modular approach with features separated into files and subfolders
- code must be human readable with appropriate inline documentation, files are structured and grouped into functional sections
- common security, logging and error handling measures are implemented, such as try/except blocks, rate limiting, input sanitizing and so on
- the agent creates commit messages on its own after every minor change
- extensive tests for functions and API endpoints
- lean environment with minimal dependencies
- extensive but not bloated documentation, with a PROCESS_MAP.md showing the overall architecture and flow of the application, and an UNFINISHED.md file outlining current work, which is quite helpful if the agent stopped working or you want to get back to the current state at a later point
- Docker implementation which easily enables you to take everything and run it anywhere else
- utilizing MCP for supervision tasks: cleaning up, testing and so on
- telling the agent to be sceptical and to ask the user for additional details
- trying to prevent “bug looping”, where the agent implements the same fix multiple times without success (this is another reason why UNFINISHED.md exists)
- and last but not least, and most important: no useless “Sorry” & “Apologies” or offering “This is the final solution” - you know how annoying this is when you read it for the twelfth time and the app is still not running ;-)
Note: As already said, this has a strong focus on Python and web app development, but most parts are also easily applicable if you want to, say, create a pure JavaScript app or even a mobile app.
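To make a few of these conventions concrete, here is a minimal, purely hypothetical Flask route written in the style the pre-prompt asks for: a short, documented function with input sanitizing and try/except error handling (rate limiting, e.g. via Flask-Limiter, is left out to keep it small). The endpoint and file name are made up for illustration; the agent’s actual output will of course look different.

```python
# app/routes/echo.py - hypothetical example, not generated by the agent
"""Tiny echo endpoint illustrating the pre-prompt conventions."""

from flask import Flask, jsonify, request
from markupsafe import escape  # basic input sanitizing

app = Flask(__name__)


# --- API endpoints ---------------------------------------------------------

@app.route("/api/echo", methods=["POST"])
def echo():
    """Return the sanitized 'message' field of the JSON body."""
    try:
        payload = request.get_json(silent=True) or {}
        message = str(payload.get("message", ""))[:500]  # crude length limit
        return jsonify({"message": str(escape(message))}), 200
    except Exception as exc:  # broad catch is acceptable for a small PoC endpoint
        app.logger.error("echo failed: %s", exc)
        return jsonify({"error": "internal error"}), 500


if __name__ == "__main__":
    app.run(debug=True)
```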
2. Enable MCP
The Model Context Protocol (MCP) is a fancy name for an interface that the Agent can talk to. On the one hand, it may help you control other programs - imagine connecting the Agent via MCP to Adobe Premiere and telling it to cut your home videos. Here, we are using it to allow the Agent to supervise and revise its own output!
First create a new file .vscode/mcp.json with this content - I will explain it in a second:
```json
{
  "servers": {
    "ai_coding_agent": {
      "type": "stdio",
      "command": "python3",
      "args": ["./.mcp_tools/mcp_server.py"]
    }
  }
}
```
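In plain terms: this registers one MCP server named ai_coding_agent that VSCode launches as a local child process (that is what the stdio transport type means) by running python3 ./.mcp_tools/mcp_server.py - no ports or network configuration required.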
Now create a folder named .mcp_tools and put the following files into it:
- mcp_server.py - the server itself
- load_context_files.py - allows the agent to refresh its pre-prompt brain
- run_tests.py - allows the agent to run tests against the app
- update_process_map.py - allows the agent to update the process map
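If you are curious what such a server roughly looks like before grabbing the files: below is a heavily trimmed sketch based on the FastMCP helper from the official MCP Python SDK (pip install mcp). The tool names match the files above, but the bodies are simplified placeholders, not the actual code from the repo.

```python
# .mcp_tools/mcp_server.py - trimmed sketch, not the original from the repo
"""Minimal MCP server exposing the three supervision tools over stdio."""

import pathlib
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ai_coding_agent")


@mcp.tool()
def load_context_files() -> str:
    """Return the pre-prompt instructions so the agent can refresh its context."""
    instructions = pathlib.Path(".github/instructions/global.instructions.md")
    return instructions.read_text(encoding="utf-8")


@mcp.tool()
def run_tests() -> str:
    """Run the test suite and hand the combined output back to the agent."""
    result = subprocess.run(
        ["python3", "-m", "pytest", "-q"], capture_output=True, text=True
    )
    return result.stdout + result.stderr


@mcp.tool()
def update_process_map(summary: str) -> str:
    """Append an agent-provided summary to PROCESS_MAP.md."""
    with open("PROCESS_MAP.md", "a", encoding="utf-8") as fh:
        fh.write(f"\n{summary}\n")
    return "PROCESS_MAP.md updated."


if __name__ == "__main__":
    mcp.run()  # stdio transport by default, matching .vscode/mcp.json
```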
(You may need to restart Visual Studio Code at this point.)
Now open the command palette to find our MCP server (MCP: List Servers) and start it (or use the very handy inline commands VSCode offers you). The output window should return something like this:
Hint: In some environments the command may need to be adjusted to python instead of python3, depending on your Python installation.
Perfect, the server is running. You can confirm that by clicking the little cog wheel at the top of your chat window, which opens the extension bar (usually on the left) - and as you can see, VSCode already offers a lot of different MCP solutions. At the very bottom is our own:
Test your agent directly using this tool reference, which acts as a magic word to reload the pre-prompt:

```
#load_context_files
```

This should be confirmed properly by the bot (you can use natural language too, but I prefer calling tools directly).
I suggest you check out mcp_server.py, which is not too complex. It defines the tools the MCP server offers to your bot; right now there are three: load_context_files, run_tests and update_process_map.
Hint: You may have noticed “Toolset” - you can combine sets of different MCP tools into one toolset to perform multiple actions. Also, if you are looking for more sophisticated interactions, you can of course feed information into your MCP server tools - for example, not only telling it to run tests but actually providing the data to run the tests with, submitted as arguments.
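As an illustration of such arguments, the run_tests tool from the sketch above could accept a test path chosen by the agent. Again, this is an assumed variation for demonstration, not the tool as shipped in the repo:

```python
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ai_coding_agent")


@mcp.tool()
def run_tests(test_path: str = "tests/") -> str:
    """Run pytest against a specific file or folder picked by the agent."""
    result = subprocess.run(
        ["python3", "-m", "pytest", "-q", test_path],
        capture_output=True,
        text=True,
    )
    return result.stdout + result.stderr


if __name__ == "__main__":
    mcp.run()
```

The agent can then call the tool with, say, tests/test_api.py to re-run only the tests relevant to its latest change.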
Phase 2 - Planning
In this phase we will create three to five sequential, self-contained prompts. Well, not we, but the AI, obviously. For this I created a “meta-prompt”. Throw it at your favorite AI agent and wait for the result. This is a strategy that I suggest to everyone: don’t try to wrap your head around writing the perfect prompt. Ask someone who should know it better than you: the LLM!
Hint: There’s one crucial piece of information that you should take with you at this point: don’t waste time working on prompts on your own, manually. Utilize the power of AI to get the perfect prompt!
So, we can keep this short: use the meta prompt to get a small handful of prompts for your actual development work. Those prompts reflect the different phases of your development process.
You’ll find my meta prompt here. Just fill out the four required fields:
- [NAME] - of your application
- [GOAL] - what’s this app about?
- [FEATURES] - what does it offer?
- [USERS] - who are the target users?
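To give you an idea, a completely made-up fill-in could look like this:
- [NAME] - LinkKeeper
- [GOAL] - a small Flask web app to save and tag bookmarks
- [FEATURES] - add and delete links, tag them, search them, export everything as JSON
- [USERS] - a single person running it locally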
Keep goal and features tight. Don’t fall into the feature creep rabbit hole. Your goal should be a good working first draft of your app. Focus on that and then add features one at a time.
Now run your meta prompt and collect your actual prompts for your Copilot agent.
Phase 3 - Execution
The rest should be a walk in the park. (Until it isn’t (˵ ͡° ͜ʖ ͡°˵).) Basically, you start pasting your prompts into the agent and watch it work on the task.
You may leave it working on its own like that, but I strongly suggest you keep watching. When it’s done, don’t just paste the next prompt. Try to understand what has been built - not in every detail, but for later troubleshooting and feature expansion it’s quite helpful to understand the architecture. You can always ask the machine, but it will probably not see the big picture.
As mentioned above, the pre-prompt covers not only technical issues but also tactical directives that should improve how the agent works and talks to you.
Summary
A practical guide to integrating AI and agentic chatbots into the software development workflow, focusing on Python and Flask, with step-by-step instructions for setting up environments, prompts, and automation tools.
Main Topics: AI-assisted coding, development workflow automation, Python, Flask, MCP server
Difficulty: Intermediate
Reading Time: approx. 8 minutes