Training — RIDE
The Framework That Changes Everything
Video Lesson Coming Soon
A video walkthrough for this module is in production. For now, dive into the written content below.
What You'll Learn
- ✓ The RIDE Framework deep dive
- ✓ R — Role: identity with implied expertise
- ✓ I — Instructions: step-by-step process design
- ✓ D — Dos & Don'ts: observable, verifiable rules
- ✓ E — Examples: why showing beats telling
- ✓ The Escalation section
- ✓ 3 complete annotated system prompts
- ✓ Copy-paste starter template
In this module: 9 sections
Most Agent Builders Skip This Step — Don't
This module is the most important in the course. The system prompt is the difference between an agent that produces mediocre work and one that produces excellent work. It shapes every output and sets the ceiling on what your agent can achieve.
What Is a System Prompt?
The system prompt is the foundational instruction document your agent receives before any task begins. It is not a welcome message. It is not a description of your service. It is the rulebook, role description, process guide, and quality standard all wrapped into one document.
Think of it as the onboarding packet you would give to a brilliant new employee on their first day — except this document is the only thing they will ever read. They will never learn by watching you work. They will never pick up habits from the office. They will never ask a more experienced colleague for advice. Everything they know about how to do this job comes from this document and the examples you provide.
That is both the power and the responsibility of the system prompt. Get it right, and your agent performs like a seasoned professional. Get it wrong, and no amount of good AI technology will save you. The good news: there is a framework that makes getting it right straightforward.
The RIDE Framework
RIDE stands for Role, Instructions, Dos and Don'ts, and Examples. Plus a critical fifth section — Escalation — that sits outside the acronym but inside every good system prompt.
- R — Role: who the agent is
- I — Instructions: how it does the work
- D — Dos and Don'ts: the rules it always follows
- E — Examples: what excellent output looks like
- Escalation — what to do when uncertain
Every section serves a specific purpose. Together, they give your agent everything it needs to produce professional-quality work consistently. We are going to walk through each section in detail, with templates you can fill in and real examples showing weak versus strong versions. By the end of this module, you will have a complete system prompt written and ready to use.
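The five sections compose, in order, into a single document. Here is a minimal sketch in Python; the headings, section contents, and function name are illustrative placeholders, not required wording:

```python
# Placeholder contents for each RIDE section plus Escalation.
# Real sections would be much longer; only the structure matters here.
SECTIONS = {
    "Role": "You are a senior content writer specialising in B2B SaaS...",
    "Instructions": "1. Read the brief.\n2. Outline the structure.\n3. Draft.",
    "Dos and Don'ts": "- Use active voice in at least 90% of sentences.",
    "Examples": "Example 1: Input: [brief] Output: [excellent work]",
    "Escalation": "If the brief is missing information, ask before starting.",
}

def build_system_prompt(sections: dict[str, str]) -> str:
    """Join the sections in RIDE order, each under its own heading."""
    return "\n\n".join(f"## {name}\n{body}" for name, body in sections.items())

prompt = build_system_prompt(SECTIONS)
```

Assembling the prompt programmatically also makes it easy to swap one section at a time as you refine it.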
R — Role: Giving Your Agent an Identity
The Role section is your agent's introduction and the first element of your system prompt. You are not writing a resume. You are defining expectations: who this agent is, what it specialises in, and what professional standard you are setting.
I — Instructions: Teaching Your Agent How to Work
Instructions are the step-by-step process for how your agent approaches work. Your agent needs to know the process before it starts.
Step-by-step matters because it forces you to think through the workflow. Write your instructions as a numbered list, not as prose. Each step should be clear enough that the agent knows exactly what to do next, and in what order.
D — Dos and Don'ts: The Rules That Never Bend
Rules govern behaviour beyond the standard process. They are the always-on constraints that apply to every task, every time, regardless of the specific brief.
The key principle for writing good rules is that they must be observable. "Write well" is not a rule; there is no way to check whether it was followed. "Use active voice in at least 90% of sentences" is a rule; anyone can verify it. The more concrete your rules, the more reliably the agent follows them.
Rules fall into three categories.
Quality Rules define what a good output looks like. Use active voice. Back every claim with a specific example or data point. Keep paragraphs to four lines maximum. Use subheadings every 250-300 words for content over 600 words. Never use words like leverage, delve, synergy, holistic, transformative, cutting-edge, game-changer. This forces the model to use more natural language.
Format Rules define how the output should be structured. Deliver in clean markdown with H2 for main sections and H3 for subsections. Include a word count at the end. Use bold sparingly. Never use more than two levels of nested bullet points.
Communication Rules define how the agent interacts with clients. Confirm your understanding of the brief before starting on any job over 500 words. Notify when a task is complete with a one-sentence summary. Never promise specific outcomes to clients. Never comment on the client's business unless asked. Never mention AI, tools, platforms, or automation in any client-facing communication.
That last rule deserves emphasis. Never mention AI in client-facing communication. Clients are hiring you for a service. The moment a client learns they are talking to an AI agent, the perceived value drops, regardless of how good the output is. You deliver quality work. That is what matters.
Start with 8-12 rules across all three categories. Too few, and the agent has too much room to make bad decisions; too many, and the important rules get lost in the noise. Your rules section is a living document that improves as you learn what your agent needs.
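Observable rules have a useful side effect: many of them can be checked mechanically before an output ever reaches a client. A minimal sketch, using the banned-word list from the quality rules above; the ~60-word paragraph limit is an illustrative stand-in for "four lines", not a fixed standard:

```python
import re

# Banned words taken from the quality rules above.
BANNED_WORDS = {"leverage", "delve", "synergy", "holistic",
                "transformative", "cutting-edge", "game-changer"}

def check_rules(text: str) -> list[str]:
    """Return human-readable violations of two observable rules."""
    violations = []
    words = set(re.findall(r"[a-z-]+", text.lower()))
    for banned in BANNED_WORDS & words:
        violations.append(f"banned word used: {banned}")
    # Approximate "four lines maximum" as roughly 60 words per paragraph.
    for i, para in enumerate(text.split("\n\n"), start=1):
        if len(para.split()) > 60:
            violations.append(f"paragraph {i} exceeds ~60 words")
    return violations
```

A rule you cannot write a check like this for is usually a rule the agent cannot reliably follow either.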
E — Examples: The Most Powerful Training Tool You Have
Rules tell your agent what to do. Examples show it. The difference between telling and showing is significant.
A rule like "write in a warm but authoritative tone" is interpreted differently by different AI models. An example of writing that achieves a warm but authoritative tone is unambiguous. The agent reads it and internalises the standard — the vocabulary choices, the sentence rhythm, the balance between personality and professionalism.
Two excellent examples do more training work than ten pages of rules.
Your examples should be the best work you have ever produced in this service category — because that is the bar you want your agent to aim for. Not average work. Not good enough work. Your best.
If you are starting fresh and do not have past work, create two examples manually. Yes, this takes time. Yes, it is worth it. These examples are the foundation of your agent's quality.
For each example, include three things: the input (the brief or request), the output (the excellent work), and optionally, annotations explaining what makes this example strong.
Format: Example 1: Input and Output. Example 2: Input and Output. The annotations are optional but valuable, especially in the beginning. They teach the agent not just what good output looks like, but why it is good.
How many examples? Two is the minimum. Three to five is the sweet spot for most services. Beyond five, you hit diminishing returns and start using up context space that could be better spent on instructions or rules. Quality beats quantity every time. Two excellent examples beat five mediocre ones.
Examples go at the end of your system prompt. The agent reads the role, understands the process, absorbs the rules — and then sees concrete demonstrations of everything working together.
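The input/output/annotation structure above can be sketched as a small data shape. The field names and formatter here are illustrative assumptions, not a required format:

```python
from dataclasses import dataclass

@dataclass
class Example:
    input: str            # the brief or request
    output: str           # the excellent work
    annotation: str = ""  # optional: why this example is strong

def format_examples(examples: list[Example]) -> str:
    """Render examples in the Input/Output format, numbered in order."""
    blocks = []
    for n, ex in enumerate(examples, start=1):
        block = f"Example {n}:\nInput: {ex.input}\nOutput: {ex.output}"
        if ex.annotation:
            block += f"\nWhy it works: {ex.annotation}"
        blocks.append(block)
    return "\n\n".join(blocks)
```

Keeping examples as structured records rather than loose text makes it easy to swap in a stronger example later without disturbing the rest of the prompt.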
Beyond RIDE: The Escalation Section
Escalation is not part of the RIDE acronym — but it belongs in every system prompt. It is the fifth section, and it is the one most beginners skip. Do not make that mistake.
Every agent will eventually encounter something it does not know how to handle. A brief that is missing key information. An ambiguous request. A revision that contradicts the original instructions. A task that falls outside the service scope.
Without escalation instructions, the agent guesses. Sometimes it guesses right. Often it does not. And a bad guess delivered to a client is far worse than an honest "I need clarification before I proceed."
Five Escalation Triggers Every Agent Needs:
If the brief is missing required information: Do not attempt to complete the task by guessing. Instead, summarise your current understanding and ask for the missing information in a single clear message before proceeding.
If the brief is ambiguous: Describe both possible interpretations in one sentence each and ask the client which they intended. Do not attempt to merge both interpretations into one output.
If you are not confident the output meets quality standard: Flag it for review rather than delivering it directly. Describe specifically what you are unsure about.
If the request is outside scope: Respond politely: "This request is outside the scope of [service name]. For [requested task], I would recommend [brief alternative]. Is there something I can help with within [your service area]?"
If a revision request is submitted more than twice for the same issue: Escalate for manual review rather than attempting another automatic revision. Repeated failure means the problem is structural, and continuing to guess will not solve it.
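The five triggers above amount to a lookup from condition to action. A minimal sketch; the trigger names and action strings are illustrative assumptions, condensed from the descriptions above:

```python
# Each detected condition maps to the prescribed response.
ESCALATION_ACTIONS = {
    "missing_information": "Summarise current understanding and ask for the missing details.",
    "ambiguous_brief": "Describe both interpretations and ask which was intended.",
    "low_confidence": "Flag the output for review and state what is uncertain.",
    "out_of_scope": "Decline politely and suggest an alternative.",
    "repeated_revision": "Escalate for manual review instead of revising again.",
}

def escalate(trigger: str) -> str:
    """Return the prescribed action; unknown triggers default to review."""
    return ESCALATION_ACTIONS.get(trigger, "Pause and request human review.")
```

Note the default: when the agent hits a situation none of the triggers anticipated, the safe behaviour is still to stop and ask, never to guess.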
Common Mistakes and How to Avoid Them
Across hundreds of system prompts, the same failure patterns come up repeatedly.
Mistake 1: The role is too vague. "You are a helpful assistant" or "You are a writer" gives the agent almost nothing. Be specific about the domain, audience, and voice. If you cannot describe your agent's expertise in at least three sentences, the role section is not detailed enough.
Mistake 2: Instructions are goals instead of processes. "Write a great blog post" is a goal. "Read the brief, identify the main argument, outline the structure, write the body first, write the introduction last, and check against the quality criteria before delivering" is a process. Goals tell the agent where to end up. Processes tell it how to get there. Agents need processes.
Mistake 3: Rules are not observable. "Write well" cannot be checked. "Use active voice in 90% of sentences" can be checked. "Be creative" cannot be checked. "Include at least one original analogy per 500 words" can be checked. Every rule should be specific enough that someone reading the output could verify whether it was followed.
Mistake 4: No escalation section. Without escalation instructions, the agent guesses when it encounters ambiguity, missing information, or edge cases. Always define what the agent should do when it is uncertain.
Mistake 5: No examples, or weak examples. Two excellent examples outperform ten pages of rules. If your examples are average, your output will be average. Use your best work.
Mistake 6: Trying to be comprehensive on day one. Your first system prompt does not need to cover every possible scenario. It needs to handle the core 80% of work well. The remaining 20% — the edge cases and unusual requests — you will discover through testing and real work. Your system prompt is a living document that gets better over time.