The RTGO Prompt Framework
The gap between a prompt that produces something usable and one that wastes your time is almost never about the model. It’s about structure.
Not length, not complexity — structure. The specific pieces of information you include, and whether they address the four fundamental questions the model needs answered before it can generate a precise response: who is doing this, what are they doing, what are they trying to achieve, and what should the result look like.
That’s what the RTGO framework codifies. Role. Task. Goal. Output. Four fields, in that order, each one doing a specific job.
What RTGO Is and Why Four Components
Most prompt frameworks feel invented in a meeting. RTGO is derived from what experienced prompt engineers do instinctively when they write prompts that work.
Every effective prompt contains some version of these four signals, even when they’re not labeled. RTGO just makes them explicit so you can construct them deliberately rather than accidentally.
The framework also solves a specific failure mode: it separates Task (what you’re doing) from Goal (why you’re doing it). This distinction sounds minor. It isn’t. The task describes the action; the goal describes the purpose behind it. They point the model at different things, and including both significantly changes the output.
The Four Components
Role: Who the Model Is for This Prompt
Role does two things simultaneously: it establishes the expertise level the model should reason from, and it defines the communication register it should use.
Without a role, the model averages across everyone who has ever discussed your topic — experts, beginners, generalists, specialists, technical writers, and Reddit commenters. With a specific role, it weights toward the professional context you actually need.
A useful role names more than a title. It includes a domain, an experience signal, and — if the task is style-sensitive — a behavioral description. “You are a senior UX researcher” is a role. “You are a senior UX researcher with a background in enterprise software, who writes findings reports for non-technical executive audiences” is a role that constrains meaningfully.
The behavioral signal is what most people skip. It tells the model not just who the persona is, but how that person operates. That’s the part that shapes tone and reasoning style.
Task: What You Are Asking the Model to Do
Task is the specific action, not the topic. This is the most commonly conflated component in weak prompts.
“Marketing strategy” is a topic. “Write a 90-day content marketing plan” is a task. The difference is a verb, a scope, and a product. A task always implies doing something to something. If your task description could be rephrased as a noun phrase, it’s probably still a topic.
Strong task statements use precise action verbs: write, rewrite, summarize, extract, classify, compare, critique, convert, generate, outline, prioritize. Vague verbs — help, discuss, talk about, explore — leave too much of the action undefined.
The task should also carry implicit scope. “Summarize this document” leaves length undefined. “Summarize this document in three bullet points” is complete.
Goal: The Purpose Behind the Task
This is the component that separates RTGO from simpler frameworks, and it earns its place.
Goal answers the question: what is this output for? Who will read it, what decision it will inform, and what outcome it needs to produce. The same task executed toward different goals produces legitimately different outputs — and should.
Ask a model to “write a summary of this research paper.” Then ask it to write the same summary “so that a non-technical executive can decide whether to fund further research.” The second prompt adds a goal. The model now knows the audience, the stakes, and the direction the output needs to push the reader. The summary it produces will be materially different — shorter, more outcome-focused, less methodologically detailed.
Goal is where you explain the real-world context that makes a technically correct output actually useful. Without it, the model produces what’s statistically average for the task type. With it, it produces what’s appropriate for your specific situation.
Output: The Shape and Format of the Response
Output is the explicit blueprint of what you want the model to return: length, format, structure, and any content constraints.
The model has no format preference of its own. It defaults to whatever format is statistically most common for the type of content being produced. For a business document, that might be paragraphs. For a technical question, it might be a bulleted list. For a comparison, it might be prose. These defaults are often wrong for your specific context.
Specifying output turns “a reasonable result” into a usable one. Common output specifications:
- Length: word count, number of items, page limit
- Structure: numbered list, table, JSON, sections with headers, a single paragraph
- Tone or register: formal, conversational, technical, plain language
- Exclusions: what the output must not include (avoid caveats, do not use passive voice, no headers)
The exclusion half of output is underused and highly effective. Telling the model what to leave out eliminates specific failure modes before they appear, rather than fixing them in a follow-up prompt.
A Complete RTGO Prompt, Built Step by Step
Here’s the same prompt built progressively to show what each component adds.
Role only:
You are a senior product marketing manager.
Result: generic, professional tone, no specific direction.
Role + Task:
You are a senior product marketing manager.
Write a product one-pager for a new B2B SaaS tool.
Result: a plausible one-pager, mostly boilerplate structure, no differentiation.
Role + Task + Goal:
You are a senior product marketing manager.
Write a product one-pager for a new B2B SaaS tool.
The goal is to give our enterprise sales team a leave-behind that addresses the top three objections we consistently hear from IT procurement: security, integration complexity, and total cost of ownership.
Result: now the content has direction. The model focuses on the objections, not the features.
Role + Task + Goal + Output:
You are a senior product marketing manager.
Write a product one-pager for a new B2B SaaS tool.
The goal is to give our enterprise sales team a leave-behind that addresses the top three objections we consistently hear from IT procurement: security, integration complexity, and total cost of ownership.
Output: maximum one page (approx. 300 words). Use three sections, each headed by the objection it addresses. Plain prose, no bullet points. Professional but readable by non-technical readers.
Result: the one-pager you’d actually hand to a sales rep.
Each component earned its place. None is redundant.
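The progressive build above can be sketched as a small assembly function. This is a minimal illustration, not part of any library — the function name and field handling are assumptions of this sketch:

```python
# Minimal sketch: assemble an RTGO prompt from its four components.
# All names here are illustrative, not from any library or API.

def build_rtgo_prompt(role: str, task: str, goal: str, output: str) -> str:
    """Join the four RTGO components in order, skipping any left empty."""
    parts = [role, task, goal, f"Output: {output}" if output else ""]
    return "\n".join(p for p in parts if p)

# The complete prompt from the walkthrough above:
prompt = build_rtgo_prompt(
    role="You are a senior product marketing manager.",
    task="Write a product one-pager for a new B2B SaaS tool.",
    goal=("The goal is to give our enterprise sales team a leave-behind "
          "that addresses the top three objections we consistently hear "
          "from IT procurement: security, integration complexity, and "
          "total cost of ownership."),
    output=("maximum one page (approx. 300 words). Use three sections, "
            "each headed by the objection it addresses. Plain prose, no "
            "bullet points. Professional but readable by non-technical "
            "readers."),
)
```

Dropping any argument reproduces the earlier, weaker stages of the build, which makes the function handy for testing what each component contributes.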
How RTGO Relates to Longer Prompt Frameworks
RTGO doesn’t compete with more comprehensive frameworks — it’s a subset of them. Frameworks that include Context, Constraints, Examples, and Negative Instructions cover more ground and produce better results on complex, high-stakes tasks.
RTGO is optimized for a different scenario: fast, reliable prompts for everyday tasks that don’t warrant a full structured prompt session. It’s the framework that fits in your head. Most people can internalize “Role, Task, Goal, Output” quickly enough that it becomes a mental checklist they run before any prompt, not a template they paste from a file.
For tasks where you need more precision, adding a Context field between Goal and Output — background information the model must use — gets you most of the way to a complete prompt without requiring the full framework. As covered in The Anatomy of a Perfect Prompt, context is the component most commonly omitted, and its absence is the primary driver of generic output.
Where to Use RTGO and Where Not To
RTGO adds value proportional to how much the output quality matters and how much interpretive freedom the task leaves the model.
Use RTGO fully when:
- The output will be used directly (a document, an email, a report, a code review)
- The task has style or format requirements that aren’t obvious defaults
- You’re running the same type of prompt repeatedly and want consistent results
- The output will go to a specific audience where appropriateness matters
Abbreviate it when:
- You’re doing exploratory work and want to see what the model produces with minimal constraint
- The task is fully specified by a precise instruction (e.g., “convert this CSV to JSON”)
- You’re asking a factual question with an objectively correct answer
The diagnostic question: would two different, reasonable people interpret this prompt differently and produce legitimately different outputs? If yes, RTGO helps narrow the distribution. If no, structure adds overhead without value.
Building RTGO Into a Repeatable System
The highest-leverage use of RTGO is not individual prompts — it’s templates. Once a prompt structure produces reliable output for a recurring task, the four components should be frozen with variable placeholders for the parts that change.
A recurring task that fits RTGO well — weekly competitor analysis, monthly content briefs, client email drafts — becomes a template where Role, Goal, and Output stay constant, and only Task (and possibly some context within it) varies between runs.
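The frozen-template pattern described above can be sketched with Python’s standard `string.Template`. The task content and company name below are hypothetical examples, not from the source:

```python
# Sketch of an RTGO template for a recurring task: Role, Goal, and
# Output are frozen; only the Task slot varies between runs.
from string import Template

# Hypothetical weekly competitor-analysis template.
COMPETITOR_ANALYSIS = Template(
    "You are a senior market analyst covering B2B SaaS.\n"
    "$task\n"
    "The goal is to brief the product team on competitive moves that "
    "should influence next quarter's roadmap.\n"
    "Output: a five-bullet summary, one sentence per bullet, no "
    "introductory paragraph."
)

# Only the Task changes between runs; everything else stays constant.
prompt = COMPETITOR_ANALYSIS.substitute(
    task="Summarize this week's pricing and feature changes at Acme Corp."
)
```

Because `substitute` raises an error on a missing slot, a forgotten Task fails loudly instead of silently producing a prompt with a hole in it.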
If you want a faster way to build and test these templates, the Prompt Scaffold tool provides dedicated fields for each component with a live assembled preview. You fill in Role, Task, Context, Format, and Constraints separately, see the constructed prompt in real time, and copy it in one click. It’s particularly useful when you’re iterating on a template across several runs and want to isolate the effect of changing one component at a time.
Once you have a working template, running high-volume variants of it at scale has real cost implications. The LLM Cost Calculator lets you model how your prompt length — including your Role and Goal scaffolding — scales across different models before you commit to an architecture.
The Component RTGO Leaves Out
RTGO is four fields, not six. The two it doesn’t include — Examples and Constraints — aren’t unnecessary; they’re additions that sit on top of the base framework.
Examples (few-shot demonstrations) are the highest-leverage upgrade when output style matters more than content. When a description of what you want can’t fully convey it, showing one concrete example communicates everything the description leaves implicit. This is covered in depth in Zero-Shot vs Few-Shot Prompting.
Constraints (explicit rules the output must follow) operate differently from Output specifications. Output describes shape; Constraints describe rules. “Maximum 200 words” is Output. “Do not use passive voice, do not reference competitor names, avoid any claims we cannot substantiate” are Constraints. Adding them to your RTGO base makes the framework essentially complete.
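The constraint upgrade described above can be sketched as a list appended to an RTGO base. The base prompt and constraint strings are the examples from this section, arranged illustratively:

```python
# Sketch: adding an explicit Constraints block on top of an RTGO base.
# The constraint wording is taken from the examples in the text above.
constraints = [
    "Do not use passive voice.",
    "Do not reference competitor names.",
    "Avoid any claims we cannot substantiate.",
]

base_prompt = (
    "You are a senior product marketing manager.\n"
    "Write a product one-pager for a new B2B SaaS tool.\n"
    "Output: maximum 200 words."
)

# Output describes shape; Constraints describe rules, so they get
# their own labeled section rather than being folded into Output.
full_prompt = base_prompt + "\nConstraints:\n" + "\n".join(
    f"- {c}" for c in constraints
)
```

Keeping constraints in a list makes them easy to version per task: the RTGO base stays fixed while the rules grow or shrink with the situation.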
The reason RTGO doesn’t include them by default: they’re task-specific additions, not universal requirements. Role, Task, Goal, and Output are present in virtually every effective prompt. Examples and Constraints are deliberate upgrades for specific situations.
A final practical note: the order matters. Writing Role first anchors everything that follows. If you write Task first, you’ll write it from the perspective of yourself-as-user. Writing Role first forces you to think from the model’s operating position — which produces a more precise task description as a direct consequence.
Related reading:
- Role Prompting: Give Your AI a Job Title — A detailed breakdown of the Role component and how to write one that actually constrains output
- The Anatomy of a Perfect Prompt — The full six-component framework with worked examples of each component interacting
- Zero-Shot vs Few-Shot Prompting — When to add examples on top of your RTGO base, and how to write them effectively
- Prompt Scaffold — Structured fields for assembling and previewing prompts built on the Role → Task → Context → Format → Constraints model