MIS 432  ·  Student Resource  ·  7 min read

Writing Good Prompts

You open ChatGPT, Claude, Gemini — whichever one you use. You type: “Write me a discussion post about leadership for my management class.” You hit send. What comes back is technically correct, reasonably well-written, and almost completely useless — generic paragraphs aimed at nobody in particular, in a tone that fits no assignment you were actually given.

The problem is not the AI. It is the prompt. And the reason most prompts fail is not that they are too short or too vague — it is that they are written entirely from inside your own head, with no consideration for what the model actually knows, assumes, or needs.

Two concepts from cognitive psychology explain both the problem and the fix.

Theory of Mind and the Curse of Knowledge

Theory of mind is the ability to attribute mental states — beliefs, intentions, knowledge, desires — to others, and to understand that those states can differ from your own. When you prompt an AI, theory of mind is exactly the skill required. The model has no idea who you are, what you are working on, or what your audience is. It only knows what you put in front of it.

Metacognition — literally “thinking about thinking” — is the inward-facing counterpart. It is the ability to step back and evaluate your own thought process rather than just executing it. In prompting, metacognition is what kicks in when you stop and ask: am I even approaching this the right way?

Together they expose a specific failure mode called the curse of knowledge. Once you know something, it becomes almost impossible to imagine not knowing it. Your prompt makes complete sense to you — but the model has none of your context. The fix is not a longer prompt. It is one written with the model’s knowledge gaps in mind, not just your own.

Below are six concepts that will change how you prompt, illustrated through a running example: a student preparing a presentation on how Spotify’s recommendation system works.


1. Context & Hidden Assumptions

“The model starts from zero”

The model does not know you are a business student, that your presentation is for a non-technical audience, or that your professor wants you to focus on business implications rather than technology. Every prompt also carries unstated assumptions about format, length, and tone — and the model fills those gaps with its own defaults. Both problems have the same fix: make the invisible visible.

✕ Too vague
“Help me explain how Spotify knows what songs to recommend.”
✓ Much better
“I am a business student giving a 5-minute presentation on Spotify’s recommendation system. My audience has no technical background. Explain the core idea in plain English in 3–4 sentences — no jargon, just the business logic.”
The fix

Treat every prompt like a briefing to a smart new colleague on their first day. State your situation, your goal, your audience, and any constraints on format or length. Do not assume any of it is already known.
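For readers who draft prompts in a script rather than a chat window, the briefing habit can be made mechanical. This is a minimal sketch, not any library's API — the function name and fields are my own, chosen to mirror the four briefing elements above:

```python
# Illustrative helper: assemble the four briefing elements
# (situation, goal, audience, constraints) into one explicit prompt.

def build_prompt(situation: str, goal: str, audience: str,
                 constraints: str, task: str) -> str:
    """Combine the briefing elements into a single, explicit prompt."""
    return (
        f"Situation: {situation}\n"
        f"Goal: {goal}\n"
        f"Audience: {audience}\n"
        f"Constraints: {constraints}\n\n"
        f"Task: {task}"
    )

prompt = build_prompt(
    situation="I am a business student preparing a 5-minute presentation.",
    goal="Explain Spotify's recommendation system in plain English.",
    audience="Classmates with no technical background.",
    constraints="3-4 sentences, no jargon, focus on the business logic.",
    task="Explain the core idea behind Spotify's recommendations.",
)
print(prompt)
```

Writing the briefing as named fields forces you to notice when one of them is blank — which is exactly the curse-of-knowledge failure this section describes.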


2. Intent

“The model will guess what you want”

Philosophers and linguists have long argued that all communication involves inference — listeners do not just decode words, they try to figure out what the speaker really means.¹ AI models do the same. A vague prompt produces the model’s best guess at your intent, and that guess is usually the most generic possible interpretation.

✕ Too vague
“Write something about Spotify Wrapped for my presentation.”
✓ Much better
“Write a 3-sentence talking point for a slide. I want my classmates to understand that Wrapped is not a separate product — it runs on the exact same data as Spotify’s recommendations, which is why it costs almost nothing extra to produce.”
The fix

State your goal explicitly. Do not make the model infer what you are trying to accomplish — tell it directly.


3. Audience

“The model picks a default reader”

If you do not specify an audience, the model defaults to a general reader. That default is rarely the right one. Theory of mind requires adjusting what you say based on what you believe the listener knows and needs — and the same principle applies to every prompt you write.

✕ Too vague
“Explain how Spotify’s data pipeline works.”
✓ Much better
“Explain how Spotify uses listening data to improve recommendations over time. Write for business students with no CS background — use a real-world analogy instead of technical terms.”
The fix

Name your audience and describe what they know and do not know. The more specific you are, the more precisely the model can calibrate its response.


4. Perspective-Taking

“Read your prompt as the model would”

This is the core theory of mind skill: simulating another’s point of view before acting. Before you send a prompt, read it as if you are the model receiving it cold — no prior context, no knowledge of your project, no idea what “good” looks like to you. This one habit catches most prompt failures before they happen.

✕ Too vague
“Make it less technical.”
✓ Much better
“The following slide script is too technical for my business school audience. Rewrite it in plain English, cut it to under 60 words, and focus on why it matters for Spotify as a business: [paste text]”
The fix

Before hitting send, ask — if I received this prompt cold, what would I produce? If the answer surprises you, rewrite the prompt.


5. Iteration & Meta-Prompting

“Prompting is a conversation, not a command”

Good prompting works the same way good collaboration does — through back-and-forth and progressive refinement. Your first prompt is a starting point. Most people stop after one exchange. The best outputs usually come after three.

First prompt
“Explain why Spotify needs so many users for its recommendation system to work.”
Follow-up
“Good — now make it more concrete. Add a specific example of what happens when a brand new user with no listening history opens the app for the first time.”
Follow-up
“Perfect. Now cut it to three sentences and end with a question that would make a business student stop and think.”
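If you move from a chat window to code, the same three-turn refinement becomes a growing list of messages. The structure below is the generic role/content chat format most chat APIs accept; no specific provider is assumed, and the assistant replies are left as placeholders:

```python
# Sketch: the three-turn refinement above expressed as a chat history.
# Each follow-up is appended to the list, so the model sees the full
# conversation — prompting as a conversation, not a command.

messages = [
    {"role": "user", "content": "Explain why Spotify needs so many users "
                                "for its recommendation system to work."},
    {"role": "assistant", "content": "..."},  # model's first draft goes here
    {"role": "user", "content": "Good — now make it more concrete. Add a "
                                "specific example of a brand new user with "
                                "no listening history."},
    {"role": "assistant", "content": "..."},  # refined draft goes here
    {"role": "user", "content": "Perfect. Now cut it to three sentences and "
                                "end with a question that would make a "
                                "business student stop and think."},
]
```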

This is also where metacognition comes in. Instead of just prompting, you can step back and ask the model to help you design a better prompt — using its own knowledge of what makes a good input to improve yours. Researchers call this meta-prompting, and it is one of the most underused techniques available.

Try this before you start

“I need to create a presentation on how Spotify’s recommendation system works for a business school audience with no technical background. Before I ask you to help me write it — what information would you need from me to give me the best possible output?”

The fix

Treat your first prompt as a starting point, not a final answer. And before you start, try asking the model what it needs from you first — that single move often produces better results than any prompt you could write alone.


6. Verification

“Confident doesn’t mean correct”

Psychologists studying human communication describe a concept called epistemic vigilance — the tendency to actively evaluate whether information you receive is actually reliable.² Most people apply this instinctively in conversation. Few apply it to AI output, because the writing is polished and the tone is confident. But confidence is not accuracy. A hallucinated fact looks identical to a correct one.

✕ Risky
Copy the output onto your slide. It says Spotify has 602 million users. Sounds right. Move on.
✓ Much better
“Give me three key stats about Spotify’s scale. For each one, tell me how confident you are and where I should go to verify it.”
The fix

Before you use any specific number, date, company claim, or research finding from an AI output, verify it independently. If you would cite it in a paper, confirm it from a real source first.


The Checklist

Before you send your next prompt

1. Context — What does the model need to know about my situation that it could not possibly know on its own?
2. Intent — Have I stated my actual goal, not just the surface task?
3. Audience — Who is this output for, and what do they already know?
4. Perspective — If I received this prompt cold, what would I produce?
5. Iteration — Have I tried asking the model what it needs from me first?
6. Verification — What claims in this output do I need to check before I use them?
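If you script your prompts, the checklist can double as a quick automated sanity check before you send anything. This is only a sketch — the field names are my own, and "addressed" here just means you wrote something for that item:

```python
# Illustrative only: the six checklist items as required fields on a
# prompt spec. Returns the items you have not yet filled in.

CHECKLIST = ["context", "intent", "audience",
             "perspective", "iteration", "verification"]

def missing_items(spec: dict) -> list:
    """Return the checklist items not yet addressed in a prompt spec."""
    return [item for item in CHECKLIST if not spec.get(item)]

draft = {
    "context": "Business student, 5-minute talk, non-technical audience",
    "intent": "Explain Spotify's recommendation system in plain English",
}
print(missing_items(draft))  # prints the four items still unanswered
```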

The student who writes the second prompt almost always gets a better result than the one who accepted the first answer. Theory of mind tells you how to model the model. Metacognition tells you how to step back and improve your own strategy. Used together, they turn a frustrating tool into a genuinely useful one.

Notes

¹ This idea is developed in H.P. Grice’s cooperative principle and theory of conversational implicature (1975).

² The concept of epistemic vigilance is developed in Sperber & Mercier (2012).

References: Premack & Woodruff (1978); Grice (1975); Flavell (1979); Sperber & Mercier (2012).