You open ChatGPT, Claude, Gemini — whichever one you use. You type: “Write me a discussion post about leadership for my management class.” You hit send. What comes back is technically correct, reasonably well-written, and almost completely useless — generic paragraphs aimed at nobody in particular, in a tone that fits no assignment you were actually given.
The problem is not the AI. It is the prompt. And the reason most prompts fail is not that they are too short or too vague — it is that they are written entirely from inside your own head, with no consideration for what the model actually knows, assumes, or needs.
Two concepts from cognitive psychology explain both the problem and the fix.
Theory of mind is the ability to attribute mental states — beliefs, intentions, knowledge, desires — to others, and to understand that those states can differ from your own. When you prompt an AI, theory of mind is exactly the skill required. The model has no idea who you are, what you are working on, or what your audience is. It only knows what you put in front of it.
Metacognition — literally “thinking about thinking” — is the inward-facing counterpart. It is the ability to step back and evaluate your own thought process rather than just executing it. In prompting, metacognition is what kicks in when you stop and ask: am I even approaching this the right way?
Together they expose a specific failure mode called the curse of knowledge. Once you know something, it becomes almost impossible to imagine not knowing it. Your prompt makes complete sense to you — but the model has none of your context. The fix is not a longer prompt. It is one written with the model’s knowledge gaps in mind, not just your own.
Below are six concepts that will change how you prompt, illustrated through a running example: a student preparing a presentation on how Spotify’s recommendation system works.
The model does not know that you are a business student, that your presentation is for a non-technical audience, or that your professor wants you to focus on business implications rather than technology. Every prompt also carries unstated assumptions about format, length, and tone — and the model fills those gaps with its own defaults. Both problems have the same fix: make the invisible visible.
Treat every prompt like a briefing to a smart new colleague on their first day. State your situation, your goal, your audience, and any constraints on format or length. Do not assume any of it is already known.
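Here is what that briefing looks like for the Spotify example, sketched in Python against the OpenAI chat completions API so the vague version and the briefing version sit side by side. The SDK call, the model name, and the exact prompt wording are illustrative assumptions; the same rewrite works word for word when pasted into ChatGPT, Claude, or Gemini.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The prompt most people write: correct, short, and aimed at nobody.
vague_prompt = "Explain how Spotify's recommendation system works."

# The same request written as a briefing: situation, goal, audience, constraints.
briefing_prompt = (
    "I'm a business undergraduate preparing a 10-minute class presentation on "
    "how Spotify's recommendation system works. My audience is non-technical "
    "classmates, and my professor wants the focus on business implications, "
    "not the underlying technology. Give me a five-slide outline with two or "
    "three plain-language talking points per slide and no jargon."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; use whichever model you have access to
    messages=[{"role": "user", "content": briefing_prompt}],
)
print(response.choices[0].message.content)
```

Nothing in the briefing version is clever. It simply states the situation, goal, audience, and constraints that the vague version left in your head.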
Philosophers and linguists have long argued that all communication involves inference: listeners do not just decode words; they work out what the speaker really means.¹ AI models do the same. A vague prompt produces the model’s best guess at your intent, and that guess is usually the most generic possible interpretation.
State your goal explicitly. Do not make the model infer what you are trying to accomplish — tell it directly.
If you do not specify an audience, the model defaults to a general reader. That default is rarely the right one. Theory of mind requires adjusting what you say based on what you believe the listener knows and needs — and the same principle applies to every prompt you write.
Name your audience and describe what they know and do not know. The more specific you are, the more precisely the model can calibrate its response.
This is the core theory-of-mind skill: simulating another’s point of view before acting. Before you send a prompt, read it as if you are the model receiving it cold — no prior context, no knowledge of your project, no idea what “good” looks like to you. This one habit catches most prompt failures before they happen.
Before hitting send, ask yourself: if I received this prompt cold, what would I produce? If the answer surprises you, rewrite the prompt.
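One way to approximate that cold read is to have the model do it for you: before asking for the answer, ask it to list the assumptions it would need to make in order to answer your draft prompt. A minimal sketch, under the same assumptions as the previous example (OpenAI Python SDK, illustrative model name):

```python
from openai import OpenAI

client = OpenAI()

draft_prompt = "Write me a presentation about Spotify's recommendation system."

# Ask for the model's reading of the prompt rather than an answer.
audit_request = (
    "Do not answer the following prompt yet. First, list every assumption you "
    "would have to make about my audience, goal, format, and length in order "
    "to answer it:\n\n" + draft_prompt
)

audit = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[{"role": "user", "content": audit_request}],
)

# Every assumption the model lists is a gap to close in the rewritten prompt.
print(audit.choices[0].message.content)
```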
Good prompting works the same way good collaboration does — through back-and-forth and progressive refinement. Your first prompt is a starting point. Most people stop after one exchange. The best outputs usually come after three.
This is also where metacognition comes in. Instead of just prompting, you can step back and ask the model to help you design a better prompt — using its own knowledge of what makes a good input to improve yours. Researchers call this meta-prompting, and it is one of the most underused techniques available.
Treat your first prompt as a starting point, not a final answer. And before you start, try asking the model what it needs from you first — that single move often produces better results than any prompt you could write alone.
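A minimal sketch of that move, under the same assumptions as the earlier examples (OpenAI Python SDK, illustrative model name): the first turn asks the model what it needs to know, and the second turn answers those questions and asks for the actual output.

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative model name

# Turn 1: meta-prompt. Ask the model what it needs before asking for the work.
messages = [{
    "role": "user",
    "content": (
        "I need to prepare a class presentation on how Spotify's recommendation "
        "system works. Before you write anything, ask me the questions you need "
        "answered to produce a genuinely useful outline."
    ),
}]
first = client.chat.completions.create(model=MODEL, messages=messages)
print(first.choices[0].message.content)  # the model's questions

# Turn 2: answer its questions and ask for the output. The full history is
# resent so the model keeps its own questions in context.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({
    "role": "user",
    "content": (
        "Audience: non-technical business students. Length: 10 minutes. "
        "Focus: business implications, not the algorithms. "
        "Now give me the slide-by-slide outline."
    ),
})
second = client.chat.completions.create(model=MODEL, messages=messages)
print(second.choices[0].message.content)
```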
Psychologists studying human communication describe a concept called epistemic vigilance: the habit of actively evaluating whether the information you receive is reliable.² Most people apply this instinctively in conversation. Few apply it to AI output, because the writing is polished and the tone is confident. But confidence is not accuracy. A hallucinated fact looks identical to a correct one.
Before you use any specific number, date, company claim, or research finding from an AI output, verify it independently. If you would cite it in a paper, confirm it from a real source first.
The student who writes the second prompt almost always gets a better result than the one who accepts the first answer. Theory of mind tells you how to model the model. Metacognition tells you how to step back and improve your own strategy. Used together, they turn a frustrating tool into a genuinely useful one.
Notes
¹ This idea is developed in H.P. Grice’s cooperative principle and theory of conversational implicature (1975).
² The concept of epistemic vigilance is developed in Sperber & Mercier (2012).
References: Premack & Woodruff (1978); Grice (1975); Flavell (1979); Sperber & Mercier (2012).