Prompting the Unknown Unknowns
What Donald Rumsfeld Can Teach Us About AI's Impact on Students' Brains
The Empty Page
On February 12, 2002, Donald Rumsfeld stood at a Pentagon briefing and inadvertently described the future of AI literacy. "There are known knowns," he said. "There are things we know we know. We also know there are known unknowns; that is to say, we know there are some things we do not know. But there are also unknown unknowns… the ones we don't know we don't know."
Twenty-three years later, his framework perfectly captures how students navigate ChatGPT.
Jen, a college junior, sits at her desk at 2 AM. The cursor blinks on an empty document. Five hours until her essay on social mobility is due. Her brain feels like wet concrete. She opens ChatGPT in another tab and types: "Write an essay about social mobility in America."
This is her known known: she knows she needs an essay, and she knows ChatGPT can write one.
Three minutes later, she has 500 words. She pastes them into her document, changes a few phrases, adds her name. Done.
Down the hall, Marcus faces the same assignment but operates in the realm of known unknowns. He knows he doesn't fully understand social mobility's mechanisms, so he prompts differently:
"What economic factors should I consider when analyzing social mobility?"
Then:
"How does education affect this?"
Then: "Help me think through a counterargument to education as the primary driver."
He's using ChatGPT to explore what he knows he doesn't know, maintaining cognitive ownership while filling knowledge gaps.
But it's Aisha, an international student from Nigeria, who stumbles into the unknown unknowns: the possibilities she didn't know existed. She discovers through experimentation that ChatGPT can preserve her analytical voice while helping her navigate academic English:
"Can you help me express this idea more clearly in academic English while keeping my argument structure?"
She's found a collaborative mode she didn't know to look for.
A month later, brain scans reveal something Rumsfeld might have appreciated: the students didn't just get different grades; their minds had changed in measurably different ways.
The real unknown unknown wasn't what ChatGPT could do, but what it was doing to the cognitive architecture of the people who used it.
Act I: The Adoption Curve
The numbers tell a story of unprecedented technological adoption. Surveys from late 2024 suggest that a significant majority of students report using AI tools in some capacity for their academic work. ChatGPT's growth trajectory has outpaced nearly every consumer technology in history. In university surveys, students describe it as serving multiple roles as tutor, editor, and brainstorming partner.
The technology appeared to solve ancient academic challenges. Writing anxiety decreased. Starting assignments became easier. The dreaded blank page seemed conquered. Performance Expectancy scores, which measure how useful students believe a technology to be, remained consistently high throughout the academic year.
Yet beneath this widespread adoption, a puzzle emerged. Academic integrity services reported detecting AI-generated content in a relatively small percentage of submitted work, despite surveys indicating near-universal usage. If most students were using ChatGPT but only a fraction of papers showed clear AI authorship, what exactly were students doing with these tools?
The answer would emerge from an unexpected source: neuroscience laboratories.
The MIT Brain Study: Observing Thought in Action
Dr. Nataliya Kosmyna's team at MIT Media Lab attempted something unprecedented: monitoring brain activity while students engaged in real-time writing tasks. The study involved 54 participants divided into three groups: unassisted writing, Google-assisted research, and ChatGPT-assisted composition.
Using 32-channel EEG arrays, researchers tracked neural connectivity patterns, alpha waves associated with creative thinking, and theta rhythms linked to memory formation.
The initial findings were striking.
The ChatGPT group showed markedly different neural patterns from both unassisted writers and those using traditional search engines. Neural connectivity decreased substantially, and alpha and theta wave activity dropped significantly.
The researchers had discovered what Rumsfeld might have called the ultimate unknown unknown. Students thought they were getting smarter essays, but their brains were getting weaker at generating original thought.
The neural connectivity data revealed what no one had thought to look for.
But the most surprising finding came during the recall test. Twenty minutes after completing their essays, participants were asked to reproduce a single line from their work. The unassisted writers and Google users showed high recall rates. ChatGPT users struggled significantly, with most unable to remember content they had ostensibly "written" just minutes before.
One participant's response became emblematic of the phenomenon. When shown her essay, she studied it with genuine puzzlement. "This doesn't feel like something I wrote," she said.
From a neurological perspective, her confusion made perfect sense.
The Homogenization Question
As researchers dove deeper into the content students produced, they noticed an unexpected pattern.
When asked philosophical questions like "What makes us truly happy?", ChatGPT-assisted essays converged on remarkably similar themes. Career success, personal relationships, and financial security appeared with predictable regularity.
Essays written without AI assistance showed far greater variability, including unexpected perspectives on loss, failure, and meaning that rarely appeared in AI-assisted work.
Cultural homogenization emerged as another concern. Researchers at Cornell documented how international students' writing about cultural traditions became increasingly generic when filtered through AI assistance. A student writing about Diwali might begin with rich sensory details about "the smell of ghee lamps and the sound of firecrackers until dawn" but receive AI suggestions that smoothed these into "a time of joy and family gathering," subtly erasing cultural specificity.
Yet this homogenization wasn't uniform.
Students who engaged in what researchers termed "dialogical prompting" maintained more of their individual voice.
The key seemed to be whether students were delegating their thinking or using AI to refine thoughts they'd already developed.
Act II: The Complexity Emerges
By spring 2025, longitudinal studies were revealing a more complex picture than initially suspected.
Research from the University of Bremen found that regular ChatGPT users scored lower on final exams, but with important caveats[11]. The effect was most pronounced among students who relied on what researchers called "single-shot prompting": copying and pasting assignment questions verbatim.
However, a subset of students showed different outcomes entirely. Analysis of ChatGPT conversation logs revealed three distinct usage patterns.
Single Copy-Paste users, who simply transferred assignment prompts directly to ChatGPT, showed the poorest outcomes.
Single Reformulated Prompt users, who at least rephrased questions in their own words, demonstrated marginally better results.
But Multiple-Question Prompting users, who engaged in extended dialogues with the AI, showed remarkably different outcomes.
These students were doing something fundamentally different. Rather than asking "solve this problem," they were engaging in what looked like Socratic dialogue. One engineering student's log showed the progression: first asking about the core challenge in the problem, then requesting an explanation of different approaches, then challenging the AI's suggestion with a potential failure point, and finally asking for verification of just the initialization step before attempting the rest independently.
This student was using ChatGPT as a thinking partner rather than a thinking replacement. And crucially, her exam performance and retention improved rather than declined.
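Mechanically, that kind of session is simple to picture. Here is a minimal sketch of a multi-question dialogue scripted against the openai Python client; the model name, system prompt, and questions are illustrative stand-ins, not material from the student's log.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The student's questions narrow step by step: challenge -> approaches ->
# counterargument -> verification of one step only. The student stays in
# charge of what gets asked next.
turns = [
    "What is the core challenge in this heat-transfer problem?",
    "Explain two different solution approaches and their trade-offs.",
    "I think the second approach breaks down at the boundary. Am I right, and why?",
    "Check only my initialization step before I attempt the rest myself: T0 = 293.15",
]

messages = [{"role": "system",
             "content": "Act as a tutor. Probe and explain; never produce a full solution."}]

for turn in turns:
    messages.append({"role": "user", "content": turn})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"\n>>> {turn}\n{answer}")
```

The loop itself is trivial; the discipline lives in the question list, which the student, not the model, writes.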
The Cognitive Spectrum
The emerging picture was more nuanced than a simple binary. Students weren't just "using" or "not using" AI; they were distributed along a spectrum of cognitive engagement.
The spectrum of AI engagement maps neatly onto Rumsfeld's taxonomy.
On one end were students operating with "known knowns," fully aware they were outsourcing their thinking and accepting the trade-offs. In the middle were those grappling with "known unknowns," aware they needed better strategies but unsure what those were. And scattered throughout were the "unknown unknowns": students who didn't realize they were accumulating cognitive debt until they faced an in-class exam.
At the other end were students maintaining cognitive agency while leveraging AI capabilities. They showed preserved or even enhanced neural activity in certain regions, maintained memory function, and improved performance over time. These students were learning to be cognitive directors rather than passive recipients.
Between these extremes lay the majority of students, oscillating between delegation and direction depending on factors like time pressure, assignment type, and their understanding of effective AI interaction.
The most important takeaway is that movement along this spectrum is possible in both directions.
The Intervention Studies
The most hopeful findings came from intervention research. When students who had been heavily reliant on ChatGPT were taught specific prompting strategies, their outcomes changed dramatically.
A controlled study at TU Delft introduced a "cognitive scaffolding" approach.
Students learned to follow a specific sequence: first, generate initial ideas without AI; second, develop a rough structure independently; third, engage ChatGPT for specific challenges or refinements; fourth, critically evaluate AI suggestions; and fifth, synthesize their own conclusion.
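To make the ordering concrete, here is a hypothetical sketch of how a writing tool might enforce that sequence by refusing model access until the human-only stages are done. The class names and the gating rule are invented for illustration; this is not the TU Delft implementation.

```python
from enum import Enum, auto

class Stage(Enum):
    IDEATE = auto()      # 1. generate initial ideas without AI
    STRUCTURE = auto()   # 2. develop a rough structure independently
    AI_ASSIST = auto()   # 3. engage ChatGPT for specific challenges
    EVALUATE = auto()    # 4. critically evaluate AI suggestions
    SYNTHESIZE = auto()  # 5. write the conclusion yourself

class ScaffoldedSession:
    """Gates AI access behind the two human-only stages."""

    def __init__(self):
        self.stage = Stage.IDEATE
        self.ideas: list[str] = []
        self.outline: list[str] = []

    def log_idea(self, idea: str) -> None:
        self.ideas.append(idea)

    def log_outline(self, section: str) -> None:
        self.outline.append(section)
        self.stage = Stage.STRUCTURE

    def request_ai_help(self, prompt: str) -> str:
        # The gate: no model call until ideation and outlining exist.
        if not self.ideas or not self.outline:
            raise PermissionError("Finish ideation and outlining before engaging the AI.")
        self.stage = Stage.AI_ASSIST
        return f"[model call would go here for: {prompt!r}]"
```

The gate is the point: by the time the model sees a prompt, the student has already done the thinking the prompt is about.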
Students following this protocol showed markedly different brain activity patterns. Rather than the diminished connectivity seen in passive users, these students maintained or even enhanced neural engagement. The AI became what researchers called a "cognitive amplifier" rather than a replacement.
Singapore's educational system took these findings further, integrating "AI collaboration literacy" into their national curriculum. Students as young as 14 learned to distinguish between appropriate and inappropriate AI use, craft prompts that maintained cognitive engagement, and use AI to challenge rather than replace their thinking.
The curriculum emphasized metacognition. Students maintained logs of their thinking process, explicitly marking which ideas were originally theirs, which emerged from AI interaction, and which represented synthesis.
This "cognitive audit trail" helped students remain aware of their own intellectual contribution.
Act III: The Path Forward
The MIT team's final experiment provided the most intriguing insight. They switched their experimental groups mid-study. Students who had been writing unassisted gained access to ChatGPT, while those dependent on ChatGPT had to write without it.
The results revealed an asymmetry. ChatGPT-dependent students struggled significantly when forced to write unassisted. Their brains had adapted to external scaffolding, making independent ideation difficult.
However, students who gained ChatGPT access after developing unassisted writing skills showed enhanced performance. Their neural activity actually increased, suggesting that prior cognitive development created capacity for productive AI collaboration.
This "cognitive foundation" principle suggests a pedagogical approach. Students need to develop core thinking skills before integrating AI assistance. It's analogous to learning arithmetic before using calculators, or understanding grammar before relying on spell-check.
Beyond the Binary
The narrative of AI as either cognitive savior or destroyer misses the nuanced reality.
Students like Aisha, our international student, remind us that AI's impact varies with context. For her, ChatGPT served as a linguistic bridge, helping express complex thoughts while maintaining intellectual ownership.
For students with learning differences, AI might provide necessary scaffolding without diminishing cognitive engagement.
Professional contexts offer additional perspective. Journalists using AI for initial research but crafting their own narratives, programmers leveraging code generation while maintaining architectural thinking, and researchers using AI for literature synthesis while developing original hypotheses all demonstrate productive human-AI collaboration.
The key distinction isn't whether AI is used, but how cognitive labor is distributed.
When AI handles routine aspects while humans maintain creative and critical thinking, both capabilities can be enhanced. When AI replaces core cognitive functions, atrophy follows.
Practical Implications
For educators, these findings suggest specific strategies. Rather than focusing on AI detection or prohibition, successful approaches emphasize teaching cognitive direction.
This includes requiring initial human thought before AI assistance, having students submit their AI conversations alongside assignments, making thinking processes visible through documentation, and assessing iterative refinement rather than just final products.
For students, the implications are equally clear. Learning effective prompting isn't just a technical skill; it's a cognitive survival strategy.
This means understanding when AI helps versus hinders, developing the ability to decompose complex problems into appropriate queries, maintaining critical evaluation of AI outputs, and preserving space for independent thought.
For institutions, the challenge involves systemic change. This includes integrating prompt literacy into core curricula, developing assessment methods that value process alongside product, training faculty in AI-aware pedagogy, and creating policies that encourage thoughtful rather than prohibitive approaches.
The Divide
Jen, from our opening scene, is now a senior. She struggles with in-class writing assignments, her mind untrained in independent ideation. Marcus has developed what he calls "AI-augmented thinking," using ChatGPT to pressure-test ideas he generates independently. Aisha has learned to maintain her unique perspective while using AI to navigate linguistic challenges.
Three students, three relationships with the same technology, three different cognitive futures.
The evidence suggests ChatGPT's impact on cognition isn't technologically determined but literacy-dependent.
The divide forming isn't between those with access and those without, but between those who understand cognitive collaboration and those who default to cognitive delegation.
Unlike many educational divides, this one is bridgeable through knowledge. Every student can learn to maintain agency while leveraging AI capabilities. Every mind can find its place on the spectrum between human creativity and machine efficiency.
But perhaps Rumsfeld missed a category, one that only makes sense in the age of AI. Call them the "unknown knowns," the things we once knew how to do that we're unknowingly forgetting.
Jen used to know how to stare at a blank page until an idea emerged. Marcus once knew the satisfaction of puzzling through a complex argument alone. Aisha knew how to wrestle with language until it surrendered to her meaning.
These capabilities—ideation, struggle, cognitive persistence—are becoming unknown knowns. Skills atrophying so gradually we don't notice their absence until we're asked to write without our digital prosthetic.
The real tragedy isn't that students are using ChatGPT. It's that they're losing capacities they don't know they're losing, creating a generation that might never discover what their unassisted minds were capable of. The ultimate unknown unknown.
Twenty years from now, someone will stand at a podium and describe the cognitive landscape we're creating today. They might say there were students who knew they were outsourcing their thinking. There were students who knew they needed better AI strategies.
But there were also students who never knew what their own minds could have become.
The cursor still blinks. The question isn't whether students will use ChatGPT. They already are. The question is whether they'll use it in a way that enhances their uniquely human capacities or allows those capacities to atrophy through disuse.
The choice, unlike Rumsfeld's unknowns, is refreshingly known.
We can teach students to be cognitive directors, or we can watch them become cognitive dependents.
The difference lies not in the tool but in the wisdom of its use.