
Is AI Making Us Dumber? What the Research Actually Shows

By Amara | Updated 8 May 2026

[Image: a human brain silhouette with neural pathways fading on one side beside an AI interface, illustrating cognitive offloading and mental atrophy from AI dependency.]

Key Numbers

  • 11 points: retention gap at 45 days between AI-assisted and non-AI learners in a preregistered randomised controlled trial (FGV/UFRJ RCT, Social Sciences & Humanities Open, 2025)
  • 70%: share of 319 workers who reported using less cognitive effort for reading comprehension when ChatGPT was available (NYU professor study, TEDxSherbrooke, 2025)
  • 32 regions: brain areas monitored via EEG in the MIT Media Lab study; ChatGPT users showed the lowest neural engagement of all groups (MIT Media Lab, Harvard Gazette, November 2025)
  • 30+: AI and cognition studies reviewed in the most comprehensive meta-analysis on AI and human cognitive performance (Algorithmic Bridge meta-analysis, 2025)

Key Takeaways

  • A 2025 preregistered randomised controlled trial found learners who used ChatGPT scored 58% on a 45-day retention test versus 69% for those who studied without AI, an 11-point gap that did not narrow with prior AI experience (FGV/UFRJ, Social Sciences & Humanities Open, 2025).
  • MIT Media Lab measured brain activity via EEG across 32 regions during essay writing. ChatGPT users had the lowest neural engagement of all groups, and that engagement declined further over repeated sessions (MIT Media Lab, reported in the Harvard Gazette, November 2025).
  • The research consistently splits AI use into two types: structured use, where AI handles formatting and grammar while humans handle reasoning, preserves cognitive skill; unstructured use, where AI reasons for you, produces a performance paradox of better immediate outputs and weaker long-term retention (UTS, March 2026).

The research is in, and it is more specific than the headlines suggest. AI does not make you broadly less intelligent. It makes you worse at specific cognitive tasks you hand over to it: memory retention, critical synthesis, and independent reasoning. That decline shows up weeks later, not immediately.

A 2025 preregistered randomised controlled trial from Fundação Getulio Vargas and UFRJ tracked 120 participants through a learning task and a surprise retention test 45 days later. The group using ChatGPT scored 58%. The group that studied without AI scored 69%. The 11-point gap did not narrow for participants with more prior AI experience. Familiarity with the tool did not close it.

What you will find here is what the peer-reviewed research, the Harvard Gazette, and a meta-analysis of 30+ studies actually show, not the opinion-column takes that dominate this debate. You will also find the answer to a separate question that appears alongside this one in search results: whether AI models themselves are getting worse over time. Different question. Both covered here.

What the research actually says about AI and cognition

The core finding, repeated across multiple studies, is that when AI handles cognitive work, the human brain does less of it. That reduced effort costs you something later. The mechanism is cognitive offloading, and it is not new. What is new is the category of tasks being handed over.

The MIT Media Lab study (2025) divided 54 participants into groups and had them complete essay tasks while brain activity was monitored via EEG across 32 regions. The ChatGPT group showed the lowest neural engagement of all groups, and their brain activity declined over repeated sessions. They were not just producing less engaged work once; they were getting progressively less engaged the more they used the tool, as Psychology Today reported in July 2025.

The FGV/UFRJ randomised controlled trial (2025, Social Sciences & Humanities Open) is methodologically stronger because it is preregistered. One hundred and twenty participants completed a learning task. Forty-five days later, without warning, they were tested on retention. ChatGPT group: 58%. Traditional study group: 69%. The researchers attributed the gap to weaker initial encoding. When AI does the synthesis, the brain processes information less deeply. Prior AI experience did not close the gap.

"Excessive reliance on AI-driven solutions contributes to cognitive atrophy and a measurable shrinking of critical thinking abilities." (MIT Media Lab study, Harvard Gazette, November 2025)

Seventy percent of 319 workers in a separate NYU professor study reported using less cognitive effort for reading comprehension when ChatGPT was available (TEDxSherbrooke, 2025). That figure is not a measure of laziness. It is rational adaptation: if a tool handles the task, most people let it. The cognitive cost arrives later.

A University of Technology Sydney (UTS) report from March 2026 reviewed the literature and named the pattern a "performance paradox." Unstructured AI use improves immediate task outputs. Students and workers produce better-looking work. Long-term learning suffers because the struggle through a task is precisely what builds durable knowledge. Remove the struggle and you weaken the encoding.

The Harvard Gazette's November 2025 coverage of the MIT findings drew a direct comparison to GPS navigation, where regular use measurably weakened spatial memory in ways users did not notice until they tried to navigate without it. Researchers working on AI existential risk, including those discussed in the will AI destroy humanity article, note that cognitive dependency is one of the underexamined near-term risks sitting alongside the longer-term concerns.

Is AI making people dumber?

For most people using AI without structure, yes, in specific measurable ways. Not general intelligence. Specific skills: memory retention, the ability to critically evaluate information, and independent reasoning without AI assistance.

The mechanism is cognitive offloading. Humans have always offloaded cognitive tasks to external tools. Notes, calendars, calculators. The brain does not maintain memory or perform calculations when external systems handle them reliably. This is efficient. What changes with AI is the category of tasks being offloaded. Calculators took arithmetic. AI takes reasoning, synthesis, analysis, and argument construction. These are exactly the tasks that build higher-order thinking when humans perform them.

A student who has AI write their essay skips the cognitive work that would have built their writing ability. The essay looks better. The writing skill does not improve. Over a degree program, that gap compounds.

The UTS March 2026 report distinguished two types of cognitive offloading with meaningfully different outcomes:

Type | What AI handles | What the human handles | Cognitive outcome
Structured (beneficial) | Grammar, formatting, fact-checking | Reasoning, argument, synthesis | Efficiency gained, skill preserved
Unstructured (harmful) | Reasoning, synthesis, argument | Copy-pasting the output | Performance paradox: better output, weaker skill

The most specific finding from the UTS research was something called "illusion of competence." Participants in unstructured AI use groups rated their own understanding as high because the AI output was fluent. They believed they had grasped the material. They could not reproduce the reasoning without AI assistance.

That gap between perceived and actual competence is the specific risk the research keeps returning to. It is harder to detect than a wrong answer, and harder to correct. According to the PMC cognitive offloading study (2025), AI reduces what cognitive load researchers call germane load, the mental effort of working through a problem deeply, and that reduction is the mechanism connecting AI use to long-term skill decline.

Is technology making us dumber? What history actually shows

The calculator debate is older than ChatGPT by 50 years. When electronic calculators entered classrooms in the 1970s, educators warned that students would lose arithmetic skills. They were right about arithmetic and largely wrong about broader reasoning. Mental arithmetic declined. Mathematical thinking did not. The tools shifted which skills mattered.

GPS navigation is the most direct analogy for the AI debate. Regular GPS users develop measurably weaker spatial memory than those who navigate without assistance. A 2020 study by Javadi et al. found that hippocampal activation during navigation was lower in GPS-dependent drivers than in active navigators. The Harvard Gazette made the comparison explicit in November 2025: heavy AI use risks "dulling minds the way GPS dulled our sense of direction."

Two things from the technology history are worth holding onto. First, cognitive offloading has always shifted which skills atrophied and which grew. Writing weakened oral memory but created entirely new modes of extended reasoning. Second, the transition costs were real even when long-term outcomes were neutral. People who grew up with calculators but never learned to estimate are worse at spotting computational errors than those who learned arithmetic first.

The AI debate follows the same pattern, with one difference: scope. Calculators took one task. GPS took one task. AI can take reasoning, judgment, synthesis, communication, planning, and creative production at the same time. The range of skills potentially affected is wider by a meaningful degree.

"AI risks turning knowledge work into shallow prompting, bypassing the deep work that actually builds expertise." (Cal Newport, The New Yorker, 2024)

The number most guides don't show

The FGV/UFRJ RCT showed an 11-point retention gap after one learning session, tested 45 days later. A student using AI for the majority of their coursework across a four-year degree accumulates retention deficits across dozens of subject areas. The researchers found the gap did not narrow with prior AI experience. The brain does not adapt and compensate over time. Familiarity with the tool does not solve the problem.

With 70% of 319 workers reporting reduced cognitive effort for reading comprehension when AI is available (NYU professor study), and a US knowledge workforce of approximately 65 million people, roughly 45 million workers may be operating in a reduced-engagement cognitive mode for reading tasks on any given workday. That number is a straightforward extrapolation from the research figures; it does not appear in the literature itself.
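As a sketch of that extrapolation only: the calculation below assumes the 70% rate from the 319-worker sample generalises to the whole US knowledge workforce, which the study itself does not claim, and the 65 million workforce figure is the approximation used above.

```python
# Back-of-envelope extrapolation, not a figure from the literature.
# Assumes the 70% rate from a 319-worker sample generalises to the
# whole US knowledge workforce, which the study does not claim.
knowledge_workforce = 65_000_000   # approximate US knowledge workers
reduced_effort_rate = 0.70         # share reporting less cognitive effort

affected = knowledge_workforce * reduced_effort_rate
print(f"~{affected / 1_000_000:.1f} million workers")   # ~45.5 million, i.e. the "roughly 45 million" above
```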

Is AI itself getting dumber? The model quality question

Some searches for "is AI getting dumber" are not about human cognition at all. They ask whether AI models like ChatGPT are becoming less accurate or capable over time. That is a different question with a different answer.

There is evidence of capability drift in specific model versions, but no general decline across the field. Researchers at Stanford and UC Berkeley published a 2023 study tracking GPT-4 and GPT-3.5-turbo performance on specific tasks over several months. GPT-4's accuracy on identifying prime numbers dropped from 84% in March 2023 to 51% in June 2023. OpenAI attributed the changes to ongoing safety fine-tuning that altered model behaviour on certain task types.

The Epoch AI benchmark tracking project (2023-2025) has recorded 2-5% drops on MMLU scores for specific model versions following update cycles. These are not model collapses. They are fine-tuning side effects on particular task categories.

The broader picture is different. GPT-5.2, Claude Sonnet 4.6, and Gemini 2.5 Pro all outperform their predecessors on most standard benchmarks. The overall capability frontier is rising. Individual model versions can drift after specific updates. The field as a whole is not declining.

Model generation | ARC-AGI score | MMLU score | Notes
GPT-3.5-turbo | ~15-20% | ~70% | Baseline for comparison
GPT-4 (March 2023) | ~34% | ~86% | Stanford/UCB tracked subsequent drift
GPT-4 (June 2023) | ~34% | ~84% | Minor MMLU drift after safety fine-tuning
o3 (late 2024) | ~85% | ~90%+ | Current frontier performance

For users who suspect a specific AI tool has gotten worse, the most reliable test is to benchmark it on a task they completed themselves six months ago and compare outputs directly. Subjective impressions of AI degradation are common and are often correct about a specific model version at a specific point in time. For a full breakdown of where current frontier models sit relative to the AGI threshold, see what is AGI explained.
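A minimal sketch of that before-and-after check, assuming you kept the prompts and the answers you accepted at the time; `ask_model` and the baseline entries are hypothetical placeholders, not part of any cited study or real API.

```python
# Compare a model's current answers against answers you accepted ~6 months ago.
# The baseline list is illustrative; in practice you would load your own saved tasks.
baseline = [
    {"prompt": "Summarise the attached abstract in one sentence.",
     "old_answer": "The study found an 11-point retention gap at 45 days."},
]

def ask_model(prompt: str) -> str:
    # Hypothetical stub: replace with a real call to whichever AI tool you are testing.
    return "stub answer"

for task in baseline:
    new_answer = ask_model(task["prompt"])
    status = "unchanged" if new_answer.strip() == task["old_answer"].strip() else "differs"
    print(f"{status}: {task['prompt']}")
```

Exact string comparison is crude for free text; the point of the harness is simply to anchor the judgement in a fixed set of tasks rather than in memory of how the tool used to behave.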

How to stay cognitively sharp while using AI

AI use type matters more than AI use frequency. The UTS March 2026 report identified habits that preserve cognitive function in regular AI users, and the FGV/UFRJ research team added another based on their retention findings.

Draft first, then use AI. Producing your own attempt before consulting AI forces the initial encoding that builds durable knowledge. Even a rough draft engages the reasoning circuits that passive AI consumption bypasses. The FGV/UFRJ researchers found that participants who formed an initial memory trace before using AI retained more at 45 days.

Use AI for extraneous load, not germane load. Extraneous cognitive load is the mental overhead of formatting, grammar, and surface presentation. Germane load is the effort of working through the actual argument or problem. Offload the first. Keep the second.

Verify outputs verbally. After using AI, close the window and try to explain the output in your own words without looking. If you cannot, you have not processed the information deeply enough to retain it. This test reveals the illusion of competence before it creates a problem.

Apply retrieval practice. Review AI-generated material with time gaps. Test yourself on the content 24 hours later without the AI. This works for the same reason flashcards work: spaced retrieval strengthens memory traces that passive reading does not.
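One simple way to put those time gaps on the calendar, as a sketch: the expanding intervals below are illustrative choices, not gaps prescribed by the research cited here.

```python
from datetime import date, timedelta

# Illustrative expanding review intervals, in days after the initial study session.
# The research above recommends spacing retrieval; it does not prescribe these gaps.
intervals = [1, 3, 7, 21]

studied_on = date.today()
for i, days in enumerate(intervals, start=1):
    review_day = studied_on + timedelta(days=days)
    print(f"Review {i} (without the AI open): {review_day.isoformat()}")
```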

Calibrate to task type. Use AI where the output is the point (a boilerplate email, a format conversion, a grammar check). Avoid AI where the process is the point (understanding a concept, developing an argument, making a judgment call). The distinction is easy to lose in a workflow built for speed.

For readers thinking about which jobs are most exposed to AI task displacement versus which skills remain durable, the jobs AI cannot replace article covers automation risk scores for 12 career categories. The roles with the lowest automation risk share a property relevant here: they require the exact kind of embodied, relational judgment that suffers most from cognitive offloading.

Frequently Asked Questions

Does AI make you dumber?

It depends on how you use it. Research shows two distinct patterns.

Unstructured AI use, where AI reasons and synthesises for you, correlates with weaker long-term memory retention and reduced critical thinking. The FGV/UFRJ preregistered RCT (2025) found a 45-day retention gap of 11 percentage points between AI-assisted and non-AI learners. The MIT Media Lab study (2025) found declining brain activity in ChatGPT users over repeated sessions.

Structured AI use, where AI handles grammar and formatting while you handle the reasoning, does not show the same cognitive costs, according to the UTS March 2026 report.

The problem is not AI. It is offloading the specific mental work that builds the skill. If AI writes for you, you produce better output and weaker skills. If AI corrects your grammar, you produce better output and your skills are unchanged.

Is AI making people dumber?

For people using AI in an unstructured way, the evidence shows specific and measurable cognitive costs. The MIT Media Lab study (2025) found ChatGPT users had the lowest brain engagement of all groups across 32 EEG-monitored brain regions, and that engagement declined with repeated use. Seventy percent of workers in a separate NYU professor study reported using less cognitive effort for reading comprehension when AI was available.

The UTS March 2026 report named the pattern a "performance paradox": AI use improves immediate outputs but reduces the skill-building that happens through effort.

This does not mean intelligence is being lost broadly. It means specific skills, particularly memory retention and independent reasoning, decline when AI handles those tasks regularly without structure.

Is AI getting dumber?

This question has two interpretations. If you are asking whether AI is making humans cognitively worse over time, the answer is covered elsewhere in this article. If you are asking whether AI models themselves are becoming less capable, the answer is: individual model versions can drift, but the overall field is not declining.

Stanford and UC Berkeley tracked GPT-4 performance from March to June 2023 and found its accuracy on identifying prime numbers dropped from 84% to 51% after safety fine-tuning updates. The Epoch AI benchmark project has recorded 2-5% MMLU drops for specific model versions after update cycles.

But newer frontier models consistently outperform older ones. GPT-5.2, Claude Sonnet 4.6, and Gemini 2.5 Pro score materially higher than their predecessors on standard benchmarks. Specific versions drift. The field does not.

Is AI making us stupid?

The research does not support a claim that AI is making humans broadly stupid. It supports a narrower finding: unstructured reliance on AI for tasks that require reasoning, synthesis, and memory encoding leads to measurable declines in those specific skills over time.

The MIT Media Lab study (2025) found declining brain activity in ChatGPT essay users. The FGV/UFRJ RCT showed an 11-point retention gap at 45 days. Both findings are task-specific. Participants who used AI in structured ways, for grammar and formatting rather than for reasoning, did not show the same effects.

The honest summary: AI makes you worse at specific cognitive tasks when it handles those tasks for you. That is cognitive atrophy in specific areas, not a general intelligence decline.

Will AI make us dumber?

Current evidence suggests AI use at scale, without structure, will continue to erode specific cognitive skills: memory retention, critical reading, and independent reasoning.

The UTS March 2026 report warned that unstructured AI use in schools creates "cognitive atrophy" as a systemic risk, not just an individual one. The 11-point retention gap in the FGV/UFRJ RCT is notable partly because it did not narrow with prior AI experience, meaning individuals do not self-correct over time.

Whether this translates into broad population-level cognitive decline depends on whether AI use habits become more structured. The research gives a clear signal about what to avoid. Whether that signal reaches enough people before AI use becomes the unexamined default is a policy and education question, not a technical one.

Will AI make people dumber?

For people who use AI for the cognitive work that builds their core skills, the research says yes in specific ways. Memory retention, critical thinking, and independent reasoning all decline when AI handles those tasks regularly.

A meta-analysis of 30+ AI and cognition studies (Algorithmic Bridge, 2025) found a consistent pattern: AI helps with immediate task performance and harms durable skill development when it replaces the effort of thinking rather than reducing the friction around it.

The practical framing: if AI writes for you, you get a better output and weaker writing skills. If AI corrects your grammar, you get a better output and your skills are unchanged. The distinction between replacing cognitive effort and reducing surface friction is the line between AI that makes you sharper and AI that quietly costs you.

Is technology making us dumber?

Technology consistently shifts which cognitive skills atrophy and which grow, but rarely produces broad intelligence decline. Writing weakened oral memory. GPS weakened spatial memory. Calculators weakened mental arithmetic. In each case, the offloaded skill declined and higher-order reasoning was largely preserved.

AI differs from prior technologies because it offloads reasoning, synthesis, and argument construction directly, not just surface computation or navigation. Prior tools took one cognitive task each. AI can take many simultaneously.

The research on AI specifically shows task-level cognitive decline that prior technology debates did not surface. The GPS parallel is the most instructive: users who relied on GPS lost spatial memory without noticing, and most only realised it when they tried to navigate without assistance.
