Does Using ChatGPT Make You Dumb?! Exploring MIT’s Groundbreaking Research on AI’s Impact on Learning and Cognition

In the rapidly evolving landscape of artificial intelligence, large language models (LLMs) like ChatGPT have become indispensable tools for millions. Their ability to generate coherent, context-aware, and personalized responses has revolutionized how we access information, learn new topics, and even create content. But what if this convenience comes at a cognitive cost? What if, by relying on AI to do the heavy lifting, we are inadvertently dulling our own intellectual capacities?

Matthew Berman, a passionate AI educator and thinker, recently delved into a fascinating research paper from MIT titled Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. This extensive work, spanning over 200 pages, investigates how using ChatGPT and similar AI assistants for essay writing affects our brains, memory, creativity, and sense of ownership over our work. In this article, we unpack the key insights from the research and share Matthew’s thoughtful reflections on what this means for learners, educators, and all of us navigating an AI-augmented future.

🔍 Understanding the Research Questions and Experimental Setup

The MIT study was designed to answer four pivotal questions about the cognitive implications of using LLMs in educational contexts, specifically for essay writing:

  1. Do essays written with an LLM differ significantly from those written using only one’s brain or traditional search engines?
  2. How does brain activity vary between those who write essays using LLMs, search engines, or no tools at all?
  3. What impact does LLM usage have on memory retention of the essay content?
  4. Does using an LLM affect the writer’s perceived ownership of their essay?

To explore these questions, the researchers split participants into three groups:

  • Brain Only: Participants wrote essays relying solely on their own knowledge and cognition, without any external tools.
  • Search Engine: Participants used traditional web search engines like Google to find information to help write their essays.
  • LLM: Participants used OpenAI’s GPT-4o as their sole source of information and assistance.

After the initial essay-writing sessions, a fourth session had some participants switch methods, allowing the researchers to examine whether the cognitive effects of LLM use persisted even after discontinuation.

🧠 How LLMs Change the Way Our Brains Work

One of the most striking findings of the study is how LLM usage alters brain activity during essay writing. Participants who wrote using only their brains showed intense engagement of memory and planning networks. Their brains were actively recalling, integrating, and creatively generating content. This “bottom-up” process involved gathering details and building a coherent narrative from the ground up.

In contrast, those using LLMs demonstrated a “top-down” brain activity pattern. Instead of constructing ideas from scratch, their brains focused on integrating and filtering the AI-generated content. This reduced the intensity and scope of neural communication by approximately 55%, indicating significantly less cognitive effort.

While this might sound like a relief to some, it raises a critical question: if our brains are doing less of the heavy lifting, are we truly learning and internalizing the material? The research suggests that the answer is no, or at least not to the same degree.

📉 Cognitive Load: The Double-Edged Sword of AI Assistance

The concept of cognitive load—the mental effort required to process information—is central to understanding the trade-offs of LLM usage. Traditional web searching imposes a moderate cognitive load. Users must sift through multiple sources, evaluate credibility, and integrate diverse information, which stimulates active thinking.

LLMs, on the other hand, reduce this cognitive load by streamlining information into concise, context-aware responses. LLM users reported roughly 32% lower cognitive load than search engine users, along with reduced frustration and effort. Productivity also rose by about 60%, as users completed tasks faster and engaged for longer periods.

But here’s the catch: this ease of access encourages passive consumption rather than active engagement. The mental processes that build deep understanding and long-term memory—known as germane cognitive load—are diminished. Students using LLMs for scientific inquiries, for example, produced essays with lower quality reasoning compared to those who used traditional search engines, highlighting the importance of active cognitive processing.

📚 The Impact on Memory and Knowledge Retention

One of the most alarming results from the study concerns memory retention. When asked to quote sentences from their own essays without looking, 83% of LLM users struggled or failed outright, and not one produced an accurate quote from their own writing. In stark contrast, participants from the brain-only and search engine groups achieved near-perfect recall and quoting accuracy.

Even more concerning was the persistence of this effect. When LLM users switched to writing essays without AI assistance, their memory and comprehension remained impaired relative to those who started without AI. This suggests that early reliance on LLMs can cause a lasting “cognitive debt,” where knowledge is shallowly encoded due to outsourcing mental effort to AI.

💡 Ownership and Creativity: The Human Element at Risk

Ownership—the feeling that you genuinely wrote the essay—is a subtle but profound aspect of learning and creativity. The study found that while the brain-only and search engine groups almost unanimously felt full ownership of their essays, LLM users’ sense of ownership was fragmented. Half felt full ownership, but many reported partial or no ownership at all.

Teachers grading the essays noticed a pattern as well. Essays generated with AI assistance were often grammatically perfect and well-structured but lacked personal insights or clear, original statements. They described these essays as “soulless.” This observation resonates across AI-generated content, whether in writing, music, or video—there is often a missing human spark that distinguishes truly creative work.

This raises an important point for the future: as AI-generated content becomes ubiquitous, human taste, individuality, and creativity will be the defining factors that set us apart. AI tools are just that—tools to augment human potential, not replace it.

🔄 Web Search vs. LLMs: Different Tools for Different Tasks

The study also highlights the complementary roles of traditional search engines and LLMs. Web search excels when users need to explore multiple sources, verify facts, and gather comprehensive, source-specific data. It encourages active information seeking and critical evaluation.

LLMs shine when users want quick, synthesized, and context-aware explanations or brainstorming assistance. They streamline the retrieval process, making it easier to get to the core answer without sifting through multiple documents.

However, both approaches have their pitfalls. Web search requires domain knowledge and strategic behavior to avoid misinformation, while LLMs risk hallucinations—fabricated or inaccurate information—and often do not provide reliable citations. Even when citations are provided, they may be incorrect or misaligned with the content.

👥 The Social Learning Dimension and Echo Chambers

Another critical insight from the research is the impact of LLM use on social learning. Collaborative discussions, peer interactions, and teacher feedback play a pivotal role in deep learning and memory formation. The convenience of AI-generated answers can reduce the opportunities for such meaningful human-to-human engagement, potentially weakening critical thinking skills and fostering procrastination.

Moreover, both web search and LLMs can exacerbate the echo chamber effect. This phenomenon occurs when algorithms feed users more of the same content they engage with, limiting exposure to diverse perspectives. Just like social media algorithms, LLMs’ conversational nature can reinforce existing biases and opinions, making it harder to break out of intellectual silos.

⚖️ Balancing AI Assistance and Cognitive Engagement

The key takeaway from the MIT study—and Matthew Berman’s reflections—is that the impact of LLMs on cognition depends heavily on how they are used and the user’s underlying competence. Higher competence learners tend to use LLMs as tools for active learning, engaging in iterative exploration and critical oversight.

Lower competence learners may rely on immediate AI-generated responses, bypassing the mental effort necessary for deep understanding. This distinction between using AI as a tool versus outsourcing thinking is crucial.

Interestingly, participants who started writing essays with their own brain and later used LLMs demonstrated better memory retention and metacognitive engagement than those who began with LLM assistance. This suggests a pedagogical strategy: encourage foundational cognitive work before introducing AI tools.

❓ Frequently Asked Questions (FAQ)

Q: Does using ChatGPT make you dumb?

A: Not inherently. ChatGPT and other LLMs reduce cognitive load and can increase productivity by providing quick, synthesized responses. However, overreliance without active engagement can impair memory retention, creativity, and deep understanding.

Q: How does AI assistance affect memory?

A: The study found that users who relied on LLMs struggled to recall or quote their own AI-assisted writing, indicating shallow encoding of information due to outsourcing cognitive effort to AI.

Q: Can AI tools replace traditional learning methods?

A: No. While AI tools are powerful aids, traditional methods involving active research, critical thinking, and social learning remain essential for deep knowledge acquisition and creative expression.

Q: What is the best way to use ChatGPT for learning?

A: Use ChatGPT as an assistive tool rather than a crutch. Start by engaging deeply with the material yourself, then use AI to clarify, brainstorm, or explore alternative perspectives. Always verify AI-generated information with reliable sources.

Q: Are AI-generated essays detectable?

A: Often, yes. Teachers and readers can sometimes detect AI-generated content by its combination of flawless grammar with a lack of personal insight or unique voice. This “soullessness” can be a giveaway.

Q: What is the “echo chamber” effect with AI?

A: AI models and search engines can reinforce users’ existing beliefs by providing more of the same type of content, limiting exposure to diverse views and potentially fostering intellectual silos.

🚀 Looking Ahead: The Future of Human and AI Collaboration

Rather than fearing AI as a force that makes us “dumber,” it is more productive to view it as a catalyst for redefining how we think and learn. The MIT research underscores a shift from generating knowledge from scratch to overseeing and orchestrating AI agents. This shift demands new skills—critical oversight, fact-checking, and reflective thinking—to ensure the accuracy and depth of AI-assisted work.

In this emerging paradigm, human creativity, judgment, and individuality will be more valuable than ever. AI can handle routine cognitive tasks, freeing us to focus on higher-order thinking, emotional intelligence, and innovation. The challenge is to strike a balance that harnesses AI’s strengths without sacrificing cognitive engagement and ownership.

Matthew Berman’s optimistic view resonates here: AI tools are not replacements but amplifiers of human potential. By understanding their limitations and integrating them thoughtfully into our workflows, we can unlock new frontiers of learning and creativity while preserving the uniquely human elements that make knowledge meaningful.

📌 Final Thoughts

The MIT study, as explored by Matthew Berman, offers a timely and nuanced perspective on the cognitive trade-offs of using ChatGPT and similar AI assistants. It challenges us to reflect on our relationship with technology and to cultivate mindful, strategic approaches to AI-assisted learning.

Whether you are a student, educator, or lifelong learner, the key is to remember that AI is a tool—not a substitute for human thought. Engage actively, question deeply, and maintain your intellectual curiosity. Doing so will ensure that AI empowers rather than diminishes your cognitive abilities.

For those interested in diving deeper, Matthew Berman’s detailed analysis and ongoing AI insights can be found on his YouTube channel and newsletter. Embracing AI thoughtfully will be one of the defining challenges—and opportunities—of our generation.
