A recent MIT research paper entitled “Your Brain on ChatGPT” describes potential mental atrophy from using LLMs. LLMs certainly have their place at work, but there may be some longer-term risks.
MIT researchers recently published, on the arXiv preprint server, the research paper Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. It’s been getting a lot of press lately and has implications for how we use LLMs at work.
Let’s first understand the study. “This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools).” Basically, people had different amounts of help writing essays. The abstract notes, “Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. . . Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels.” In other words, extensive LLM users make brain be less writing good [sic].
Let’s also understand what it’s not. It’s about writing essays; participants had 20 minutes to write an essay (using SAT essay prompts) while hooked up to an EEG (electroencephalogram). The objective was to study implications in education, not at work. It’s also not yet (at the time of this article) peer reviewed. And while 54 people participated in the study, only 18 did the 4th session (in which they switched approaches: those who had been using an LLM wrote without one, and vice versa).
Some of the key findings are as follows.
“These results suggest that AI assistance in writing may free up cognitive resources (reducing memory load) and allow the brain to reallocate effort toward executive functions, whereas traditional Search Engine-based writing engages the brain's integrative and memory systems more strongly. This dichotomy reflects two distinct cognitive modes: externally scaffolded automation versus internally managed curation.” In other words, AI and search each do some cognitive processing for us. If you need to write something basic and want to free your mind for other executive-function tasks, LLMs may be the right way to go.
“This resonates with findings that frequent AI tool users often bypass deeper engagement with material, leading to ‘skill atrophy’ in tasks like brainstorming and problem-solving [96]. In short, Session 4 participants might not have been leveraging their full cognitive capacity for analytical and generative aspects of writing, potentially because they had grown accustomed to AI support.” This is what people have been worrying about. It’s like reading a book summary instead of the actual book: you might not connect with the material or learn as deeply.
What does this mean for us at work? We don’t learn at work in the same way we do at school, but we do learn. Business schools that use case studies often describe it as learning through practice. LLMs remove certain types of practice. Having Excel do formulas for you atrophies your math abilities, but you rarely need to do statistical analysis by hand, so why not let it do the work even if your raw math abilities wane?
If it’s material you don’t need to know deeply, does it matter? If you’re doing a first-pass review of a dozen vendors, in the end you will likely forget eleven of them. Do you need to know them well, or can an LLM summarize them for you? I’d argue it’s fine not to know that content deeply (and I have personally used LLMs in exactly that way).
On the other hand, we make decisions at work every day, including brainstorming and problem-solving; what happens when those skills atrophy? And when we become disengaged from the material, how does that affect our decision-making?
Perhaps LLMs can brainstorm for us. But the participants using external tools (in this specific experiment) tended to be less original in their thinking. Further, “Such an interpretation aligns with concerns that over-reliance on AI can erode critical thinking and problem-solving skills: users might become good at using the tool but not at performing the task independently to the same standard. Our neurophysiological data provides the initial support for this process, showing concrete changes in brain connectivity that mirror that shift.” In other words, using LLMs as a crutch can diminish those skills; additionally, the results may satisfice but fall short of top-level work.
All my work is my own, with no AI used (although I do run a spelling and grammar check), unless otherwise noted. To date none of my writing uses LLMs, but I do use AI for my article images (and note them as such). I did, however, ask ChatGPT to summarize this article (after I wrote my own analysis). The results are listed at the end of this article (formatting, bolding, and grammar are from ChatGPT).
The reality is that one non-peer-reviewed study can’t tell us much. Even what the research concludes should be taken with a grain of salt, for now. The concerns certainly merit further investigation, and there may be a risk that extended use of LLMs weakens cognitive abilities over time; but how much, and when, is still very much an open question.
ChatGPT’s summary of Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task
1. Reduced Cognitive Engagement
Implication: When you use LLMs to draft content, plan strategy, or write reports, your brain may become less mentally active than if you did the task yourself.
2. Memory and Comprehension Deficits
Implication: Users of AI tools had poorer recall of what they wrote and lower comprehension of their own output.
3. Decreased Sense of Ownership
Implication: Professionals may feel less connected to work created with heavy LLM assistance.
4. Homogenized Communication
Implication: AI tends to produce formulaic or stylistically similar content across users.
5. Cognitive Debt Is Cumulative
Implication: The longer you rely on AI for thinking-heavy tasks, the more it may dull your mental edge, even after stopping.
Mitigation: Use LLMs for low-value or repetitive work, and intentionally tackle high-cognitive-load tasks yourself (e.g., strategy, product thinking, hiring decisions).