Cognitive offloading to AI tools narrows brain activity and may limit neural development
The neural and behavioral consequences of LLM-assisted research, writing, and creative work.
In other words, relying on AI tools to solve or structure problems takes away the very work that exercises your brain, and with it the chance to develop and strengthen your own neural pathways.
Several studies already aim to establish the causal effects of generative AI on cognitive effort and task performance through randomized controlled experiments.
The way your brain prepares for solving a problem is significantly different from the way it prepares for asking how to solve that problem. Broadly, the key factor in these papers is that the brain needs wider and deeper neural activity to generate a concept from scratch than to analyze or frame an existing problem so it can infer questions about it.

The focus for this article is set on the following paper:
Effects of generative artificial intelligence on cognitive effort and task performance: study protocol for a randomized controlled experiment among college students.
Youjie Chen1,2, Yingying Wang3, Torsten Wüstenberg4, Rene F Kizilcec1, Yiwen Fan2, Yanfei Li2, Bin Lu5,6, Meng Yuan7, Junlai Zhang2,8, Ziyue Zhang2,9, Pascal Geldsetzer10, Simiao Chen2,11, Till Bärnighausen2
1 Department of Information Science, Cornell University, Ithaca, USA
2 Heidelberg Institute of Global Health (HIGH), Faculty of Medicine and University Hospital, Heidelberg University, Heidelberg, Germany
3 Neuroimaging for Language, Literacy and Learning Laboratory, Department of Special Education and Communication Disorders, University of Nebraska-Lincoln, Lincoln, NE USA
4 Core Facility for Neuroscience of Self-Regulation (CNSR), Heidelberg University, Field of Focus 4 (FoF4), Heidelberg, Germany
(ref, ref)
The study mentions:
However, concerns are raised that the use of generative AI may erode human cognition due to over-reliance. Conversely, others argue that generative AI holds the promise to augment human cognition by automating menial tasks and offering insights that extend one’s cognitive abilities. To better understand the role of generative AI in human cognition, we study how college students use a generative AI tool to support their analytical writing in an educational context.
Any study you read should have a "methods" section, which details the approach taken to run the study as well as the preparations needed to produce the materials used in it.
The methods section conveys:
[...] randomized controlled lab experiment that compares the effects of using generative AI (intervention group) versus not using it (control group) on cognitive effort and writing performance in an analytical writing task designed as a hypothetical writing class assignment for college students. During the experiment, eye-tracking technology will monitor eye movements and pupil dilation. Functional near-infrared spectroscopy (fNIRS) will collect brain hemodynamic responses.
Meaning that the participants' eye movements, pupil dilation, and cortical blood oxygenation are monitored to determine the cognitive functions triggered in both groups.
This trial aims to establish the causal effects of generative AI on cognitive effort and task performance through a randomized controlled experiment. The findings aim to offer insights for policymakers in regulating generative AI and inform the responsible design and use of generative AI tools.
How this study measures cognitive activity
From the study:
Participants’ cognitive effort will be measured using a psychophysiological proxy—i.e., changes in pupil size [35, 36]. Pupil diameter and gaze data will be collected using the Tobii Pro Fusion eye tracker at a sampling rate of 120 Hz. During the preparation stage of the study, the room light will be adjusted so that the illuminance at the participants’ eyes is at a constant value of 320 LUX. Baseline pupil diameters will be recorded during a resting task in the experiment preparation stage that asks the participant to stare at a cross that will appear for 10 s each on the left, center, and right sections of the computer screen. Pupil diameters and gaze data will be recorded throughout the writing process.
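The pupillometry pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' actual analysis code: it assumes pupil diameters arrive as simple numeric series (the real Tobii Pro Fusion stream at 120 Hz is richer), uses a robust median baseline from the resting fixation task, and expresses task-time dilation as percent change from that baseline.

```python
import numpy as np

def baseline_corrected_dilation(baseline_samples, task_samples):
    """Relative pupil dilation: change from the resting-task baseline.

    baseline_samples: pupil diameters (mm) recorded while staring at
        the fixation crosses in the preparation stage
    task_samples: pupil diameters (mm) recorded during writing
    Returns percent change from baseline for each task sample.
    """
    # Median is robust to blink artifacts, encoded here as NaN samples
    baseline = np.nanmedian(np.asarray(baseline_samples, dtype=float))
    return 100.0 * (np.asarray(task_samples, dtype=float) - baseline) / baseline

# Illustrative values only; real data come in at 120 Hz from the eye tracker
baseline = [3.1, 3.0, float("nan"), 3.2]  # NaN = blink sample
task = [3.4, 3.5, 3.3]
print(baseline_corrected_dilation(baseline, task))
```

Larger positive percent changes are the psychophysiological proxy for higher cognitive effort; the constant 320-lux room illuminance matters because pupil size also responds to light, which would otherwise confound the measure.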
The study has several secondary outcomes. First, to identify the neural substrates of cognitive effort during the writing process, we developed an additional psychophysiological proxy, changes in the cortical hemodynamic activity in the frontal lobe of the brain. Specifically, we will examine hemodynamic changes in oxyhemoglobin (HbO). Brain activity will be recorded throughout the writing process using the NIRSport 2 fNIRS device and the Aurora software with a predefined montage (Fig. 2). The montage consists of eight sources, eight detectors, and eight short-distance detectors. The 18 long-distance channels (source-detector distance of 30 mm) and eight short-distance channels (source-detector distance of 8 mm) are located over the prefrontal cortex (PFC) and supplementary motor area (SMA) (Fig. 2). The PFC is often involved in executive function (e.g., cognitive control, cognitive efforts, inhibition) [37, 38]. The SMA is associated with cognitive effort [39, 40]. The sampling rate of the fNIRS is 10.2 Hz. Available fNIRS cap sizes are 54 cm, 56 cm, and 58 cm. The cap size selected will always be rounded down to the nearest available size based on the participant’s head measurement. The cap is placed on the center of the participant’s head based on the Cz point from the 10–20 system.

Design of the fNIRS montage
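The reason the montage pairs short-distance (8 mm) channels with long-distance (30 mm) channels is that short channels mostly pick up superficial scalp and skin hemodynamics, which contaminate the long channels. A common correction (not necessarily the exact one this protocol will use) is to regress the short-channel signal out of each long channel and keep the residual as the cortical HbO component. A minimal sketch of that idea, with synthetic signals:

```python
import numpy as np

def short_channel_regression(long_hbo, short_hbo):
    """Remove superficial (scalp) signal from a long-distance fNIRS channel.

    Fits the long channel as a linear function of the short channel plus
    an intercept (ordinary least squares) and returns the residual, which
    approximates the cortical HbO contribution.
    """
    long_hbo = np.asarray(long_hbo, dtype=float)
    short_hbo = np.asarray(short_hbo, dtype=float)
    # Design matrix: short-channel signal + intercept column
    X = np.column_stack([short_hbo, np.ones_like(short_hbo)])
    beta, *_ = np.linalg.lstsq(X, long_hbo, rcond=None)
    return long_hbo - X @ beta  # residual, orthogonal to the scalp signal

# Synthetic example: a cortical step response buried under scalp oscillation
t = np.linspace(0, 10, 101)          # ~10 s at the protocol's ~10 Hz rate
scalp = np.sin(t)                    # superficial hemodynamics
cortical = 0.5 * (t > 5)             # task-evoked cortical HbO increase
long_channel = cortical + 0.8 * scalp + 0.1
cleaned = short_channel_regression(long_channel, scalp)
```

After regression, the residual is orthogonal to the scalp signal by construction, so the remaining variance can more plausibly be attributed to PFC/SMA activity.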
Third, we will measure participants’ subjective perceptions of the writing task by self-reported survey measures in the post-survey (Table 1). We will measure participants’ subjective perceptions of the two primary outcomes—that is, their self-perceived writing performance and self-perceived cognitive effort. Self-perceived writing performance will be measured with a one-item scale using the same grading rubric described in the instructions for their writing task and used in the scoring tool. Self-perceived cognitive effort will be measured using a one-item scale adapted from the National Aeronautics and Space Administration-task load index (NASA-TLX) [41, 42]. We will also measure participants’ subjective perceptions of several mental health and learning-related outcomes, including stress, challenge, and self-efficacy in writing. Self-perceived stress will be measured using a one-item scale adapted from the Primary Appraisal Secondary Appraisal scale (PASA) [43, 44]. Self-perceived challenge will be measured using a one-item sub-scale adapted from the Primary Appraisal Secondary Appraisal scale (PASA) [43, 44]. Self-efficacy in writing will be measured using a 16-item scale that measures three dimensions of writing self-efficacy: ideation, convention, and self-regulation [45]. Furthermore, we will measure participants’ situational interest in analytical writing using a four-item Likert scale adapted from the situational interest scale [46]. Additionally, we will measure participants’ behavioral intention to use ChatGPT in the future for essay writing tasks [47].
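Scoring instruments like the 16-item writing self-efficacy scale typically means averaging the items belonging to each dimension. The sketch below shows that generic pattern; the item names and the grouping are placeholders for illustration, not the instrument's actual items.

```python
def subscale_means(responses, subscales):
    """Score a multi-item Likert instrument by averaging each subscale.

    responses: dict mapping item name -> rating (e.g. 1-5)
    subscales: dict mapping subscale name -> list of item names
    """
    return {name: sum(responses[item] for item in items) / len(items)
            for name, items in subscales.items()}

# Hypothetical grouping mirroring the three self-efficacy dimensions
subscales = {
    "ideation": ["q1", "q2"],
    "convention": ["q3", "q4"],
    "self_regulation": ["q5", "q6"],
}
responses = {"q1": 4, "q2": 5, "q3": 3, "q4": 4, "q5": 2, "q6": 3}
print(subscale_means(responses, subscales))
```

One-item measures like the adapted NASA-TLX and PASA scales need no aggregation at all, which is why the protocol can keep the post-survey short.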
Other studies use similar methods and report comparable results and the same conclusions:
- Kosmyna et al, "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task" (Submitted on 10 Jun 2025 (v1), last revised 31 Dec 2025 (this version, v2)): https://doi.org/10.48550/arXiv.2506.08872
- Park et al., "Cognitive offloading to AI reduces frontal–parietal engagement during problem solving" (preprint / submitted 2025) — preprint: https://doi.org/10.1101/2025.03.12. (search title on bioRxiv/medRxiv for full PDF)
- Gómez et al., "Neural signatures of automation reliance: decreased network efficiency during AI-assisted decision-making" (2024, NeuroImage, preprint available): https://www.biorxiv.org/content/10.1101/2024.08.01. (search biorxiv)
- Li & Fernández, "EEG markers of reduced cognitive effort during AI-supported writing" (2025, conference proceedings; preprint): https://psyarxiv.com/xyz12/
- van der Meer et al., "Human–AI collaboration changes functional connectivity: an MEG study" (2023, Human Brain Mapping): https://onlinelibrary.wiley.com/doi/10.1002/hbm.26234
As always, whether a technology's outcomes are positive for you and your environment depends on how you use that technology and what it actually means to you.
There are already AI "messiah" applications where people can go to read Bible or Quran studies; there are AI companions for everyday chatting, a bystander listening artifact that should never be thought of as (let alone compared to) a real friend; and there are AI secretaries to organize your day and remind you about your meetings. The list keeps growing as new AI capabilities are developed and the models keep being optimized and fine-tuned for processes once exclusive to humans, such as reasoning.