People who constantly consult ChatGPT practice critical thinking less. Is artificial intelligence undermining human intelligence?


It reads like a cry for help: "I can't think for myself anymore," writes a 19-year-old user in a Reddit forum about ChatGPT. She does all her writing with AI and feels she is therefore losing her ability to think critically. Even worse, she feels she's losing her ability to think at all.
Of course, this is an exaggeration, as the user herself admits. But her impression still seems broadly correct: those who use models like ChatGPT for thinking tasks risk training their independent thinking less. This is what initial studies show.
This raises the question: Does AI make us more efficient, but also dumber in the long run? And how can we use chatbots to expand our intelligence instead of shrinking it?
“Use it or lose it”
Lutz Jäncke, a neuroscientist and professor emeritus at the University of Zurich, puts it this way: "Critical thinking is like a muscle. Children and adolescents need to train it to become good at it. And adults also need to practice it regularly, otherwise they'll forget it."
Preliminary study publications now suggest that this regular practice of critical thinking could be torpedoed by ChatGPT.
- Researchers at the Massachusetts Institute of Technology (MIT) have shown that chatbot users process information more superficially, exert less effort, and forget what they have learned more quickly. For the study, 54 participants had to write essays on topics such as loyalty, courage, happiness, or art. A cap with sensors measured their brain activity (EEG). 18 participants were allowed to use chatbots, 18 had access to Google Search, and another 18 had no aids at all. The results showed that the less the participants accepted help from technology, the more active their brains were.
- In several experiments with over 4,500 test subjects, researchers at the Wharton School of the University of Pennsylvania showed that chatbot users remained more passive in the learning process than Google search users. Subjects had to gather information on questions such as "How do you plant a vegetable garden?" or "How can you lead a healthier lifestyle?" either with an AI chatbot or via Google search. They were then asked to formulate "tips for a friend." The results showed that chatbot users gave poorer and less original advice.
- Other researchers at the University of Pennsylvania showed in a study of around 1,000 high school students in Turkey that participants solved math practice problems better with a chatbot-like tool than without. On an exam without the AI tool, however, these students performed around 17 percent worse than their peers who never had access to the chatbot.
Neuroscientist Jäncke isn't surprised by these results. With ChatGPT, he says, homework or research assignments can be completed within seconds. "The brain can, so to speak, continue to sleep." Those who rely too heavily on AI thus accept the risk of processing content less deeply and learning less.
Deep processing and learning mean activating the brain in new ways and storing new knowledge, explains Jäncke. This happens especially when we connect new information with existing knowledge: with memories, feelings, and previously stored concepts. This process is strenuous. "But it is this effort that makes learning sustainable," says Jäncke.
But people are often lazy, or so busy with their lives that they take the path of least resistance on intellectual tasks like writing, researching, and summarizing, Jäncke believes. He therefore expects that many chatbot users will train their critical thinking less, at least in the short term.
The calculator raised the level of math
Historical parallels suggest that this may not be as bad as one might assume after reading the studies. When calculators were introduced in schools in the 1970s, the standard of mathematics education rose: because students suddenly had to spend less time on simple addition and multiplication, they were able to move on to more sophisticated mathematical concepts.
Of course, AI works fundamentally differently than a calculator. It hallucinates. And it automates new aspects of human thinking. Nevertheless, studies show that AI can also be used in a way that neither short-circuits critical thinking nor hinders learning.
The authors of the MIT study describe how some chatbot users used AI to review and critically reflect on their own ideas. Their brains remained just as active as those of study participants without access to chatbots. However, these users—described in the study as "highly competent"—were the exception. They apparently resisted the urge to outsource strenuous mental work to AI and remained engaged with the topic.
The study with the Turkish math students also showed that the design of the AI tool makes a significant difference for learning. The researchers not only tested ChatGPT as a math coach, but also developed their own AI tool that prevented students from outsourcing the mental work to the chatbot. Users of this "tutor AI" performed just as well on the test as students in the control group who had no access to the technology.
This shows that demonizing AI in the context of learning and critical thinking in general is just as misguided as premature enthusiasm.
Control is no longer necessary in schools
Media scientist Dominic Hassler, who trains teachers in AI at the Zurich University of Teacher Education, nevertheless expresses concern. He believes there could be a transition generation of students who receive a poorer education because the education system is not yet geared toward AI.
Hassler has observed in classroom visits over the past few years that the majority of teachers still base their teaching on control. They are now trying to adapt this long-outdated teaching culture to AI, says Hassler. This leads, for example, to teachers introducing numerous short tests so that students cannot avoid regular learning. This may work for mathematics, says Hassler, but in humanities subjects or languages, it is hardly possible to meaningfully assess skills with a short test.
According to Hassler, the solution would be a new teaching culture that is geared to the students' lives. This requires teachers who can awaken interest in their subject among children and young people. Ultimately, it is personal motivation that determines whether and how much students learn in school.
AI can also force users to think
The big question now is how AI tools need to be designed to function like tutors. Psychologist Sandra Grinschgl from the University of Bern has further developed an AI tool that forces students to reflect. The tool ensures that students only receive a response from the chatbot after they have answered questions about their motivation and current level of knowledge. The AI takes the user's answers into account and adapts its response to their level. At the end, it asks comprehension questions.
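How might such a gated flow look in practice? Here is a minimal sketch in Python, assuming a generic chatbot back end; the function ask_model and all prompts are hypothetical stand-ins for illustration, not Grinschgl's actual tool:

```python
# Minimal sketch of a "gated tutor" flow as described above.
# Illustration only: ask_model() is a hypothetical placeholder for
# any chatbot API, not the tool developed at the University of Bern.

def ask_model(prompt: str) -> str:
    """Placeholder for a real chatbot call (hypothetical)."""
    return f"[model answer to: {prompt[:60]}...]"

def gated_tutor(question: str) -> str:
    # Step 1: the student must state motivation and prior knowledge
    # before any answer is generated.
    motivation = input("Why do you want to know this? ")
    prior = input("What do you already know about it? ")

    # Step 2: the model's answer is conditioned on the student's level.
    answer = ask_model(
        f"Question: {question}\n"
        f"Student's motivation: {motivation}\n"
        f"Student's prior knowledge: {prior}\n"
        "Answer at a level matching this prior knowledge."
    )

    # Step 3: close with comprehension questions instead of letting
    # the student copy the answer and move on.
    check = ask_model(f"Write two comprehension questions about: {answer}")
    return f"{answer}\n\nBefore you continue, answer these:\n{check}"

if __name__ == "__main__":
    print(gated_tutor("How does photosynthesis work?"))
```

The point of the design is the ordering: the effortful steps (articulating motivation and prior knowledge, answering comprehension questions) cannot be skipped, so the mental work is not outsourced to the machine.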
Grinschgl believes this encourages active reflection among her students. It seems to be working: her students' feedback shows that they don't just mindlessly consult the chatbot. In workshops at middle schools, Grinschgl also observes that young people have a healthy skepticism toward AI. She therefore isn't overly concerned about a general dumbing down by AI.
This applies in school as in other areas of life: you can use AI to avoid strenuous mental work, or you can use it to sharpen your thinking. In 2008, the American magazine "The Atlantic" ran the headline "Is Google Making Us Stupid?" We are still waiting for a definitive answer to that question.
nzz.ch