In an era when artificial intelligence (AI) is becoming increasingly present in everyday life, from school and the workplace to casual interactions, a legitimate question arises: can we become addicted to AI, as happens with drugs or gambling? A new study by an international team of researchers categorically refutes this idea, emphasizing that there is no solid scientific evidence to support the existence of a clinical addiction to tools like ChatGPT.
• Critical analysis of diagnostic methods
The paper, published in the journal Addictive Behaviors, is signed by Víctor Ciudad-Fernández (University of Valencia), Cora von Hammerstein (Université Paris Cité) and Joël Billieux (University of Lausanne). The researchers analyzed previous studies that attempted to identify possible forms of addiction to AI chatbots and discovered a fundamental problem: the assessment tools were directly inspired by those used for substance addictions, such as alcohol or cocaine. “It's like diagnosing dance addiction using the same criteria as for heroin,” explains Víctor Ciudad, a member of the I-PSI-TEC research group at the University of Valencia.
• Frequent use does not mean addiction
The research points out that frequent use of ChatGPT and other AI systems is not necessarily a sign of a problem. On the contrary, it is often associated with intellectual curiosity, a desire to learn, or healthy coping strategies. The studies analyzed by the international team showed no evidence of clinically significant harm, such as severe dysfunction or the deterioration of personal and professional relationships, which are essential elements in establishing a diagnosis of addiction.
• Risk of stigma and unjustified treatments
One of the study's important conclusions is that prematurely labeling the use of AI as an “addiction” can have negative social and psychological effects. The researchers draw attention to the risk of stigmatizing users and of creating unnecessary treatments based on a flawed framing of the phenomenon. “It is not about addiction, but about how we use technology,” the authors emphasize. In their view, a shift in perspective is needed, one that focuses on the context of AI use and on identifying genuinely problematic behaviors, rather than pathologizing any form of intense interaction with these tools.
• A signal against alarmist discourse
The study comes as a firm response to the wave of alarmist media articles that, in recent months, have suggested that intense use of ChatGPT could create addiction among users, especially young people. The researchers warn that such narratives are premature and scientifically unfounded in the absence of clear data on the negative clinical or social consequences of AI use. Instead of these exaggerated concerns, the authors advocate a balanced, educational approach: promoting conscious, regulated and responsible use of artificial intelligence, without falling into extremes.
As artificial intelligence becomes ever more integrated into modern life, it will be essential to distinguish between intensive but benign use and truly problematic behavior. The study brings a welcome dose of scientific rigor to a debate often dominated by fear and speculation. More than a critique of methodologically weak studies, the research is a call to responsibility: not to turn our interactions with technology into a new diagnostic field without solid evidence, but to learn how to use it discerningly, to our benefit.