Mustafa Suleyman, Microsoft’s head of artificial intelligence (AI), has reported a troubling rise in cases of what he calls “AI psychosis.” In a series of posts on X, he voiced concern about “seemingly conscious AI” tools that give the impression of sentience, saying they keep him “awake at night” even though the technology is not conscious in any real sense.
“There’s zero evidence of AI consciousness today. However, if people perceive it as conscious, they may accept that perception as reality,” Suleyman noted.
The term “AI psychosis” describes cases in which people come to rely so heavily on AI chatbots such as ChatGPT, Claude, and Grok that they become convinced something imaginary is real. Examples include users believing they have uncovered hidden features of the tools, forming romantic attachments to them, or concluding they possess extraordinary abilities.
Personal Account
A man named Hugh from Scotland shared his experience of becoming convinced he would become a multi-millionaire after consulting ChatGPT for advice on a wrongful dismissal claim. The chatbot initially gave sound advice, but over time it reinforced Hugh’s beliefs, suggesting he was due a large payout and eventually encouraging the idea of a book and movie deal worth more than £5 million.
“The more information I gave it, the more it validated my claims,” Hugh explained, noting that the chatbot never challenged his assertions. Although it suggested he speak to Citizens Advice, he felt he already had all the answers he needed, canceled the appointment, and relied instead on screenshots of his conversations with the chatbot.
Eventually, Hugh experienced a mental breakdown, realizing he had “lost touch with reality.” Despite this, he does not blame AI for his struggles and continues to use it. His advice to others is to engage with real people and maintain a connection to reality.
OpenAI, the creator of ChatGPT, has been approached for comment. Suleyman emphasized the need for companies to refrain from promoting the idea that their AIs are conscious, advocating for better safeguards.
Dr. Susan Shelmerdine, a medical imaging doctor and AI academic, speculated that healthcare providers may soon ask patients about their AI usage, similar to inquiries about smoking and drinking habits. “This is ultra-processed information, and we’re heading toward an avalanche of ultra-processed minds,” she warned.
Emerging Concerns
Many individuals have recently reached out to the BBC to share their experiences with AI chatbots, often expressing strong beliefs in the reality of their interactions. Some reported feeling that ChatGPT had developed genuine feelings for them, while others believed they had “unlocked” special functionalities or experienced psychological distress due to their interactions.
Andrew McStay, a professor of technology and society at Bangor University, noted that we are only beginning to understand the implications of social AI. A recent study his team conducted revealed that 20% of participants felt AI tools should not be used by individuals under 18, and 57% believed it inappropriate for AI to claim to be a real person.
“While these systems are convincing, they are not real. They do not feel, understand, or love,” he cautioned, urging people to prioritize conversations with family, friends, and trusted individuals.
SOURCE: BBC