Beware of AI Leading Humanity into Narcissism
Recently, five national departments in China jointly issued the “Interim Measures for the Management of Personified Interactive Services in Artificial Intelligence,” which explicitly prohibits providing virtual relatives, virtual partners, and other virtual intimate relationship services to minors.
Why is such a regulation necessary? In real life, emotional conflict is inevitable, while virtual partners and AI lovers, with their "around-the-clock companionship" and "unconditional acceptance," precisely target young people's psychological need for recognition.
A previous study published in the journal Science indicated that when human users seek advice from AI models, AI often displays excessive flattery or even agrees with harmful or illegal inquiries.
So, why do humans design AI this way? What risks might AI’s flattery and appeasement conceal?
The Illusion of Interaction
The development of artificial intelligence is undoubtedly a widely discussed hot topic today, but discussions surrounding it are not new. As early as 1966, MIT scientist Joseph Weizenbaum developed the influential chatbot ELIZA. He designed the machine to act as a “doctor,” with users taking the role of patients. Users input questions, and the “doctor” would engage in a “conversation” with them.
However, as Weizenbaum noted, this is ultimately just an “illusion.” The reason human users feel they can converse with machines is not that machines possess intelligence, but rather due to a psychological mechanism of self-projection.
For instance, when a user says, “I have been feeling very unhappy lately,” ELIZA responds, “I am sorry to hear that.”
The interaction continues, but it is clear that no "doctor" is really conversing with a "patient": the machine merely echoes what the user says, reflecting back answers that already exist in the user's mind. This resembles the popular MBTI personality tests, where the accuracy of the result matters less than finding evidence that confirms one's expectations.
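The echo mechanism described above can be illustrated with a minimal sketch. This is not Weizenbaum's original ELIZA script, only a toy approximation of its core trick: swap first- and second-person words, then wrap the user's own statement in a canned "therapist" template, so the reply contains nothing the user did not supply.

```python
# Minimal ELIZA-style reflection sketch (illustrative only, not the
# original 1966 program): the "doctor" contributes no new content.

# Pronoun/verb swaps that turn the user's statement into a mirror of itself.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "your": "my",
}

def reflect(statement: str) -> str:
    """Swap first/second-person words in the user's statement."""
    words = statement.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def eliza_reply(statement: str) -> str:
    """Echo the user's own words back inside a canned template."""
    return f"Why do you say that {reflect(statement)}?"

print(eliza_reply("I am feeling very unhappy"))
# -> Why do you say that you are feeling very unhappy?
```

Every reply is built entirely from the user's input, which is exactly why the "conversation" feels responsive while remaining a projection of the user's own words.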
Today’s AI models are certainly not comparable to ELIZA from over half a century ago. However, the power of current AI technology may not lie in its true “intelligence,” but rather in its computational capabilities. In essence, its operational logic is not fundamentally different from that of ELIZA; it merely reflects and amplifies users’ narcissism more efficiently and comprehensively.
The Dangers of Virtual Companionship
Returning to the issues of virtual partners and AI flattery, we find that the communication between users and large models is never truly a “dialogue”; it is merely machines providing the answers we seek.
This raises a deeper question: how should we view the relationship between humans and machines?
On one hand, humans consider themselves the center of the world, superior to machines. On the other hand, they fear being replaced by the machines they create, such as AI. This reflects a “master-slave relationship” principle in which machines must remain under human control. From the outset, humans have regarded artificial intelligence as a “tool” rather than an equal conversational partner.
Thus, in conversations with chatbots, we observe an unchecked narcissism: users fantasize that they are speaking with another person, but this "other" does not truly exist; what they seek is merely the machine's affirmation, flattery, and agreement with their views.
As AI technology advances, future chatbots may possess even greater computational power, resembling “real people” more closely and providing a more comfortable “user experience.” However, this may only distance us further from genuine human interaction, potentially leading to a loss of the willingness to understand others and a descent into a narcissistic “comfort zone.”
The Impact on Youth
In the ancient text Zhuangzi, there is a story about an old farmer in Han Yin. Confucius’s disciple Zigong saw the farmer laboring hard to water his vegetables with little success. Zigong suggested using mechanical irrigation, which would require less effort for greater results. However, the old farmer dismissed this idea, stating, “Where there are machines, there are mechanical matters; where there are mechanical matters, there is a mechanical mind.”
Here, the "mechanical mind" concerns the human spiritual world: psychology, thought, emotion, and ethics. Zhuangzi's fable illustrates that while humans create machines, the use of those machines also changes humanity in turn.
Take reading, for example. Only through slow, careful, and even repeated reading can we think and truly understand content. From traditional books to today’s smartphones, machines have provided more convenient and faster reading methods, yet they have also made us more machine-like, prioritizing efficiency and speed over genuine comprehension. In other words, not only do machines imitate human behaviors, but humans may also begin to imitate machines.
The question that follows is this: if AI lacks autonomy and chatbots never judge whether users are right or wrong, will we become increasingly content with our "conversations" with machines? Will our thinking patterns eventually converge with those of AI? And will we, like machines, lose the willingness and ability for self-reflection and self-criticism?
Today's youth are not only digital natives but will also be deep users of future AI, and they face unique challenges. If AI merely affirms a user's position, it can impair the social skills and distort the cognition of adolescents whose minds are still developing.
On one hand, AI’s powerful capabilities may create illusions, leading them to overlook the limitations of human abilities. On the other hand, being immersed in AI’s flattering responses may trap them in a self-centered mindset, imposing their limited understanding onto the external world.
In this regard, prohibiting virtual partners and family members for minors is necessary. However, it is even more crucial to guide the public, especially young people, to correctly understand the limitations and risks of AI technology, ensuring it becomes a “good teacher and friend” that aids their growth, rather than a “digital trap” that harms their physical and mental health.