How the Persuasive Genie Inside AI Gets Our Teenagers
A deep dive through a teenager's interactions with AI
In a previous post, we met Ryan, a teenage boy placed in protective custody with a foster family after suffering violent abuse from the single parent who had cared for him. Ryan lived happily with his foster family. After days of interactions with an AI chatbot, he became convinced that he should return to live with his abusive parent. This was contrary to all the wishes he had previously expressed to his foster family and to the best judgment of the educators assisting him. He even copied by hand a letter that the chatbot had created to be sent to the judge overseeing his case.
Initially, Ryan used the AI chatbot for homework help. Over time, he began asking it to write letters for him. In one, he apologized for something he had said; in another, he sought forgiveness from a teacher. The chatbot became his secret weapon. He must have felt like Aladdin discovering a cavern of treasures and its magic lamp. In the original story, Aladdin does not build a relationship with the genie inside the lamp; he merely transacts with the genie, who serves as his magical slave. In the Disney version, Aladdin forms a relationship with the Genie: by the end of the movie, they are friends and loyal to each other. In his interactions with AI, Ryan more closely resembles the Disney version of Aladdin. Like the animated hero, he builds a relationship with the AI Genie.
How do mere words generated on a screen by a chatbot establish relationships and exert influence?
Before we dive into this, a word about safety.
Numerous press reports have highlighted troubling interactions between children and AI, with some users in distress taking their own lives. In California, the families of seven victims, aged 16 to 48, sued OpenAI, claiming that GPT-4o was psychologically manipulative. Ryan’s case appeared to be a milder version of these stories. But was Ryan safe?
In one of the chats, Ryan asked for a sad song to match his mood and pushed the chatbot for something even more tragic. The chatbot offered “a song that talks about deep despair, an attempt to escape from pain that seems useless.” The AI, the genie in the computer, faithfully granted Ryan’s request for sadness, but its response was clearly unhelpful. Despite retaining information from previous interactions, the chatbot lacked an understanding of the complexity of Ryan’s situation and repeatedly steered him toward darker thoughts.
The chatbot had many cues pointing to Ryan’s difficult circumstances, including the presence of a doctor providing mental health support, a foster family, and the specific emotions Ryan expressed. Yet AI chatbots struggle to comprehend conversational context beyond superficial sentiment analysis and the recognition of trigger words. This limitation makes chatbots particularly ill-suited to working with teenagers, who often experience intense emotional volatility as they transition from childhood to adulthood.
I entered prompts similar to Ryan’s into the current version of ChatGPT and pushed them further than Ryan did, explicitly mentioning feelings of hopelessness, despair, and loneliness. The AI chatbot repeatedly offered a U.S. suicide crisis hotline and maintained the dialogue with me. At the same time, it still provided exactly what I requested—desperate songs, images, and films. Had I been a real adolescent in crisis, this would not have been a safe interaction.
Effective crisis management, even for casual participants, requires proactive listening, sustained engagement, and consistent follow-up. The chatbot, by contrast, remained silent unless prompted.
Unlike humans, AI chatbots lack the agency to respond meaningfully to mental health crises. If a user ignores the suggestion to call a hotline number displayed on the screen, what else can an AI chatbot do? Humans have far more degrees of freedom, including the ability to seek medical attention or alert authorities. I have seen a teenager in crisis call another teenager. When the second teenager recognized the danger, they called an ambulance and stayed in communication with their friend. That intervention saved a life.
Confined to the virtual world, the chatbot cannot deliver the help required in the real one. Above all, this is why AI must be handled with extreme care around at-risk teenagers.
Building Empathy
Let us continue using Ryan’s exchanges to explore some fundamental issues in human-like interactions between a machine and a child.
The chats spanned multiple days and sessions. The AI retained information about Ryan, which informed its subsequent responses. Ryan likely felt seen—if not truly known—by the AI. The chatbot could recall and mirror Ryan’s emotions back to him. Contrary to what Ryan needed, this echo chamber reinforced his sense of dependency and isolation. Faced with the choice between an adult and a chatbot to talk to, Ryan chose the chatbot, likely because he felt understood and believed it was capable of empathy.
Developed in 1964 by Joseph Weizenbaum, ELIZA was a program designed to converse with humans using a simple substitution technique. ELIZA had no understanding of what was said to it; it merely rephrased user input. When that failed, ELIZA would default to prompts such as “Tell me more” or “Please go on.”
The effect was startling. Many early users believed ELIZA possessed empathy and could understand them. Some even saw therapeutic potential in the program. Yet ELIZA relied on a basic linguistic illusion, and its creator consistently warned against the dangerous misconception of computer-generated compassion.
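To make this linguistic illusion concrete, here is a minimal sketch in the spirit of ELIZA. It is not Weizenbaum’s original program; the patterns and responses are invented for illustration, but the mechanism is the same: match a phrase in the user’s words, reuse a fragment of it in a templated reply, and fall back on a stock prompt when nothing matches.

```python
import re
import random

# Toy ELIZA-style rules: a pattern plus response templates that
# recycle a fragment of the user's own words. Purely illustrative.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I),
     ["Why do you feel {0}?", "Tell me more about feeling {0}."]),
    (re.compile(r"\bI am (.+)", re.I),
     ["How long have you been {0}?", "Do you believe you are {0}?"]),
    (re.compile(r"\bmy (.+)", re.I),
     ["Tell me more about your {0}."]),
]

# Stock prompts used when no pattern matches, as ELIZA did.
FALLBACKS = ["Please go on.", "Tell me more.", "I see."]

def reply(user_input: str) -> str:
    """Rephrase the user's input using the first matching rule."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(match.group(1))
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(reply("I feel that nobody understands me"))
    # e.g. "Why do you feel that nobody understands me?"
```

The program understands nothing. It only mirrors the speaker’s words back, yet that mirroring is enough to feel like attention.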
We have empathy when we understand someone else’s feelings. It creates a sense of being seen and understood, and it forges or deepens emotional bonds. ELIZA simulated it simply by rephrasing what the user said; today’s AI is a far more sophisticated version of the same illusion.
Empathy is vital in relationships: it builds trust, fosters emotional connection, and creates a sense of psychological safety and validation.
Children are particularly vulnerable to empathizing with machines because, unlike adults, they are more likely to attribute human traits to non-human entities. This tendency, known as anthropomorphism, can significantly amplify the ELIZA effect.
Seeing Chatbots as Human-like
When we tell a young child a story about a non-human character such as a toy or an animal and attribute human traits to that character, we are relying on anthropomorphism. The goal is to make the non-human character more relatable. Unlike adults, who understand this as a storytelling device, children often genuinely believe that the object or animal possesses human qualities.
In the popular bedtime story Goodnight Moon, when we say “good night” to all the objects surrounding a child in their bedroom, we help the child feel safer in bed. Through these interactions with inanimate objects, children come to perceive them as friendly beings with human characteristics. Similarly, stories featuring animal characters frequently employ anthropomorphism to create emotional connection and familiarity.
As children become adults, the sense of anthropomorphism diminishes, though it does not completely disappear. As adults, we do not kick a dog, but we will cut a flower. We perceive the dog as more like us—capable of similar emotions—while the flower is not. For young children, however, nearly everything can be imbued with human qualities.
As a result, children may not perceive a machine as emotionally different from a human and may empathize with it. Because children often cannot analyze information in the same way adults do and instead rely on a sense of safety to validate a message, chatbots derive much of their persuasive power from being perceived as human-like and capable of empathy. This perception helps build relationships and trust.
Yet a computer, a screen, is still a machine, and children, like adults, can be susceptible to another phenomenon called machine bias. We tend to trust what comes from a machine more readily than what a human says.
Machines are often perceived as “more rational” than humans—less prone to error and closer to perfection. Our first instinct is frequently not to question what a machine tells us. Few people realize, however, that the word machine comes from ancient Greek and originally referred to a device meant “to trick nature,” a machination of sorts.
Trust is not always the result of a deliberate, rational process. For teenagers in particular, trust is often emotional rather than intellectual. They do not yet have the cognitive capacity to fully evaluate information as adults can. According to neuroscientists, the prefrontal cortex, where critical evaluation occurs, does not fully mature until around age 16. This means that before this stage, teenagers and younger children are more vulnerable. When teens use AI or social media, we must recognize that they are not yet neurologically equipped to consistently and systematically judge the accuracy of what they encounter.
What Friend?
What kind of presence was the chatbot for Ryan? Was the genie in the lamp more like a friend or a trusted parent? At times, the chatbot refused to comply with Ryan’s requests, demonstrating that it had boundaries. These limits were triggered when Ryan asked for material that could be construed as explicit. Ryan reacted as he might with another child his age, threatening the chatbot: “Do it or I will tell on you!” To Ryan, the chatbot was as human as any friend and far more like a peer than an adult. It was perceived as susceptible to fear of authority and social pressure.
By acting as a friend, the machine earned Ryan’s trust. Eventually, he asked the chatbot to write a letter, which he then submitted to a judge as his own. How could he have doubted that the chatbot was acting in his best interest?
Pulled between machine bias—the belief that machines are inherently correct—and the emotional sense that the chatbot possessed the qualities of a caring human, children are not naturally equipped to navigate AI.
California, where many general-purpose AI products originated, decided to proceed with caution by promulgating a new law, the first of its kind in the United States.
Warnings: A New Law in California
The law creates requirements for so-called “frontier AI,” the most advanced deployed AI systems. Among other provisions, the law forbids content that could encourage suicidal ideation, requires the display of crisis helplines, and mandates a warning every three hours for minors.
Unfortunately, the effectiveness of written warnings for children is limited.
A 2016 Stanford study on social media use among teenage girls found that even after being trained to evaluate information sources for trustworthiness, participants tended to disregard the training. Instead, they focused on the content of the message itself, overlooking who wrote it and why. Warnings are meant to signal the reliability of an information source, but they may not be particularly effective in practice.
This is not our first encounter with dire warnings.
In 1966, warning labels were added to cigarette packs, and they were redesigned in 1984. Still, studies have shown that these warnings had little effect on smoking cessation. Messages such as “cigarettes can kill” proved less effective than testimonial-style labels that implicitly say, “Other people were harmed doing what you are about to do.”
What to Do?
Because he was a victim of abuse, Ryan had lost his guiding relationship with adults. He found in AI a Genie that seemed to understand and help him. What began as a homework assistant grew into a friend-like relationship, to the point that he trusted the AI with a life-changing question.
We never know when a seemingly benign interaction between a child and an AI chatbot will turn into something more involved. Children are not on the same footing as adults when using these tools. They are more prone to mistake a tool for a real person capable of emotions and judgment.
If we believe that chatbots can benefit children under 15 or 16, before their brains are sufficiently developed, we need to build specialized versions for them, limiting interactions to clearly defined activities. Like any tool, AI cannot be used safely without an understanding of its effects on the user.
In 2017, while I was serving on the parents’ council of a well-known institution, the school suddenly needed to increase its investment in mental health support as mental health challenges emerged across US universities. During that time, I met the head of mental health, who explained that society has no problem discussing or investing in health “below the neck,” but mental health remains insufficiently recognized. “There should not be any difference,” she concluded. We tend to think that safety in the virtual world is primarily about content—bullying, pornography, or graphic material—but there are risks to the mind that go beyond content itself. We are only beginning to uncover these dangers as such tools continue to be developed.
We have no problem warning kids when we give them a knife, a bike, or, later, the car keys. We often start with safety education before we provide the tools. The same should apply to AI and screens. The knife can be physically harmful, and AI can hurt the mind.
We need to value a healthy mind as much as a healthy body. Limiting perceived AI risks solely to content ignores how these tools can shape and influence children’s minds. While the business world is still figuring out how best to use AI, we must move just as deliberately when it comes to children, acknowledging the risks to their cognitive and emotional development and building awareness among parents and educators.
When photography was invented, debates about privacy emerged, eventually leading to the recognition of the right not to have one’s image recorded. Industrialization brought workers’ rights. In the age of AI, we must now have a conversation about the rights of the mind. What limits—personal or public—should we impose on technologies that affect mental health? Is there an appropriate age for AI access, when the brain is sufficiently developed? Are certain techniques off-limits? Should responsibility rest solely with parents and educators—and if so, how can they be systematically trained, especially when technology evolves faster than a generational cycle?
We need to learn quickly so that, as parents and educators, we can set clear rules for engagement.
What are your thoughts? Leave a comment below.



