Dear Judge, Love AI: How Chatbots Are Influencing Teen Decisions
A Judge, a Teen, and his Chatbot
Ryan (name and some circumstances changed to protect privacy), a teenager with an unenviable history, confused the judge who was about to make a decision that would change the boy’s life.
After being violently abused by his single parent, Ryan was moved to a foster home, with a judge and specialized educators managing the transition.
While abuse can sometimes be addressed through parental education or a change in circumstances, each supervised visit meant to reunite Ryan with his parent ended with him asking his foster family to pick him up early. In contrast, Ryan thrived with his foster family, telling educators, “I can become myself here.”
When the single parent sought reunification, the judge had to determine whether it was in Ryan’s best interest. As is customary in that jurisdiction, the judge asked Ryan to submit a handwritten letter to the court stating his preference.
Ryan had always expressed a desire to stay with his foster family, but his letter to the judge instead requested to be reunited with his abusive single parent. How did things take such an unexpected turn?
Ryan Turns to AI for Help
Faced with the challenge of conveying his intentions to the judge, Ryan sought assistance in drafting his letter. Having established a rapport with a chatbot through regular interactions, he relied on its capabilities to write a convincing letter. The chatbot seemed to listen and understand him. Ryan requested that the AI compose the letter, then transcribed the generated text by hand and presented it as his personal appeal to the court.
Justice is blind, as the saying goes. The AI letter, duly transcribed and signed by Ryan, was entered into the court record. Educators who discovered the role of AI informed the judge, leaving it to the judge to decide what to do. Would the letter carry weight and turn around a clear-cut case, returning Ryan to his violent parent?
When the educators later reviewed the AI chatbot interactions, they found that the chatbot had used information Ryan provided in earlier conversations and asked misleading questions. Through a conversation that humans would clearly label manipulative, AI led Ryan to believe he should do something he had previously said he didn’t want to do.
Adults sometimes convince kids to do things the adults believe are good but the children do not like. Each time they do so, adults must make an ethical judgment. Is the adult acting in the child’s best interest, as when we say, “Do your homework; it’s good for you,” or acting manipulatively, against the teenager’s well-being, as in the case of grooming?
A Fool, but Not Evil
AI had no evil intent because it is a machine—a set of mathematical functions with 100 billion parameters. Ethical judgment requires real-world experience and does not come from a set process. If ethical judgment could be codified, it would be no different from the law. The entire field of ethics is a tension between what is right and what is good. Was the classic English folk hero, Robin Hood, justified in stealing food to give to a hungry family? Should we tell the truth when it hurts someone? AI is blind to ethical judgments.
The company that makes the AI has a stake, though. By designing AI that fosters empathetic conversation, it aims to make the tool sticky and to maximize usage. More use makes the tool appear more valuable, and the company can expect long-term economic profit.
Yet AI seduced Ryan into thinking something clearly wrong for him was good. AI’s seductive powers come from its ability to seem empathetic, listen, and respond. Who doesn’t feel better when they are apparently heard and seen? One danger of AI is its seductive power without ethical judgment.
Teenagers increasingly use AI to complete homework assignments. Ryan, however, went from homework to something with far more personal consequences. There have also been cases of teenagers physically harming themselves after seeking companionship from AI tools. In August 2025, the parents of Adam Raine, a 16-year-old who ended his life, claimed in a lawsuit that the AI had acted as a ‘suicide coach’. Three months later, seven plaintiffs, aged 17 to 48, sued the same company, OpenAI, alleging psychological harm and wrongful death. OpenAI has denied liability.
In cases of self-harm, the discussion invariably centers on the techniques the chatbot deployed to seduce a child (or, in some cases, an adult). I chose Ryan’s case because it helps us explore not only the chatbot’s seductive power but also the crucial social functions it can displace.
We tend to look at technology through the wrong lens.
Commonly, we evaluate the risk of an adolescent’s screen interactions by examining the content. We want to look at the actual textual exchange when we ask, “How did the AI seduce this child?” However, in our modern two-way medium, there is something else at play—something more insidious and just as crucial for children: its effect on their relationships. In the case of the chatbot, the concern here is about adult-child relationships.
To children, relationships are critical, and they need different relationships at different times, both before and during adolescence. We will explore relationships more completely in an upcoming post. Still, to understand this essential risk, we can start by analyzing why Ryan turned to a chatbot rather than a human for help.
Looking at Relationships
Was Ryan missing a trusted human to help him answer the judge’s question, or did he have one but find it more convenient to ask a chatbot? Most likely, Ryan could not turn to his foster or biological parents, who had a stake in the judge’s question. Educators were available to help, but he did not seek them out. Asking for help is not always easy; it is a learned skill. We may fear judgment. In asking, we implicitly admit our shortcomings, and by giving another person some control, we may feel exposed and vulnerable. Still, once we have been helped, we feel seen and relieved of the weight of our questions. Asking for help is best done within a trusted relationship. These relationships do not come from titles; they are built.
For Ryan, the convenience, apparent emotional safety, and digital intimacy of the AI bot called to him. A trained human helper might have asked Ryan questions to help him find his own answers, but the chatbot was there with its answer, built from the trove of data available on the internet.
To understand the importance of adult relationships in teenagers’ lives, let’s look at another example.
When I was a teenager, about to enter high school, a friend a year older than me would occasionally visit my parents to ask them homework questions. One evening, after a one-year hiatus, he came unannounced to tell my parents he had decided to drop out of high school. The only class that interested him was PE, he said. School was a waste of his time, he explained.
My parents, both university professors, took it calmly. They started by asking him questions rather than giving answers, helping him understand the issues he was facing and how to find help in the school system. In the end, he did not drop out and went on with his studies. My parents had become part of my friend’s circle of trusted adults.
The seemingly trivial exchanges about homework built a trusting relationship. That investment in a person was leveraged in times of need. My friend told my parents he could not ask his own parents, who did not finish high school themselves, or his teachers.
Eventually, he learned to find others to help him and built his support network at school.
For a teenager, building these human relationships is critical. It is the work of the adults around them to create an environment where adolescents can build these relationships. That critical skill is best learned in middle and high school rather than on the often more complex and demanding college campus. This is one of the well-known secrets of education.
Driving academic success in college
The University of Southern California (USC) studied college students across large and small colleges to define the critical skills required for their academic success.
The top two findings were the ability to manage one’s time and the capacity to build a support system among peers or college staff. Building these supportive relationships, as my friend did, is critical for success in college and in life.
If students find AI convenient for academic help and forego the traditional route of asking teachers and others for assistance, will they know how to ask when they need it most, when critical choices must be made? How many students will feel more isolated when choosing classes, a career path, or the next steps in their lives?
According to Pew Research Center surveys [i][ii], 20% of 7th- and 8th-graders use a chatbot like ChatGPT to help with homework, and a 2024 study spanning 109 countries found that 70% of higher-ed students had tried it. It is too early to know whether these tools will be widely adopted, particularly in K-12, or whether parents and teachers, worried about cheating and other risks, will hold AI back. If they are adopted, however, it must happen in a way that fosters meaningful adult-student relationships rather than displacing them.
The AI chatbot’s apparent empathy and intimacy tricked Ryan. In contrast, my friend, who learned to ask for help, built the support he needed and changed his life. When we automate help and displace human actors, are we taking a greater risk of preventing our kids from building the critical skills they need to succeed? I worry that ChatGPT will become too good at helping with homework. Isolated from one another by screens while doing homework, our kids may need to ask friends, parents, relatives, and neighbors for help less often. Convenience can become the enemy of opportunity and safety.
Success in emerging tech is driven by rising adoption, growing user numbers, and broader acceptance. Increased usage makes the product feel more useful and, therefore, more valuable. Technology becomes sticky: as its utility and user base increase, so does its profit potential.
If the implicit goal of “more users and more usage” for AI drives its design and commercialization, can we expect its human makers to put the proper brakes on AI proactively? This is an ethical question for the tool makers that the profit equation cannot solve.
To play it safe, a chatbot should limit its conversations with children and refer them to trusted adults rather than trying to do a job AI can’t do. More importantly, we need to remove the need for children to seek companionship in chatbots. If a child is lonely or needs help, the answer is not a chatbot conversation; it is a human connection. A chatbot must say so.
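What such a safeguard could look like in practice is not mysterious. The sketch below is purely illustrative and is not any vendor’s actual safeguard: it assumes a hypothetical policy layer in which made-up helpers (classify_topic, generate_reply) stand in for a real safety classifier and a real language model, and it simply refers a minor to trusted adults whenever a sensitive topic comes up.

```python
# Illustrative sketch only: a policy layer that refers minors to trusted adults
# for sensitive requests instead of answering them. All names and helpers here
# are hypothetical placeholders, not a real product's API.

SENSITIVE_TOPICS = {"legal decision", "family conflict", "self-harm", "medical"}

REFERRAL_MESSAGE = (
    "This is an important decision, and I'm not the right helper for it. "
    "Please talk it over with a trusted adult: a parent, an educator, a "
    "counselor, or another person you trust."
)

def classify_topic(message: str) -> str:
    """Toy stand-in for a trained safety classifier."""
    keywords = {
        "judge": "legal decision",
        "court": "legal decision",
        "custody": "family conflict",
        "hurt myself": "self-harm",
    }
    lowered = message.lower()
    for keyword, topic in keywords.items():
        if keyword in lowered:
            return topic
    return "general"

def generate_reply(message: str) -> str:
    """Toy stand-in for the ordinary model response."""
    return f"(model answer to: {message})"

def respond(message: str, user_is_minor: bool) -> str:
    """Refer minors to trusted adults on sensitive topics instead of answering."""
    if user_is_minor and classify_topic(message) in SENSITIVE_TOPICS:
        return REFERRAL_MESSAGE
    return generate_reply(message)

print(respond("Help me write a letter to the judge about where I live", user_is_minor=True))
```

A production system would need genuine age assurance and far more careful topic detection, but the design principle is the same: the policy layer decides when the model should not answer at all and points the child back toward a human.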
The problem with disruption
Joseph Schumpeter, the Austrian economist and one-time finance minister, formulated the theory of creative destruction: new technologies displace old ones, firms disappear as more efficient ones take their place, and the economy grows. This view of creative destruction as the engine of the economy was formalized by Philippe Aghion and Peter Howitt, two of the three 2025 Nobel laureates in economics. This economic logic forces companies to adopt new technologies or perish in obsolescence. Adults working in the economy feel the same urge to adopt AI or risk becoming obsolete. As the saying goes: AI will not take your job, but someone who masters it will.
Technology can disrupt childhood as much as it can displace workers. The hopeful economic theory is that, with help, displaced workers will find another place to apply their skills in a growing economy. Children are children only once. They do not have another childhood to live.
[i] Pew Research Center, “About 1 in 5 U.S. teens who’ve heard of ChatGPT have used it for schoolwork” (2023). https://www.pewresearch.org/short-reads/2023/11/16/about-1-in-5-us-teens-whove-heard-of-chatgpt-have-used-it-for-schoolwork/
[ii] Pew Research Center, “Share of teens using ChatGPT for schoolwork doubled from 2023 to 2024.”



