Explaining the Limits of AI to a Teenager
How to start an experiential conversation about the limits of AI.
Teenagers often start using Generative AI without their parents knowing. Access to AI comes bundled with the internet, and the internet with homework. Without a proper introduction, teenagers may form the wrong impression of AI, and a wrong impression can lead to blind trust, children following the Pied Piper to the river.
We associate language with intelligence: we think our dog is intelligent because it seems to understand what we tell it better than the fish circling the tank. Parents who are discovering GenAI at the same time as their children might feel powerless. How can parents help teenagers direct their use of AI if they don’t understand it themselves? This is reminiscent of the early days of social media. Does Artificial Intelligence have some intelligence? When is it useful?
One approach is to help teenagers understand that AI chatbots make guesses, just as they do when they don’t know. No real intelligence is involved. Demystifying AI can help parents feel more confident in their roles, too.
AI is a collection of unrelated technologies grouped under a single umbrella term. In this post, I am going to focus on Generative AI (GenAI), as these tools are what teenagers most commonly use. I am talking about tools like ChatGPT that generate text, as well as those that create images or sounds from a prompt.
The broader AI field spans image recognition, used to detect tumors on X-rays and to recognize faces, as well as robotics and rule-based expert systems, to name a few.
Generative AI is called “generative” because it can only output what was fed into it in the form of large amounts of data. It is not called “creative” AI: it can hallucinate, but it cannot truly create something new. It hallucinates when it manufactures nonexistent facts. Creativity would be the production of something coherent and new from proven facts. In other words, thinking outside the box, or outside the screen in AI’s case.
Yet, we are supposed to believe that AI is magic.
Tech futurists have touted AI as capable of solving climate change, shrinking government, powering a two-day workweek, making us live longer, and, at the same time, overtaking humanity in a doomsday scenario. AI is marketed to us as magic. Most of these claims do not hold up to scrutiny, but this is what teenagers hear, and magic is exciting if you can master it, disempowering if you are its victim.
Magic helps sell. It is used at a particular moment in technological adoption when technology has yet to materialize into beneficial, but necessarily limited, tools. Buyers and investors act on promises of future performance. When the limits of the promised tools become apparent, the magic fades. We are more suspicious today of the internet than we were in the 90s, when it was the magical technology of the day. Then it promised to bring democracy and freedom of expression to the world, to revolutionize education, to help alleviate poverty, to make government more efficient, to enhance our understanding of one another, and to render the use of paper unnecessary. We consume more paper today than in the ‘90s, and many of the other promises did not materialize, even as the internet provided many benefits.
The sense of AI magic is triggered by amazement and wonder. This is what happens the first time we converse with a machine in natural language. Teenagers assume, without questioning, that if the computer can answer a two-line prompt with a four-page essay, there must be some intelligence there.
At the same time, Merriam-Webster made “slop” its word of the year. Journalists and others who make a living from reading (and writing) complain that the internet is inundated with poor-quality AI-generated text, AI slop. Technologists and futurists answer that this is just a matter of time: intelligence will be in the next version.
Yet mathematicians know that there are classes of problems that computers, as they exist today, cannot solve; these require human ingenuity and creativity. Cognitive scientists, who study how humans think, tell us that our thinking relies, in part, on phenomena that cannot be measured, such as emotions, and on the constant updating of predictions. This limits computers and AI, which can only manipulate what is measurable and cannot continually update themselves with a wide range of real-world input.
This is, in a nutshell, the case for the difference between AI, which is “dead intelligence”, cast in silicon, and the “live intelligence” of biological creatures.
These high school or college-level arguments can be complicated for the average middle schooler who might have a chance encounter with AI. How do we help them understand that AI can be a great tool when used properly, even though it is not “intelligent” like humans? The goal is to help them orient their usage, if they must use it.
It is time to create a more accessible experience.
An experiment
Many cell phones have a feature that predicts our typing. Type “I” and the phone offers to continue the sentence with “I am.” “Am” is a more likely continuation after “I” than, let’s say, “paint,” because we write “I am” more often than “I paint.”
The smart keyboard guesses the next word or character by accessing a dictionary and statistical data on the likelihood that a given word follows another in English.
We clearly see that the phone could be wrong and that there is no intelligence there, other than in the brains of the people who wrote the software.
Teenagers who play with the smart keyboard, exploring how it is built and experiencing its suggestions, quickly understand that it has no intelligence of its own. They know the keyboard is not always faithful to their intent and does not display intelligence, even if it can produce fragments of text.
The smart keyboard works one word at a time, with a dictionary for spelling. To predict what word might come next, it has ingested statistics that indicate the likelihood of a particular two-word combination in English. For instance, “I” is more likely to be followed by “am” than by “paint,” and is never followed by “does,” since that would not be grammatically correct. The cell phone does not know any grammar rules; it simply does not find the combination “I” followed by “does” in its list of possible two-word combinations. No creativity or understanding of the language there, just statistics.
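For teenagers (or parents) who like to look under the hood, here is a minimal sketch of that guessing game in Python. The tiny “corpus” is made up for illustration; a real keyboard draws on statistics from millions of sentences, but the principle is the same: count which word follows which, then suggest the most frequent follower.

```python
from collections import Counter, defaultdict

# A made-up "corpus": in a real keyboard, these statistics come from
# millions of sentences, not four.
corpus = "i am happy . i am tired . i paint on weekends . i am late"
words = corpus.split()

# Count how often each word follows each other word (two-word combinations).
following = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    following[current_word][next_word] += 1

def guess_next(word):
    """Suggest the most frequent follower of `word`: no grammar, just counts."""
    candidates = following.get(word)
    if not candidates:
        return None  # never seen this word, so the keyboard has nothing to offer
    return candidates.most_common(1)[0][0]

print(guess_next("i"))     # -> 'am'  (seen 3 times, versus 'paint' once)
print(guess_next("does"))  # -> None  ('does' never appears in this tiny corpus)
```

Nothing in this little table knows what “am” means; it has simply seen “I am” more often than “I paint.”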
ChatGPT, at its core, is the same type of technology. Still, instead of looking at just the previous word or two, it weighs context with over a hundred billion parameters (175 billion in GPT-3). Rather than a dictionary, it relies on text gathered from across the internet, about 1 petabyte, or roughly 22 times the Library of Congress, for GPT-4. Humans intelligently wrote all of that. This makes the guessing better than the smart keyboard’s, and it can produce whole sentences and paragraphs rather than fragments. Still, like the keyboard, it guesses. It uses the prompt we enter to start its guessing game, just as the first word we type starts the keyboard’s.
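To show how the same guessing, repeated word after word, can turn a prompt into whole sentences, here is a second sketch. The table of likelihoods below is invented for illustration; a chatbot does not store such a table but computes the odds on the fly with its billions of parameters, and it works on word fragments called tokens rather than whole words. The loop, however, is the essence of the guessing game: look at what has been written so far, guess the next piece, append it, and guess again.

```python
import random

# Invented likelihoods of "what word comes next", for illustration only.
next_word_odds = {
    "the": {"dog": 0.5, "homework": 0.5},
    "dog": {"ate": 0.7, "slept": 0.3},
    "ate": {"the": 0.6, "quietly": 0.4},
    "homework": {"is": 1.0},
    "is": {"done": 0.5, "late": 0.5},
}

def generate(prompt_word, max_words=6):
    """Start from the prompt and keep appending the guessed next word."""
    sentence = [prompt_word]
    while len(sentence) < max_words:
        odds = next_word_odds.get(sentence[-1])
        if not odds:
            break  # nothing plausible follows this word, so stop guessing
        words, weights = zip(*odds.items())
        sentence.append(random.choices(words, weights=weights)[0])
    return " ".join(sentence)

print(generate("the"))  # e.g. "the dog ate the homework is": fluent-looking, but no thought behind it
```

Run it a few times and it produces different, fluent-sounding fragments; none of them come from understanding what a dog or homework is.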
How good is ChatGPT at guessing, and does it matter? Tests have estimated its accuracy at about 90%, though this depends heavily on the tool, the topic, and the language used. It can be less than that.
For comparison, if our water supply were safe to drink 99% of the time, it would be toxic about 3½ days every year. At 90%, water would be harmful to drink for 36 days of the year, more than a month. Not knowing which days, how confident would we be in drinking it? We would find it unacceptable.
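For readers who want to check the arithmetic behind that comparison (the reliability figures are illustrative, not measurements):

```python
DAYS_PER_YEAR = 365

# Days per year the water would be unsafe at a given reliability level.
for reliability in (0.99, 0.90):
    unsafe_days = (1 - reliability) * DAYS_PER_YEAR
    print(f"safe {reliability:.0%} of the time -> unsafe about {unsafe_days:.1f} days a year")

# safe 99% of the time -> unsafe about 3.7 days a year
# safe 90% of the time -> unsafe about 36.5 days a year
```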
We cannot test the water by looking at it. In the case of AI, however, we can read its output critically, both in its content and its form.
What are we to make of AI successfully taking exams? Researchers[i] reported, for instance, that ChatGPT “passed the bar exam” and the US Medical Licensing Exam (USMLE). This sounds impressive to teenagers and students, who take exams and sometimes see them as an end in themselves. ChatGPT apparently did not do so well in the essay-writing section of the bar exam[ii]. To ChatGPT, every exam is open-book, and finding enough test answers does not make it an attorney or a medical doctor. These exams serve as gatekeepers in a learning path that also involves substantial practical training. When selecting a surgeon or an MD, we do not ask what grades they earned as a medical student.
AI is like a car. A car takes us further and faster than our legs can. It takes one activity and scales it beyond our body’s capabilities. Its application is focused on one aspect of transportation, even if the same car can take us to the nearby mall and across the country. Still, by taking away one activity – walking – we become more sedentary while being transported to many more places. To stay healthy in a car-centric world, we need to practice physical activity consciously. Gym memberships have risen since humans started driving.
We need to see AI in the same way. We are still discovering which activities it can do well and whether we need the equivalent of a trip to the gym for our minds.
Teenagers need to learn to be skeptical of AI if they want to use it intelligently. We help them through these conversations, and as parents and educators, we feel more accomplished. Getting away from magical thinking is the hallmark of education. The compelling question to ask ourselves, and to engage our teenagers with, is “how will AI change us?” Writing, cars, and the internet have all profoundly changed our human experience and who we are. Will AI do the same? What will be gained? What will be lost?
Next week, we will explore the true story of a teenager who lacked that understanding. We will discover consequences that go beyond turning in erroneous or inauthentic homework.
[i] OpenAI, “GPT-4 Technical Report,” https://arxiv.org/abs/2303.08774
[ii] “Did OpenAI’s GPT-4 really pass the bar exam?”, Fast Company


