For generations, parents have approved, and disapproved, of the people their children have chosen as friends and romantic partners. The next generation of parents will face a new challenge: there’s no guarantee that all their children’s friends, or future lovers, will be human.
Even if the next generation doesn't seek out artificial relationships, they will find themselves interacting with digital "humans" in many different settings, and they might find artificial "friends" more available, more agreeable, and less high maintenance than real ones. If we want children to expend the energy and effort required to develop social skills that foster healthy human relationships, parents will have to explicitly model and prioritize human interaction.
Writing about "What happens when AI chatbots replace real human connection," Stanford Accelerator for Learning executive director Isabelle Hau and Centre for Universal Education director Rebecca Winthrop argue that relational intelligence isn't peripheral but foundational, and that it should be just as central to education as academic content or technical skill.
People are already struggling to distinguish chatbots from humans, choosing AI for companionship and therapy, and even professing love for bots.
Hau and Winthrop say humans are "wired for connection" and, in the absence of sufficient human contact, many are now "bonding" with machines. "The irony is that we are not turning to machines because we've changed. We are turning to them because we are more ourselves than ever—hungry for connection, meaning, and care," they say.
“But machines cannot love us back. And they were never meant to raise our children, counsel our grief, or substitute for presence.”
We don't need to be technophobes to protect our kids, but we do need to recognize the threat AI poses to human flourishing (human relationships in particular) and take steps to counter it. It would be heartbreaking, for example, if a child thought a chatbot cared more about them than their parents did, or that a parent preferred a chatbot's company to theirs.
Companies are making artificial assistants and companions as "likeable" as they can, and they seem motivated to blur the line between human and machine as much as possible. It will be up to families, friends, and educators to teach children, and keep reminding them, that however much chatbots and the like act like humans, or come to resemble us, they don't really care about us. They don't care about anything.
If we want our children to retain the ability to discern what's living and sentient versus what's synthetic, what has feelings and a soul and what doesn't, we'll need to show them how. But we're already setting the stage for confusion, because we're already treating machines as humans.
We do so every time we say “please” or “thank you” to a machine, or use personal pronouns in place of “it.”
Consider a 2022 study in which researchers questioned 166 children aged 6 to 11 about their perceptions of AI assistants such as Alexa. In a report published last year, the researchers said most (70%) of the kids thought it wasn’t acceptable to be “rude” to their devices.
What is the appropriate etiquette for interacting with our digital tools? I don't say "thank you" to my couch every time I sit on it, or to my car every time I arrive at a destination; that doesn't mean I'm "rude" to them, and it doesn't spoil my human interactions.
One girl in the study called Alexa her "friend" and referred to it as "she." One boy said, "I have a Google and he's very special to me." The researchers say that while the children acknowledged the devices were not alive, they attributed several human-like characteristics to them and regarded them as having "mental, moral and social qualities such as intelligence and the ability to form friendships." Yes, children attribute human qualities to their toys and imaginary friends, and they grow out of it. But in the case of AI, adults do it too.
If children grow up viewing smart speakers as friends, how will they relate to more sophisticated AI assistants? Where will they turn when they're lonely? What social skills will they lose, or fail to learn, if they turn to AI for companionship? What relationships will they fail to form? And how will they navigate the complexity of being a spouse, a caregiver, or a close confidant?
Hau and Winthrop wonder what would happen if cities and communities were designed to prioritize “connection, compassion, and collective well-being.”
We can’t implement these priorities across society in one fell swoop, but we can prioritize them in our families and communities.
We can keep teaching our children that humans are precious and irreplaceable. We can learn to discern and distinguish our moral obligations to living people versus lifeless agents. I don’t want my children to treat a pet as they would a human, or a human as they would a pet. I want them to be able to play “animal, vegetable, mineral” or “human or machine” without giving up because it’s “just too hard to tell.”
Ultimately, no parent can control who, or what, their child befriends as they grow into adulthood. We can only hope that if they do end up in artificial relationships, they'll be quick to realize that sidelining relationships with real people means losing something essential to their flourishing.
Good might even come from missteps with AI agents. We might come to value the wonder, mystery, and delight of real-life, real-time, genuine relationships more than ever. Humans may disagree with us, perplex us, and challenge us in ways machines won't, but they can also love in ways machines cannot compute.
Emma Wilkins is a Tasmanian journalist whose freelance work has been published by news outlets, print magazines, and literary journals in Australia and beyond. She has a particular interest in relationships, literature, culture, ethics, and belief.