
At a gathering of scientists and storytellers, psychologist Alison Gopnik called for a shift from AI mythmaking to more rigorous questions about culture, creativity, and governance.

A Room Where Science Meets Story

On a recent evening in Los Angeles, scientists, filmmakers, and writers gathered poolside at the home of producer and writer Leigh Dana Jackson and novelist Sarah Shun-lien Bynum for a salon-style conversation about “Imagination Machines: AI and the Future of Human Storytelling,” led by UC-Berkeley psychology professor Alison Gopnik.

For decades, Gopnik has studied and shared how children learn about the world. Her widely viewed TED Talk “What Do Babies Think?” argues that young minds are not incomplete adult minds but remarkably powerful learning systems, and her Wall Street Journal essay further articulated “What Babies Can Teach AI.”

The event, supported by the John Templeton Foundation, was hosted by The Science & Entertainment Exchange of the National Academy of Sciences, which connects filmmakers, writers, and producers with leading scientists. And indeed, the invitees were well-versed in, and some responsible for, the AI science-fiction lore feeding popular imagination—from the rogue computer HAL 9000 in Stanley Kubrick’s 2001: A Space Odyssey to the sprawling mythology of Isaac Asimov’s Foundation universe, including the hit Apple TV+ adaptation.

It was clear that the dystopian, cautionary tales of AI, once confined to science fiction novels, have jumped to our TV and film screens, morphing into palpable, real-world fear—intensified by news that week that the U.S. military is using AI to help plan attacks and power autonomous weapons.

There is also anxiety about sentient AI reshaping society in unpredictable ways, enslaving humans, or outthinking, overpowering, and ultimately destroying humanity. Or, at least turning us into unhealthy, WALL-E-like hoverchair-lounging humans, or Idiocracy-adjacent morons with eroded agency and intelligence.

AI as a Cultural Technology

Gopnik’s talk explored one of the defining technological questions of our time: whether today’s AI systems should be understood as possessing an emerging, autonomous, artificial mind.

Gopnik contended that large language models may be better understood as a new kind of cultural technology, built from the accumulated knowledge of human society. Such systems, she suggested, could reshape how knowledge, creativity, and culture move through the world. But understanding them clearly, and being willing to fight for and implement guardrails, is the first step toward deciding what societies do next.

The Story We Keep Telling About AI

Gopnik began with a story that long predates computers. Across cultures, humans have told tales about artificial beings brought to life. One famous example is the legend of the golem, a creature fashioned from clay and animated through mystical means. In many versions of the story, the artificial being ultimately becomes uncontrollable.

“It’s a very ancient story reflecting a very ancient anxiety. And a lot of the stories that people tell about AI…have this kind of flavor. There’s going to be a super-intelligent agent. It might be good, but it’s probably going to be bad. It’ll be a smarter nerd than the smart nerds in Silicon Valley. And that means it will take over the world,” says Gopnik.

“…I think that’s not the correct story…not the story that actually captures what the current systems are doing.”

If we misunderstand what AI is, she argued, we risk preparing for the wrong problems.

From the Golem to Stone Soup

Instead of the golem story, Gopnik offered another folktale: Stone Soup. In the story, travelers claim they can make soup from stones alone. Villagers gradually add carrots, onions, and chicken to the pot, and the result is a rich soup—not because the stones themselves contain any magic, but because everyone contributed. Gopnik used the metaphor to describe how modern AI systems work.

“You really can’t make the soup just from the stones…but by combining all of the food that all the villagers have, you can end up with something that’s different and richer…than you could have had if you didn’t put those things together.”

Large language models appear powerful in part because they aggregate enormous quantities of human-generated material produced by millions of people. They also rely on human labor: people who label data, refine prompts, and provide feedback that helps train the systems. Seen this way, the models are not independent minds. They are tools built from the collective products of human culture.

AI in the History of Cultural Technologies

Gopnik contended that artificial intelligence belongs to a much longer lineage. Human societies have repeatedly invented technologies for transmitting knowledge—language, pictures, writing, printing, libraries, and digital search.

Printing, for example, spurred the Protestant Reformation and helped spread Enlightenment ideas that shaped the American and French revolutions. But it also enabled misinformation, scandal, and propaganda to travel widely. The early world of pamphlets was filled with rumor and “libelous pornography,” said Gopnik, including sensational stories about figures like Marie Antoinette.

Referencing Socrates, Gopnik said, “He thought books and writing were a terrible idea because people would read books and people would think that they were right just because they were written down. You couldn’t interrogate them. You couldn’t have a dialogue with [a book].”

New cultural technologies have always produced both intellectual breakthroughs and new forms of confusion. Artificial intelligence, in Gopnik’s account, may represent the latest step in that long evolution, a new way of accessing and recombining the accumulated knowledge of human culture.

Why Children Still Outperform Machines

Gopnik’s research on childhood learning offers a revealing contrast. Human intelligence involves two complementary abilities. One is imitation: learning from what others already know. The other is innovation: discovering something genuinely new about the world. Children excel at both.

“You put four-year-olds in a situation in which they have to think of something new, they’re incredibly good at doing it,” says Gopnik. “Grownups, not so good. Large models, terrible.”

Large language models, she says, primarily detect patterns in large datasets. They recombine existing information. When she was later asked what development would cause her to revise her view of current AI systems, Gopnik pointed to something young children do constantly.

“If you had a system that could go out and explore the actual external world autonomously itself and change what it thought based on its exploration…that would be more like the kind of autonomous intelligence that a two-year-old has.”

The difference is not simply computational power. Minds—human or animal—learn by exploring the world and updating beliefs through experience. Large language models instead recombine vast archives of human-generated material and detect statistical patterns.

The Real Work Ahead

Yet the argument did not land without resistance. In conversations after the talk, several attendees told me they worried that Gopnik might be underestimating the risks posed by rapidly advancing AI systems influencing life-and-death decisions.

When I asked Gopnik about those concerns, she emphasized that her argument was not meant to minimize the stakes. Cultural technologies, she said, have historically transformed societies in ways both creative and destabilizing. What concerns her, however, is the way popular narratives about AI can misdirect attention: the golem story “is actually a distraction from the real worries.”

Public conversations about artificial intelligence often swing between apocalyptic fear about runaway machine intelligence and utopian excitement about technological breakthroughs. In Gopnik’s framework of AI as a cultural technology, the challenge is in deciding how societies will shape the institutions, norms, and regulations that guide how those systems are used.

“If you want to open a hot-dog stand, you have to go through all sorts of regulations,” Gopnik said. “So, the boring but straightforward thing is that we need regulation—and regulation has to come through the legislative process.”

That work spans several fronts: ensuring that creators whose work trains AI systems share in the economic benefits; developing legal frameworks—similar to libel and liability—that clarify responsibility for misinformation; and addressing the concentration of power among the small number of companies that control advanced AI infrastructure.

Artificial intelligence may reshape how knowledge circulates across the world. But how that transformation unfolds will depend on the choices societies make about how the technology is governed and the values they choose to protect.

The danger, Gopnik suggests, is not the technology alone, but distraction from the deeply human work that must be done to safely integrate it into society.


Alene Dawson is a Southern California-based writer known for her intelligent and popular features, cover stories, interviews, and award-winning publications. She’s a regular contributor to the LA Times.