Transcripts of our episodes are made available as soon as possible. They are not fully edited for grammar or spelling.
Dr. Shannon Vallor is a professor at the University of Edinburgh, where she holds the Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute. Her research explores how new technologies, especially AI, robotics, and data science, reshape human character, habits, and practices. She also advises policymakers and industry on the ethical design and use of AI. Her latest book, The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking, argues that AI can be a tool to enhance our humanity, not replace it. Shannon joins the podcast to discuss artificial intelligence, both what it is and what it could be if we step back and rethink what technology is for.
Tom: Shannon, welcome to the podcast today.
Shannon: Thanks. Really happy to be here.
Tom: I want to start by asking you where you grew up and what were some of your favorite activities when you were a kid?
Shannon: I grew up in the San Francisco Bay Area, in the East Bay, in a kind of traditional suburb, not a particularly exciting place to grow up. But it was the late seventies and eighties, so it was the rise of personal computers, and it was a great time to be a girl nerd. I grew up loving movies, loving computers, loving games. I was obsessed with airplanes, because I’d never flown in one and they just seemed like magic to me when I was a little kid. So I was really interested in all things tech for most of my childhood. I was pretty much a classic girl nerd, obsessed with Star Wars and trying to figure out how to make a world in which things like lightsabers and robots were real.
Tom: Nice. Did you have any particularly inspiring teachers or mentors that you looked up to?
Shannon: You know, it took a while for me to really find teachers with whom I connected. It wasn’t until I went to college that I really found a true mentor.
The first person from whom I took a philosophy course was a great professor, but either I wasn’t ready for philosophy or it just wasn’t a good fit for me. I didn’t realize that philosophy was meant for me until I took another course a few years later from a philosopher named Bob Makus, who wasn’t a particularly prominent figure in the field. He was one of those instructors with a heavy teaching load at a Cal State University campus, but students absolutely adored him. The first time I took a philosophy course from him, within three weeks, I had changed my major.
Tom: Wow.
Shannon: He stayed my mentor for a very long time until he passed away shortly after I graduated from Boston College with my PhD, but yeah, it took me a while to really make that kind of connection.
Tom: Yeah. What did you learn during the first three weeks of that philosophy class that changed your life? Were there certain questions or subject matters that got into your mind and you couldn’t shake loose?
Shannon: Like a lot of young people, you grow up thinking about ethics as something that you just take for granted. Moral life is not something that seems to require a lot of reflection; it’s just whatever values you were raised with, whatever rules you absorbed from family, or church if you had it, or just the society around you. And from the very first night, he would ask questions about things that seem to be very common-sense moral conclusions that anyone should draw. And then he would ask you to defend that conclusion and say, Why should I believe that? Why should I believe that’s unjust? Or why should I believe that’s permissible? And he would not let you go until you could articulate, in some way, something beyond a raw emotional reaction like, oh, it’s just icky, or I don’t like it, or it just feels wrong. He would never accept that, right? So what is the basis of the feeling? Why should I trust your feeling? If I don’t share that feeling, why should I agree with your conclusion? Why should your judgments about this be allowed to shape how I live, for example? So, thinking about the translation from ethics into law and policy. I had just never thought like that in my life.
Tom: As you went through your major in college, were there particular ethical systems or moral philosophers that you really gravitated to?
Shannon: Well, that’s the funny thing. Ethics is what got me into philosophy, but I very quickly turned to philosophy of science.
Tom: Okay.
Shannon: Because again, underneath it all I was still just a big nerd, right? And still craving the answers to those questions about the nature of consciousness and reality, how knowledge is related to the structure of the physical world.
And those were the questions that drew me further into philosophy. I spent a lot of time studying philosophy of technology, specifically looking at how technologies contribute to the way that we experience the world and the way that we think.
So I was really interested in questions at the intersection of science and technology, and ethics was something that I still took great pleasure in thinking about, but I didn’t think it was going to be a central part of my research career.
Tom: Uh huh.
Shannon: My dissertation wasn’t about ethics. I really came back to ethics only later, when I was teaching the philosophy of science and teaching about the role of ethics and values in science and engineering. Around that time, social media technology and smartphones were taking off, so this was around 2005, 2006. When I started asking questions about the ethical implications of the new technologies that my students were using, they instantly grabbed onto that like drowning people grabbing for a life raft. They were already panicking about everything that was changing about the social organization of young people’s lives, about the moral implications of that, and needing to talk about it and needing to work through it.
And so I remember having a discussion with my students, and this is very naive sounding now, right? But you have to remember, in 2005 it was a very real question whether you could have a friendship online, whether that was possible, whether a friendship that existed wholly or even primarily in virtual space could be a real friendship.
Tom: Yeah.
Shannon: And students were beginning, for the first time, to have friends who existed only online, people who didn’t share their physical space. And I was trying to come up with resources to help them answer that question. So immediately I thought, oh, who has a theory of friendship that offers us something useful?
And of course, Aristotle had been one of the few philosophers to take friendship seriously and think about the varieties of friendship and the different ways that friends benefit one another. And so I began applying that to the questions that my students were grappling with, which for them were as much personal and psychological needs as intellectual needs, right?
They weren’t sitting there in the classroom asking these questions out of intellectual curiosity. They were having a kind of existential crisis of identity and social purpose. And so I felt the need to bring them some resources that could help them think these things through. And that became my first two articles about the ethics of technology.
One was called Social Networking Technology and the Virtues, and one was called Flourishing on Facebook. So absolutely, it was the teaching that drove my research in the direction that now consumes my life.
Tom: I’m gonna fast forward a little bit to the topic of artificial intelligence, and I want to focus specifically on some of the arguments you make in your new book, The AI Mirror. But I want to start with this: artificial intelligence is a term that’s in our lexicon; we use it all the time. But I wonder, in your estimation, in what sense is artificial intelligence intelligent?
Shannon: The kinds of systems that we have today aren’t intelligent in any of the ways that I think really matter for us, but it’s a very useful and attractive marketing term, and sometimes marketing terms stand in for the thing, right?
I think about the fact that, in America, we call a bandage that you put around your finger when it’s bleeding a Band-Aid, regardless of whether it’s the Band-Aid brand, right? The brand, the marketing, becomes the word we use for the thing. And I think AI has become something like a commercial stand-in for something that doesn’t warrant that identifier scientifically.
One of the really interesting things is that there’s the science of AI, which is the quest to create something artificial that actually realizes the capabilities of human intelligence. It’s what we aim at when people talk about AGI, or artificial general intelligence, right? Robust, natural intelligence. And that’s a very different scientific quest than what OpenAI calls their mission, which is to create machines that can outperform humans at nearly any economically valuable work.
Tom: Right.
Shannon: Nothing about that necessarily speaks to intelligence at all. That definition doesn’t describe the kind of cognition or thinking that needs to go into that performance. It’s a purely behaviorist description of the achievement of a certain economic measure.
Tom: Mm hmm.
Shannon: And it’s so far away from the scientific definition of artificial intelligence that originally drove the field.
That being said, AI as we see it today, the commercial reality of AI, is a new kind of thing in the world that will have incredibly disruptive and challenging and exciting and some beneficial impacts, even if AGI is never built.
Tom: Gotcha. It sounds like the term artificial intelligence does have a good use, which is perhaps what a subset of people are striving for, right? To develop software and computer chips that can think or perform these sorts of cognitive tasks. Whereas the way that we apply AI now is more of a marketing term. Is there a term, if you strip away the marketing, that would be more descriptive?
Shannon: Well, it’s funny, because for years IBM resisted the label of artificial intelligence; they were trying to get the term augmented intelligence to catch on.
Tom: So the idea being that you have something that isn’t intelligent on its own, but it can augment human intelligence.
Shannon: And it didn’t catch on, because artificial intelligence is what everything from science fiction authors to technology leaders have promised as the dream of computing for nearly a hundred years.
And so artificial intelligence was always going to be the term that caught on from a marketing standpoint, because it was going to be the thing that fulfilled the dream and the hopes of all these people who grew up wanting to meet someone like Data from Star Trek.
But I think, again, that’s been unfortunate because, of course, what we’re getting is nothing like that. What we’re getting is something that can very much mimic that kind of intelligence. And so we have reflections of intelligence, reflections that can in their own way be activated and automated to carry out a series of tasks. But again, you used the phrase cognitive task, and that’s ambiguous, because it depends on whether you emphasize the task part or the cognitive part.
So what we have today with something like ChatGPT is a tool that can perform a lot of cognitive tasks, if what you mean is achieve the same or comparable output as a human performing that cognitive task.
So if you just measure the task part, you might get something quite similar, right? But it doesn’t actually perform it with anything that resembles human cognition.
One of the really interesting things about large language models is that they’re very bad at defending even the answers that they get right. They’ll confabulate a justification, but it won’t have anything to do with what actually got the system to produce that answer. And it becomes very clear that these systems have no awareness, well, of anything at all, but they definitely don’t have any awareness or any kind of record of a thought process.
There are circumstances in which we don’t need the thought process; we just need the most likely answer. But there are many, many kinds of problems where the thought process matters, in some ways, as much or more than the answer you arrive at.
Because you need to be able to defend it, you need to be able to examine it critically, and you need to make it consistent with other things that you think are important, believe, or have reasoned through. And these systems can’t help us with that.
And if you start accepting instead machines that arrive at answers without knowing how to reach them correctly, then there’s a real danger of what I’ve often called a kind of moral deskilling, of individuals but also of our culture, because you lose the knowledge of how to think through complex, weighty problems. Instead you rely on the ability of a system to produce the most predictable answer to the question.
Tom: So the problem with AI today is that we’re trying to use it to automate the most important and the most complicated and the most difficult questions that we ask ourselves. Questions about who should live and who should die?
Shannon: Who should go to jail? Who should get health care? Who should I date? Who should I marry? What kind of career should I choose? What kind of policies should our government promote? Right?
Tom: And we’re doing that with systems that can’t actually hold on to the thread of reasoning that’s required to justify morally high-stakes or politically high-stakes decisions like these.
Shannon: There are certain kinds of questions that, if we lose the ability to answer them, we become far less capable beings and far less able to sustain ourselves and flourish as societies.
Tom: Yeah. We’ve talked quite a bit about the limitations of AI, how it fails to live up to some of the promises or dreams that have been laid out for the last hundred years or so. But I want to turn to AI as it could be, looking forward, and some of the goals we can really push towards.
So what do you think would be a good role for AI in a healthy, well-functioning society?
Shannon: That’s a great question. I’ll say that I think the answer to that splits into two directions. One is if we think about the prospect of actually building genuine artificial intelligence, machines that think with us, not just machines that reflect our thoughts and the patterns of our thoughts back to us. Then I think many of the false promises that you’re hearing sold today could come to pass, where we have machines as genuine partners in thought, at work, at home. They might be beings that we would have to acknowledge as having some moral status and social standing, perhaps even rights. But there’s room in the world for other sentient life, or there should be. By sentience, I mean the ability to consciously think and share a world with others. I think that’s a rather miraculous thing, and whether it arises through biology or mechanics, we ought to cherish it and protect it. And I don’t know if it’s possible to build sentient machines.
I genuinely don’t. I’m agnostic on that. I think it’s probably possible in principle, but it might be impossible in practice, in the same way that intergalactic travel could be possible in principle for humans but is completely out of reach, given the material constraints and other constraints that we have on the planet.
But if you’re talking about the kinds of AI systems that we have today, or that we could imagine being built in the near future, in terms of what a beneficial future with those tools would be, I think it would go back to that original hope IBM had: that we would use these machines to augment our own intelligence.
Because human thought is imperfect. We are biased. We are often overconfident and undercritical of our own assumptions and default values. And I think there are ways to build AI systems that, instead of thinking for us, prompt us to think better and more humanely, and with greater integrity than we naturally do.
The problem with that is if you have a human who’s augmented by an AI tool, you still have to pay that human a salary.
Tom: Yep.
Shannon: And you still have to give that person sick leave. And there are lots of economic incentives for companies to turn their backs on those commitments to workers. And so what you have is a rush to use these tools to automate human performance, even if the quality and reliability of performance goes slightly down as a result, because companies are being told it will be so much cheaper than hiring people. But that’s such a short-sighted strategy, because even if you’re just interested in productivity and outputs, if you augmented human intelligence so that every human on the planet could solve problems more wisely and more reliably than before, in every sphere of the economy, in every sphere of culture, we’d see a flowering of new kinds of creations, new kinds of ideas that were still our ideas, but ideas that had been enabled and boosted by AI technologies that had helped us become stronger.
Tom: Mm hmm.
Shannon: But that’s not the track that we’re on, and I think what we need to do is rethink what technology is for. Is technology something that justifies itself, or is it justified only insofar as it serves human needs and amplifies human and planetary flourishing? I believe that it’s the latter.
It’s always been the latter.
And frankly, I don’t think anyone truly believes that expenditure can be justified purely by saying, well, it will make shareholders a lot richer, but it’ll make the world worse.
Tom: Mm hmm.
Shannon: So my view is, if we can get back to a proper way of evaluating technologies and evaluating their worth, then we have a chance of using even the tools that we’re building today. For all their weaknesses and flaws and dangers, I think the tools we’re building today can still be incredibly useful if they were guided by those kinds of incentives, and measured and evaluated by those kinds of outcomes.
Tom: I’m going to wrap up with two questions, thinking about longer time horizons. Technologies, as they come online, can be associated with a certain generation. I’m thinking of the last generation, like the iPhone generation, where kids grow up with that just being the omnipresent reality.
Seeing everyone with their heads down, staring at their phones, is a visual that is part of our everyday life. I’m wondering, with the increased development and deployment of artificial intelligence, maybe 10 years from now, do you have any sort of iconic visual representation of what the next generation of human society looks like? How might we look if an alien were to come see us and observe what we’re doing and where our attention is?
Shannon: Well, that’s a great question, because I think it depends on which path we take. In the book, I really emphasize that technology does not follow a predetermined path; human choices and social power shape it. And so we have paths open to us. And the only people who are selling the lie that technology and AI are on a kind of preset path are people who want to make sure that you don’t try to grab the wheel and turn it in another direction than the one that benefits them.
There’s something we haven’t said, which is that AI is much more than things like large language models. AI involves all kinds of smaller and more sustainable applications of, for example, machine learning that are built on smaller, more targeted and relevant data sets, and that can really solve well-defined problems quite well for us.
So I don’t want to lump all of AI into the same bucket.
I’m largely talking about the really energy-intensive, data-hungry large foundation models being sold now. These can either go in an unsustainable direction, or we could use them to steer, as I said, into a future where our intelligence is actually augmented.
If an alien were to arrive in 10 years, I think what they would see would look very different depending on which path we choose. So, if we stay on the path we’re on, what I do fear, actually, and not to be too dystopian, but I do fear that you will see humans impoverished to a greater degree than they are today, largely cut out of the workforce and perhaps subsidized by some of the efficiency gains that AI has realized in certain industries, getting along, fed and watered like domestic animals, but not empowered and not self-governing. And that’s nothing like what the ambitions of modern liberal societies pointed us to, which was a world where humans are not only more intelligent, but freer, more self-determining, and more humane.
So I don’t think we’ll see that on the path that we’re on if we don’t change course.
If we do change course, I think what’s really interesting is you could imagine humans actually being freed to spend more time doing the kinds of work that are truly meaningful for us. Imagine the creative sector being revived rather than drained.
Imagine that the arts become, once again, a truly viable way for people to make a living. What would our cities look like if art were not increasingly accessible only to the privileged who can afford not to worry about the rent?
Tom: Mm hmm.
Shannon: Think about what our social life and our community spaces could look like if we could use AI to identify the ways that the built world currently isn’t serving human psychological or physical needs, and it could help us redesign our environments in ways that actually promote human flourishing.
Would you see children out playing in new kinds of spaces that AI might help us design? Could you imagine pointing AI towards solving problems with basic needs, like food and health, that we desperately need to make more accessible and affordable for everyone?
There’s tons of work we could do with AI to make those things a reality, or at least put them closer to reality.
Tom: Well, given all that we’ve discussed today, what gives you the most hope for the future?
Shannon: So there was a philosopher of technology from the 1980s and nineties, Albert Borgmann, who is often seen as very critical of modern technologies. But one of the things that he observed is what happens when you get a modern technology that replaces a meaningful process or a meaningful human experience and just gives you a commodity at the end. He used examples like a microwave, right? A microwave takes all the love and care out of cooking, and you just press a button. And so he talked about how, when a truly meaningful process like cooking, which is a very social and emotional and physical and kind of local experience, when that gets damaged by a technology, sometimes our reaction is to run back to what we’ve taken for granted and treasure it again.
And if you look at, for example, the cultural power of cooking in this decade and compare it to the cultural power and status that cooking had in the 1980s, when the microwave became the first widespread commercial cooking invention in decades, cooking is so much more culturally powerful now, has so much more status, and is celebrated again in ways that in the eighties we’d sort of forgotten.
So when the thing that we need most to flourish is truly threatened and endangered, we sometimes wake up.
Tom: Mm hmm.
Shannon: We sometimes remember why it matters. We do that sometimes with things like human rights. I hope we can do it with the planet. But I also hope that we can do it with ourselves, with our own thought processes, with our own intelligence, when our own intelligence and capacity for thinking for ourselves is threatened by the commercial exploitation of AI tools.
Our response could be to actually remember why humane intelligence and humane forms of life matter, and why building up those capabilities, the way that education systems used to be designed to do, is so important to reclaim and restore. So my optimism comes from the fact that, at the end of the day, humans aren’t all fools.
We can perceive what has lasting and durable and expansive value. We can see the difference between what impoverishes our lives and what enriches our lives. And so I’m hopeful that what we might see in the coming years is that the kind of predatory use of technology to drain our lives of value pushes us to a point where we fight back, and we reclaim the value of human experience, of human thought, of human sociality, of human judgment, and of human responsibility.
I see little flashes of that on the horizon. I hope to see more.