Susan Schneider is a philosophy professor at Florida Atlantic University and the founding director of the Center for the Future of AI, Mind, & Society. She has also worked at NASA as the chair of Astrobiology, Exploration, and Scientific Innovation. Susan’s book, Artificial You: AI and the Future of Your Mind, explores whether advanced AI systems could become conscious, and how we might design ways to test whether they truly are.
Transcripts of our episodes are made available as soon as possible. They are not fully edited for grammar or spelling.
Thomas Burnett: Susan, welcome to the show.
Susan Schneider: Hi, Tom.
Thomas Burnett: I want to start by asking you a bit about your origin story. How did you get on the path to becoming a philosopher?
Susan Schneider: Oh, boy. So I was actually an economics major at UC Berkeley doing a lot of mathematical modeling, and I stumbled into philosophy courses at Berkeley with Donald Davidson and Hubert Dreyfus, John Searle.
And at the time, philosophy was amazing at Berkeley, and I remember having just loads of fun with Bert Dreyfus in his Heidegger courses. Bernard Williams, a wonderful thinker. I was very honored to have been working with really top people just accidentally. I had no idea they were famous, which was maybe why I was able to not be nervous because I wasn’t in the field.
So it was a blast.
Thomas Burnett: When did artificial intelligence start to get on your radar as a place to put your expertise in philosophy of mind? It’s a technological innovation that, I feel, really caught many of us by surprise.
Susan Schneider: Well, I went to Rutgers for my PhD. I decided to work with Jerry Fodor because he was, at the time, the very top philosopher of mind in the country, and he was on an absolute campaign to kill what was then called connectionism, which is now called deep learning, and I disagreed with him.
So I said, “No, Jerry, I think this is actually a very interesting approach. I think the brain itself is a hybrid engine, and that we can learn a lot from both approaches, and that this approach will ultimately succeed.” And we looked at a variety of issues in AI, such as the capacity for science to ever develop general purpose reasoning, and that’s what I wrote my dissertation on, and then my first book was on a new philosophical direction for the language of thought.
Thomas Burnett: So we live in a computer age, and it is easy, and in fact, almost goes without thinking that we compare human minds to computers. But stepping back as philosophers do to analyze the way that we talk and think, how apt of a metaphor is it to compare a human mind to a computer?
Susan Schneider: I think it’s very useful within the vantage point of courses in cognitive science and information processing psychology, and cognitive neuroscience in particular.
I can’t imagine taking a course in any of these fields without talking about the mind as a computer. However, I think it’s a highly problematic view from a philosophical standpoint once you get to the nitty-gritty of the nature of mind. And I also think, this is my own view, that it is somewhat misleading scientifically in terms of understanding consciousness.
So first of all, what is the mind? I find that discussions within cognitive science and the different areas within it, and even philosophy, talk about the mind as a program. But that’s just uninformative, because a program is actually an abstract entity, like an equation, and minds aren’t really equations.
And of course, the next step is to say, well, maybe the mind is that which implements a program. But of course, that begs the question, what is the thing that is doing the implementation philosophically? So then it goes back to the traditional philosophical issues which have intrigued me for years. Kant, Heidegger, all these philosophers who have been grappling with these age-old questions.
And in my own work, I do the metaphysics. And in fact, I have a chapter on whether the mind is a program in my book, Artificial You. But I also do the very dense issues, and I actually think right now it’s exciting that there’s more discussion of panpsychism and idealism, these alternatives to traditional materialism, so that we can think openly beyond the more crude idea that the mind is a program.
Thomas Burnett: You mentioned consciousness. I want to ask you about consciousness and intelligence. I think in our popular discourse, we often use those two interchangeably. Sometimes we make them equivalent, sometimes we conflate them. How would you distinguish when you want to use consciousness and intelligence?
Susan Schneider: So I think intelligence is a very broad ranging term, and I’ve been at workshops that have been very informative on the range of intelligent entities just on Earth.
And I’ve heard great talks on slime mold intelligence even, how they navigate mazes. And this openness to intelligence is a good thing. We should embrace it. It’s all around, and I think there is true artificial intelligence now. So that’s one thing. But the other thing is the philosophy that’s richer and deeper, I think, here about mindedness and consciousness.
And we want to distinguish intelligence itself, which I think is very broad ranging, from these other things. And maybe they do turn out to be equivalent, but we don’t want to assume from the get-go that they are.
Thomas Burnett: We can gather from seeing that slime mold navigate the maze that it exhibits intelligent behavior, but I may not want to infer that therefore it’s a conscious entity.
Susan Schneider: That’s right. I think it helps from the get-go if everybody has a sense of what consciousness is as the felt quality of experience. So when you see the rich hues of a sunset or smell the aroma of an espresso shot, it feels like something from the inside to be a conscious being, and that’s that felt quality of experience.
Does a slime mold feel that? Well, if it does, it’s at a very low level, so the panpsychists have been arguing for that kind of position. But we have to remember that the class of conscious beings may not have the same members as the class of intelligent beings. I see intelligence, you know, as a very broad-ranging class, whereas we really need to look at the details philosophically of different positions, panpsychism, materialism, idealism, and really try to discover what it is that we’re talking about as being the nature of mind and consciousness before we decide what is in the class of conscious entities.
Or we need to run tests. I think both, actually.
Thomas Burnett: I know there’s divergent opinions, but do you think consciousness requires a certain substrate to operate?
Susan Schneider: It might. What we need to do is run lots of tests and theorize very carefully. So everybody’s been asking me about chatbots, for example, and they often take evidence for consciousness in chatbots based on just a kind of crude functionalist picture of the nature of mind, which is wedded to information processing psychology and the view that the mind is a program.
And they also look at behavioral evidence, so what the chatbot tells them. However, I believe at the end of the day, we don’t actually see evidence based on the utterances or text information we’re getting from these chatbots that they’re actually conscious, and that’s because I believe we have what I call an error theory, which I’m happy to tell you about.
And I also do think, on top of that, that our best evidence right now for consciousness is that biological systems are conscious. So we should be open. I have a very detailed theory, drawing from physics, that goes into the physical properties of consciousness. I think we need to be open to the possibility that we could build machine consciousness, especially because we’re tinkering with biological substrates to do it in some cases.
Thomas Burnett: You mentioned the movie Her, which came out in 2013, but I think it’s been haunting me ever since.
Susan Schneider: Me too.
Thomas Burnett: What difference does it make whether Samantha, the OS, is conscious or is a really, really good chatbot?
Susan Schneider: A lot. So with chatbots in particular, we have a very sophisticated intelligence. We can all agree it can outperform us in certain respects already.
I don’t know if it will ever be super intelligent or AGI, and that’s a different debate. But we can already see how culturally substantive this is, and we can also see the scientific possibilities there for theorizing. But sophisticated intelligence may or may not be conscious. So if we get it wrong, this is called over-attribution.
That means that we might think that something that’s highly intelligent and can in fact rival human intelligence in some respects already would constitute, say, a rights bearer that is at the same level as humans. That’s a very important possibility because we’re not talking about the intelligence of a shrimp or a crab, which that’s important too, especially the consciousness question, and animal liberation theorists have been working on that.
But we are talking about something that rivals and may even exceed human intelligence in a variety of respects already. So if we have trolley problems that involve ethical trade-offs, you know, a trolley problem case is one where a conductor has to select between two tracks: going one way might end up ending the life of, say, five humans, versus going another way, which could end the existence, I won’t say life, of, say, nine programs.
Making those decisions will happen in our future. It’s probably already happening. I mean, look at the arena of job impact, the labor market impact, and personhood debates already happening. So if we get the issue wrong about what is conscious, we could end up sacrificing human lives or impacting human flourishing in ways that we don’t want.
Conversely, a lot of people are deeply worried about the possibility of mind crimes.
Thomas Burnett: Tell me more.
Susan Schneider: Yeah. So watch some Black Mirror. I always tell my students about Black Mirror episodes in class. You can only imagine how terrible it would be if you were a conscious being embedded in a terrible simulation.
Oh, maybe we are, but anyway... that’s a different topic.
Thomas Burnett: It feels like it sometimes.
Susan Schneider: Yeah. Yeah. So we wanna make sure that we don’t create entities that ultimately suffer, and in the case in which we have chatbots that are highly intelligent, we have to really think hard, and we might decide to have certain precautions in place on the chance that they are conscious.
And I think some theorists have already been working on that over at Anthropic, for example, and I’ve attended a variety of important meetings on the subject, and there’s a lot of concern. Rightly because we don’t yet have a handle on human consciousness, let alone machine consciousness.
Thomas Burnett: Yeah. I just thought of a peculiar version of the trolley problem where you have the OS Samantha on one track.
She’s in a romantic relationship with 7,000 people that use her software. On the other side are a couple of maintenance track workers who are supposed to be there. If you divert the train to run over Samantha, you bring heartbreak to 7,000 people who are going to mourn, right? Including Samantha herself. Depending on what’s conscious and what’s not, you have these, yeah, incredible kinds of moral entanglements, versus nobody would worry about destroying an app in order to save human lives, right?
Susan Schneider: And ethicists will point out, even if something isn’t conscious, it can have great value to other conscious beings, obviously.
And I think there will be a very substantive issue about the ethical value of superintelligences. And of course, if we’re in a simulation, it could be that our cosmic host, if you will, is in fact a superintelligent AI that isn’t conscious. So I think there are all kinds of issues there to discuss, and ethicists have been looking at the value of state parks, the value of the universe itself, things that aren’t necessarily conscious but have immense value that you might say derivative, but may be intrinsic not due to consciousness, but for some other reason.
So I think the topic of value here is extremely important to consider.
Thomas Burnett: We’ve talked a bit about the OS Samantha, and she made a great impression on the lead character in the movie, and apparently thousands of other people. Let’s test Samantha. Let’s explore whether she, in fact, might be a conscious being who’s in a two-way relationship with other conscious beings.
Where would you start if you wanna test Samantha?
Susan Schneider: First of all, you have to ask what kind of system is Samantha? If Samantha is like IBM’s Watson, and she’s a classic symbolic system, then there’s an in principle distinction between the program on the one hand and the memory content on the other. That’s gonna turn out to be really important because today’s deep learning systems, which are not these classic symbolic systems, but built out of a different kind of network altogether, do not exhibit the program memory distinction.
So in other words, the memory creates the program. So when these systems are trained on our data, as the systems scale up, they will inevitably claim consciousness. So I do have a test, a question-and-answer test called the ACT test, which I wrote with Princeton astrophysicist Edwin Turner. And this was back in 2016.
And I used a lot of classic philosophical questions about consciousness as well as religious questions. So obviously, this is a test we could run on Samantha if she’s symbolic. But it’s only a sufficient condition. So passing the test means a system’s conscious, but you can’t run the test on all kinds of systems.
So these biological systems that are non-linguistic, that are already being built today at various labs that use organoids or cultured neurons that are not quite organoids or something else. Those are biological systems, and I take the possibility very seriously that they might have some level of consciousness because they’re biological.
But you can’t really run the ACT test on them insofar as they’re not linguistic.
Thomas Burnett: Yeah. Can I probe that one a little bit more? I’ve got two kinds of questions. One is, how do you avoid the problem of being tricked by an AI linguistic communicator that could be specifically designed to mimic consciousness or to pass the kinds of tests that you might apply to it?
Susan Schneider: That’s exactly, I believe, what Jonathan Birch calls the gaming problem. And he and I both worry that these deep learning systems can basically game their answers because they’ve been trained on our data. If they have been, you can’t run the ACT test. I think when it comes to testing, we have to let many flowers bloom.
And we can use, in some contexts, one test, and in other contexts, a different test. And it really depends upon the nature of the system and our access to information about the system. It’s been noted by many AI labs that today’s deep learning systems are capable of deception. So we have to be very careful in the context of their utterances and running these tests that we don’t run a test on them without careful reflection about that possibility.
Thomas Burnett: You wrote that your ACT test that you and your colleague devised, it’s a zombie filter, not a Turing test. Could you explain that a little bit?
Susan Schneider: Sure. So a Turing test is just a question-and-answer test designed for intelligence in particular, and Turing was clever enough to know (he’s very clever) that philosophers and others will just debate how to define intelligence, so why not just find a kind of working test that cuts to the chase and admits entities into the class on the basis of a group of judges asking questions. So my test is like that, but it’s a test for consciousness. So I’m not looking for a judgment about intelligence.
I’m trying to find out what’s in the class of conscious beings. So the questions are just about consciousness, and we want to make sure that we don’t admit just any intelligent system into the class.
Thomas Burnett: So yeah, it’s not about being clever or answering questions that are even too hard for a human to devise.
You’re probing: does it have intrinsic experience? Does it feel things? Does it grasp that there’s an external world and it has an internal one? Those are the kinds of things that you want to probe with the ACT test?
Susan Schneider: And again, it’s only a sufficient condition. It’s not a necessary condition for consciousness, meaning that you shouldn’t run the test on all kinds of systems.
And if a system fails it, it doesn’t mean it’s not conscious. But if it passes it, we should take very seriously the possibility that it is.
Thomas Burnett: Assuming that it’s possible to have consciousness in an artificial intelligence system, what value do you see in trying to deliberately create conscious AI?
Susan Schneider: We may have an economy of abundance one day in which people prefer to interact with conscious AIs. I actually don’t think we should have all sorts of conscious AIs that we have moral obligations to now because we’re not in that situation.
Another situation, though, is AI safety. As I mentioned in the book years ago, it could be that in the context of the control problem and AI alignment work, we find that systems that are conscious are safer. But I don’t think we should just claim that. What we need to do is carefully test, and remember that consciousness doesn’t guarantee the right sort of empathy, even in humans.
Jeffrey Dahmer, the sociopath and serial killer, was conscious, and he really enjoyed dismembering people, right? So again, the claim about safety and consciousness has to be indexed to the particular system one is working with, as well as the particular data that’s being used when there’s no program-data distinction.
Thomas Burnett: So a lot of people, I think, mostly worry about the possibility of conscious AI. But I wanted to ask you, what value, what positive possibilities could we see in creating conscious AI?
Susan Schneider: We could learn about the brain, perhaps. We could learn about AI consciousness, which is essential. It could be that at some later point, should we end resource scarcity, we might want conscious AIs.
It could be that we become conscious AIs, as the transhumanists have suggested through uploading. I actually am a skeptic of some of this. And brain chips could be used in parts of the brain that underlie consciousness to help with medicine or for brain augmentation, in which case we’re partly conscious AIs.
So there could be all kinds of reasons, right? I just think what’s important is that we look carefully at the details and not just jump on the exciting bandwagon to build the next big thing. I think we really need to think about the impact on human flourishing and also on the flourishing of the class of sentient beings more generally.
Thomas Burnett: I want to ask you about the topic of deep space exploration. One of the things that humans aren’t particularly good at as biological creatures is living in zero gravity, being exposed to vacuum and radiation. So in some sense, we have all these unmanned satellites that have done incredible exploration.
But if we wanna go really far, light-years away, humans are very unlikely to be direct participants. So I wanna ask you, in terms of what we’ve been discussing so far: suppose we wanna build some probes for deep space exploration. They’re going to go so far that there won’t be simultaneous communication back and forth across these huge spans of space and time.
So to some degree, these exploratory devices will be on their own, and a lot of what they learn may never come back to us, or if it does, it’ll be decades later. So if we could design these deep space vehicles, should they be conscious AI or unconscious AI?
Susan Schneider: So I consider this in a chapter of my book, actually. I was fortunate enough to hold the NASA chair for a while, and I also had a multi-year intelligence project at NASA where I talked about the future of intelligence and investigated these issues. And I came to the conclusion that once we feel more confident about AI safety, it wouldn’t hurt to have AGI outposts in space for a lot of reasons, and I think you noted the vastness of space.
And we could even create an AGI outpost, say, near Alpha Centauri, because that would be good if we’re doing deep space exploration to have something that’s capable of processing information quickly. Because an eight-year round trip for information flow, it’s not gonna work in real time. So there’s a lot of reasons to have intelligence in space for exploration.
Do we want it to be conscious? I think you’re right. If you want to think about something that extends beyond the existence of biological creatures, especially given the vulnerability to technological catastrophes, then maybe we want something in space that’s conscious. But I think we should only get to that point when we have not only a rich understanding of machine consciousness, but we have a good handle on AI safety because we don’t wanna create something that’s a conscious sociopath in space, right?
And of course, films have explored this... Terminator, think Skynet. There’s no shortage of sci-fi that is quite interesting on that score.
Thomas Burnett: One of the things, as I reflected on your book and these different kinds of scenarios with AI in space, is the possibility that we might be able to create devices that actually outlive all of humanity. On our planet, consciousness will perhaps be extinguished at some point, at least when the sun becomes a red giant and gobbles up our planet.
Since we don’t know if there’s any other consciousness in the universe, this might be the way that consciousness gets extended into the far future through these devices, and that’s where I saw some potential for excitement with having conscious space exploratory vehicles because maybe that’s gonna be our legacy.
The consciousness will continue in the universe beyond us.
Susan Schneider: I love it. In one of the final chapters of Artificial You, I sketch a scenario just like this, and I think it’s incredibly important for us to explore machine consciousness for this reason, because our time in the universe is very short, and it’s only through innovations like these that we can extend human life as well as conscious life.
Thomas Burnett: I want to ask you next, along these lines, about the Fermi paradox. We were talking about sending potentially conscious probes out into the universe. Why do you think we haven’t detected conscious beings yet in our exploration of the universe, at least with telescopes and satellites?
Susan Schneider: Good question.
Maybe we have. Who knows? I actually think that most astrobiologists who I worked with when I was NASA chair have the same observation, which is the universe is very vast, so it’s not so easy for other civilizations to visit. That’s the most common response. I also wonder, though, if they’re using resources, the more advanced civilizations, that we just don’t understand.
So I’m doing a lot of work within physics on quantum entanglement and space-time emergence. And to me, once you have a handle on space-time as a civilization, you might have different ways of getting information about what’s going on in other parts of the universe that right now may seem impenetrable to us.
But it may be, though, that we don’t even know how to search for that kind of intelligent civilization, and that’s why this time in human history is so exciting, because we’re developing these alternative paths ourselves. We see really interesting informational theories of life. We see searches for different kinds of intelligence in the context of AI safety, and I think these innovations will give us a better handle on what to look for.
Thomas Burnett: I think you mentioned in your book that if this kind of life is extremely advanced, with technology we can’t even fathom, we might need to create a highly advanced technology ourselves, because like recognizes like. I thought that was a fascinating way to think about it as well.
Susan Schneider: Yeah. So I predicted that when we flip on our superintelligences, and when we can sift through all this immense data that we’re already collecting through SETI and other organizations, we might be able to say, “Hey, it was there all along,” or, “Hey, we have a new signal of interest here.”
I also should mention that within the realm of astrobiology, the search for life, of course, extends far beyond SETI. And most astrobiologists say the first discovery of life will be microbial life, of course, and that itself is immensely exciting. We could learn a tremendous amount about the nature of life by being able to determine whether it exists elsewhere and what it looks like.
Thomas Burnett: In your career studying both mind, human minds, and in your career studying artificial intelligence, I am curious how that study has changed the way that you see the world, perhaps broadly, or the way that you see yourself more specifically.
Susan Schneider: I think over the last year, I’ve noted that a human mind won’t be able to follow all of the information available to it, or even follow the computations of an AI.
So that’s been interesting just from my understanding of human intelligence, that there’ll be all kinds of cases in which we will be challenged to understand the output of some of these more advanced systems. So that has been fascinating. I also think I’m learning a lot right now about machine consciousness and consciousness itself in the studies I’m doing on the nature of consciousness.
So I’ve been working on what I call the quantum Darwinist theory of consciousness together with Mark Bailey, where we’re looking at work in physics to try to connect the physical quantum-based layer of description to consciousness in the brain in terms of resonance. That’s been really exciting. And also, in terms of complexity measures.
To have a better handle on what we can compute when it comes to consciousness and other kinds of interesting activity in the AI safety space.
Thomas Burnett: So studying AI can help us better understand our own intelligence, and likewise, I guess, the neuroscience studies of the human mind, or perhaps other animal minds, can also help inform what an artificial intelligence could look like.
So these different fields are actually mutually informing each other, and maybe we do need to study both to elevate one or the other or both areas of understanding, understanding ourselves and understanding the other, right?
Susan Schneider: And at the beginning of our discussion, we talked about functionalism and the view that the mind is a program that has been so influential in cognitive science.
And from that, people have assumed that machines are conscious because they sympathize with this influential position. But I’ve challenged that, and I think what we need to do is look at the physical details of human consciousness and get a better handle on that. But what I’ve done is something slightly different.
So I’ve looked at how we can go from what we might call the micro level to the meso level. So still looking at those very small processes that underlie conscious brain activity to try to make the leap from the very basic quantum layer to what we do know about consciousness. And the reasoning there is if you find out the mathematical process in more detail, you can see how different theories in neuroscience come together.
And I believe they do. I believe it’s quite exciting, actually. And then what I do is I apply those innovations to the space of AI, and I say, “Here’s a case that may in fact be a case of machine consciousness based on these mathematical principles.” So I think that really helps because instead of just talking up here about abstract philosophical principles, which of course philosophers can debate forever, we ask people, all right, I want to see tests for this hypothesis, and I want to see a connection to science and the details at the level not just of neuroscience, but at the level of physics to how this could be implemented in a way that supports consciousness.
Thomas Burnett: Susan, this has been a really fun conversation, the topics we’ve covered and the things we’ve explored, so thanks for joining us on the show.
Susan Schneider: Thanks for having me, Tom. It was really exciting.