This conversation began with garden snails. I was reading a philosophy paper on the conjecture that simple-brained snails might be conscious. In 1974, Thomas Nagel famously asked, "What's it like to be a bat?" The philosophy paper followed Nagel's question, wondering if snails have some dim sense of self. Is there something it's like to be a snail? Which got me thinking about the explosive development of artificial intelligence. Could AIs become conscious?

I began to wonder if there could be something it鈥檚 like to be ... a robot.

And that brought to mind my favorite roboticist, Joshua Bongard, the Veinott Green and Gold Professor in the Department of Computer Science in 日韩无码's College of Engineering and Mathematical Sciences. He's been building world-leading, category-busting robots for decades, including his most recent collaboration to create Xenobots, the world's first reproducing robots—custom-built out of living frog cells. He also thinks deeply about technology, artificial intelligence, and cognition—and what this all means for the future of the human experiment. And what this all means is by no means a question that only lives in the halls of engineering. One of the great strengths of 日韩无码 is the way scholars and researchers reach out from their disciplinary homes to ask other scholars in radically different fields: what do you think? I knew that Bongard had had fruitful conversations with professor Randall Harp in 日韩无码's Department of Philosophy, a researcher who ponders the meaning of free will, teaches courses on dystopias, and asks questions about robots. And with Tina Escaja, University Distinguished Professor of Spanish in the Department of Romance Languages and Cultures, and director of 日韩无码's program in Gender, Sexuality, and Women's Studies. Escaja is a writer, artist, and pioneer in digital poetry whose category-defying creations include "Robopoem@s," five insect-like robots whose bodies are engraved with seven parts of a "poem@"—written in both Spanish and English, from the robot's point of view. I invited them to speak together, prompting them with several questions. When gathered in the gorgeous library of Alumni House, they took these questions and ran, adding many of their own. Here's a small sample of the free-wheeling, two-hour conversation, edited and condensed for clarity. It was a meeting of minds that kept returning to powerful questions, including this opening one: what is a robot?

[Photo: three professors sit around a coffee table chatting in a library]

Tina Escaja: This morning I asked Alexa to help me with this (I didn't have the opportunity to ask ChatGPT, but that would have been nice): what is a human and what is a robot? The answer was predictable; it was probably coming from Wikipedia. Alexa said a human is a species of mammal that comes from "homo sapiens"—which is "wise man." (Just by itself, there is a problem there because of gender construction!) And a robot is a machine—programmed through a computer and mostly related to functionality—according to Alexa—and less related to aesthetics and emotions. I thought that was interesting: a robot is explained by what it's not, how it's not like the human. I've been looking at robots that are geared toward emotional and aesthetic perspectives, thinking about robots that feel. The problem with binaries is that they're faulty by definition.

Randall Harp: Okay, I'll take a stab at this. I think about robots in the context of what humans do as agents. We make changes in the world. We reach out our limbs to implement changes, but we also plan for those changes—and think about the way we would like the world to be, right? Robots are attempts to artificially do those things. You might want robots to be able to accomplish a task: weld this door onto this car. But you also want them to be able to plan, which means figuring out which actions are needed and in what order. Suppose there are civilians trapped in rubble. "Robot, figure out what you need to do to get them out." Maybe the robot needs to cut a hole in a wall. We want them to plan autonomously. That's the second step in making robots: are they able to decide what they want the world to be? I would say a robot is an artificial agent, implementing changes in the world—and making plans for how they want the world to be. How they decide the way they want the world to be is where I start to get some concerns! Do we really want robots to decide the way they want the world to be?

Josh Bongard: That makes sense to me, Randall. For most people, the intuitive idea of a robot is a machine that acts on the world, that influences the world directly, compared to other machines—computers, airplanes—that indirectly affect the world, or at least affect the world with a human watching closely. But when you start to unpack that, what does this really mean? As a researcher, as a scientist, that's where things start to get interesting. Tina, you mentioned binary distinctions. Those start to fall apart once you dig down into what we mean by intelligent machines or robots.

Some dictionaries trace the historical roots of the word to the Czech "robota," which entered English through a play by Karel Čapek. And in that play—I won't give away the plot—the machines, the "robota," are actually slaves. There's a dramatic tension there. Lots of things have changed in the 102 years since that play was published, but these underlying issues remain: What are machines? What are humans? Are we machines? Are we something more? How closely do we want to keep an eye on our machines? Those questions and tensions have remained, but now they've become pressing because the robots are no longer in science fiction or in plays. They're here. And we have to decide: what do we want to do with these things?

Escaja: Josh, you said the word "artificial." I've been considering the question—another binary—what is organic and what is artificial? In your work with Xenobots these limits are being blurred. Even the concept of human/not human—that's a binary I question. We ask: what is a robot? The next question: what is a cyborg? This combination of artificial and organic makes us who we are. Some of us have in our bodies the artificial, machines and devices—and that doesn't make us less human. So that's the blur.

The literary imagination has often considered that technology is here to destroy humanity—when that technology achieves consciousness. I imagine just the opposite. I think of technology and robots as not necessarily only a tool, but as a way of interaction that is positive.

Harp: Tina, I'm interested in what you said: what is the artificial part doing? On the one hand, I can imagine a biological creature being turned into a robot. I could turn ants into robots if I can ensure that they only do the task that I want them to do when I want them to do it. What's happening to the ant is artificial; it's contrary to the way it would ordinarily act. Usually there's some thought that a robot doesn't have free will to decide what it's doing next. On the other hand, it's interesting to think about robots as autonomous—autonomous tools. They're not like a crane, which someone needs to operate. A robot is created to act like a human agent—without the human directly involved. That's the artificial part—it's something created for the purpose of independently doing the thing that we want it to do. Autonomous creation is important for being a robot.

Bongard: In the history of robotics, which goes back to the Second World War, there's been ongoing debate: what is a robot? I'd like to invite all three of us to pull back and think about the larger community of machines: robots, AI, Alexa, the stuff running on your phone, the stuff running in the drone. For most folks, it's the combination of all these technologies, and how quickly they're progressing, that is frightening or exciting or some combination of the two. We can talk about definitions of robots and cyborgs—but there are other questions: What can they do? What can't they do? What will they never be able to do? Only humans and animals and biological systems can do X and machines never will be able to do X. And then the deepest question, which moves us into philosophy, is: what is it, this X? What exactly is this thing?

Escaja: I could start to answer—as a poet. Alan Turing's famous test tells us what is human and what is a machine. A test could also tell our robot what is not a human! I have a CAPTCHA poem. What is CAPTCHA? It's a tool to tell humans and machines apart—a "Completely Automated Public Turing test to tell Computers and Humans Apart." You see them on websites. I transformed that during the Covid-19 pandemic. I created a CAPTCHA poem, which is a "Completely Automated Public test to Tie Computers and Humans as Allies." A public test to tie computers and humans as allies—it's a capture. It's in the direction that you were mentioning, Josh: what programming makes us human? Go back to the binary. A CAPTCHA could tell a bot what they are not—to recognize themselves as what they are, which is different from a human. In a CAPTCHA test, a human needs to recognize, say, taxi cabs and traffic lights to be recognized as human. So here we are in a test that asks particular bots to recognize what they are not. I transform that into a poem, a poem related to the theme of what makes us human? What makes us machines? Who is the creature, who is the creator? Who creates what? Who makes the decisions about us?
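
For readers curious about the mechanics behind Escaja's wordplay, here is a minimal sketch of the image-grid CAPTCHA she describes, in Python with invented tile data (the grid, labels, and function names are all hypothetical): the site treats correctly labeling the taxis and traffic lights as a proxy for being human.

```python
# A toy image-grid CAPTCHA. The "tiles" are labeled stand-ins for real
# images; all data here is invented for illustration.
GRID = {1: "taxi", 2: "tree", 3: "traffic light", 4: "taxi", 5: "storefront"}

def captcha_passes(target, selected):
    """Return True if the user selected exactly the tiles showing the
    target object: the site's proxy for 'this is a human.'"""
    correct = {tile for tile, label in GRID.items() if label == target}
    return selected == correct

# A human who spots both taxis passes; a bot that guesses does not.
print(captcha_passes("taxi", {1, 4}))  # True
print(captcha_passes("taxi", {2}))     # False
```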

I'm talking about poetry. Is it possible for a machine to write a poem? Is a poem the epitome of humanity? The answer is yes. Yes and no, of course, because here we are. That's why we have a debate about what is a robot, what is a cyborg鈥攂ecause we don't know the answers, and we want to get closer to the answer, but we're not going to get there, to the truth.

Over centuries, the sonnet developed as a very specific set of rhymes, and it's based on skill. In ways, it's a program, it's an algorithm. So can this be replicated? Yes, probably. What are the limits of robots? What is it they cannot do—eventually? Maybe now, robots are primarily labor, and it's scary. The origin of the word "robot," which is exploitation and labor and slavery, is scary. But in theory, yes, they can write a sonnet. So what is the soul? What else can they do that we can also do? Where are the limits? What do you think, philosopher?

[Photo: one professor laughs while another talks]

Harp: I'm always daunted by these conversations because you guys know more about philosophy than I know about your fields! It's always humbling to have these conversations with you, but I really enjoy it.

You brought up Turing's paper, Tina. Turing was trying to understand what intelligence means in the first place. He said if a machine can fool somebody who is intelligent into thinking that it's another intelligent human being, then it's passed the test. In the 1950s, Turing wondered: is there another test we can have for what it means to be intelligent?

Now comes artificial intelligence. What is our marker for when we think that these systems—robots, AI—have passed a threshold into being recognizably intelligent? The answer always is going to be measured against who we recognize ourselves to be right now. Look at the "large language models" that underlie technologies like ChatGPT. Essentially, they're just a way of finding what is, statistically, the most likely word or phrase to follow from a prompt. Obviously there are some concerns there. Do we want our artificial systems to look like the average human being? The average human being might have all sorts of—let's put this delicately ... problems.
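
Harp's one-line description of a language model can be made concrete. Below is a minimal sketch, in Python with invented probabilities, of "pick the statistically most likely next word." A real model estimates these numbers from billions of sentences rather than a hand-written table; the table and function here are purely illustrative.

```python
import random

# Invented next-word probabilities, standing in for what a real language
# model learns from billions of sentences. Every number here is made up.
NEXT_WORD = {
    ("my", "doctor"): {"is": 0.9, "said": 0.1},
    ("doctor", "is"): {"a": 0.8, "busy": 0.2},
    ("is", "a"): {"man": 0.6, "woman": 0.3, "robot": 0.1},
}

def continue_text(words, steps=3):
    """Extend the text by sampling each next word in proportion to its
    estimated probability, given the two words before it."""
    words = list(words)
    for _ in range(steps):
        key = tuple(words[-2:])
        if key not in NEXT_WORD:
            break  # no statistics for this context; stop
        options = NEXT_WORD[key]
        words.append(random.choices(list(options), weights=list(options.values()))[0])
    return " ".join(words)

print(continue_text(["my", "doctor"]))  # most often: "my doctor is a man"
```

The model has no notion of what a doctor is; it only has the frequencies, which is why its output inherits the biases of its training text, as Harp's next example shows.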

Bongard: Let's say peccadillos.

Harp: Peccadillos is probably better! If we look at technologies like ChatGPT, they undergo refining on the backend to make sure that they're not actually producing the statistically most likely thing that might be said, because those are often terrible things. It's like, okay, let's take the statistically likely thing so long as it stays inside certain guardrails. Let's not make it be super racist, even though there's lots of super racist stuff online. And this is going back to that question: do we want robots to start deciding? Right now, given the history of the United States, it might be that there are certain professions in which, for example, members of racialized minorities, Black people, or women are underrepresented. And so then, if you ask a machine to take the statistical average of saying, "My doctor is a blank," it might very well say, "Oh, my doctor is a man."
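
One simple way to picture the "guardrails" Harp mentions is rejection sampling: keep drawing likely continuations, and throw away any that fail a safety check. Production systems mostly build guardrails in during training (for example, with human feedback), so this hypothetical filter only illustrates the logic.

```python
import random

def sample_with_guardrails(prompt, sample_fn, is_allowed, max_tries=10):
    """Draw statistically likely continuations, but reject any that fail
    the guardrail check. sample_fn and is_allowed are stand-ins for a
    real model and a real safety classifier."""
    for _ in range(max_tries):
        candidate = sample_fn(prompt)
        if is_allowed(candidate):
            return candidate
    return "Sorry, I can't respond to that."  # fallback when nothing passes

# Toy guardrail: block a tiny, purely illustrative list of flagged words.
FLAGGED = {"slur", "threat"}

def is_allowed(text):
    return not any(word in text.lower() for word in FLAGGED)

# Hypothetical fickle sampler; the guardrail keeps the bad draw from escaping.
replies = ["a kind reply", "a threat", "a helpful answer"]
print(sample_with_guardrails("hello", lambda p: random.choice(replies), is_allowed))
```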

Do you want the machines to be able to imagine a better world? Because right now the machines are not really able to imagine a better world. Then the question becomes: are AIs useful tools for us if they can't imagine a better world?

Bongard: This is really interesting, this idea of asking the machines, or inviting them, to help us imagine and possibly create a better world. This is the big picture, and this is a discussion also about research. As this technology develops—and some of us have a hand in that—what is it that we want these slaves, these machines, these things that are "them," and we are "us," what is it exactly that we want them to do? And how much control do we have? Having worked in robotics, one of the first things I teach my students is the concept of "perverse instantiation," which is that the machines do exactly what we ask them to do.

Train on every word out there on the internet, and use that to hold an intelligent conversation—that's what we asked ChatGPT to do. It did exactly what we asked it to do, but it did it perversely. In retrospect, we, the humans, are actually the ones who made the mistake. We say, "Oh, that's not quite what we meant." You mentioned guardrails—"so please do this, but don't do it in this way, and also don't do it in this way." I tell my students, robots are like teenagers.

Escaja: Yes, that's funny. For ChatGPT, the problem is the "P," which is "pre-trained." What is our level of constructing the answer? At the same time, I'm very happy that ChatGPT is providing more than simple combinations. In that sense, it creates its own.

Bongard: Teenagers and robots will do what you want them to do, but they know how to do it in the way that you didn't want them to do it. You can get on ChatGPT today and play around with it, and you'll see perverse instantiation start to emerge immediately, which is hilarious. But if you're sitting in an autonomous car on a California freeway and the car starts to perversely instantiate "Get me to my destination as fast as possible"—now it's no longer funny. It's a matter of life and death. A few weeks ago, there was an autonomous car that slammed on the brakes in a tunnel. Luckily no one was seriously hurt. But that's what's coming. We have machines that actually can do what we want. We are the problem. We can't specify well enough what exactly it is we want them to do—and not do. So how do we move forward with a technology like that? I think there's a lot of research and scholarship that needs to happen and happen quickly, because this is coming whether we want it or not. It cannot be stopped.
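
Bongard's freeway example can be written down as an objective function. In this toy sketch (all plans and numbers are invented), a planner told only to minimize travel time obediently picks the reckless route; the fix is ours to make, by stating in the objective what we actually want.

```python
# A toy "autonomous car" planner. Each plan's minutes and risk score are
# invented. Perverse instantiation: asked only for the fastest plan, the
# optimizer obeys exactly and picks the reckless one.
plans = [
    {"name": "speed through tunnel", "minutes": 12, "risk": 0.30},
    {"name": "main road",            "minutes": 18, "risk": 0.02},
    {"name": "side streets",         "minutes": 25, "risk": 0.01},
]

fastest = min(plans, key=lambda p: p["minutes"])
print(fastest["name"])  # "speed through tunnel" -- exactly what we asked for

# The fix is on us, not the machine: say what we actually want by
# charging for risk in the objective.
def cost(plan, risk_weight=100.0):
    return plan["minutes"] + risk_weight * plan["risk"]

print(min(plans, key=cost)["name"])  # "main road"
```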

Harp: Can I ask about that? I'm not ordinarily a booster for our transhuman future, right? But if we want to figure out a collaborative way between us and the systems we create for exploring what the world could be鈥攖hat requires some kind of imagination to say: we're not just trying to replicate the way human beings are right now, or the way biological systems are right now. We want to imagine a different way to be.

Both of you guys are working to find a new thing—which has not been done before. And I'm wondering then: how do we evaluate? If you replicate something that already exists, it's hard enough to understand what's going to happen if you release this thing into the wild. Say I've programmed this machine to value saving lives. And now it's out there—and it's like, "there are a lot more bacteria than human beings, so I'm just going to save all the bacteria and kill off the human beings!" And then we're stuck with that problem you described, Josh, saying, "it's doing what we told it to do, but that's not what we wanted!" But when we're dealing with something which hasn't existed before—how do we know what the consequences are going to be downstream? Whether we're talking about new life forms—or new kinds of expressions, new kinds of normative structures. In poetry, Tina, you're finding new ways to value things, new ways to describe and capture and promote and create the values that inhere in things in the world. How do we do that? Can we ever predict what's going to happen next in these new things we're creating? Or is it always just going to be: let's just try it out and hope that we don't mess it up?

Escaja: I'm thinking about an autonomous car heading for an accident. So it has to make a decision about who to crash into. The car might consider: who has more options to be alive? So it will go for an older person instead of a child. Maybe that's the right way to go. A human might think, "that's my mom, so she's going to live; I don't know this little kid, so he's going to die." That's a way of having machines educate us, going beyond our preferences and personal algorithms. So is the machine making decisions that are less flawed? And can we learn from that? Maybe.

My robots—which are poets and poems at the same time—are based upon a poem that questions the anxiety of what makes us humans. The poem is from the point of view of the robots. The robot is saying, "why are you scared of me?" The last line of this poem rephrases Genesis in the Bible—which is another text to think about, perhaps. I say, "according to your likeness, my image." That's the end of this—in Spanish and in English. So that's the question: what is the model? What is the original?

I'm a feminist, and I know that gender is a performance without an original model. If the model is the human model, then it's biased. The way that we are constructing society is based on bias and clear hierarchies and concepts that we are programmed to follow. So that's the problem: if the model is the human for artificial intelligence, then we are in trouble. So how can we go beyond that programming? That will have to start with ourselves as humans! We're very arrogant. Maybe robots have an ethical point of departure that is more pure than our perspective, because we're programmed to be biased and unfair.

[Photo: one professor holds a coffee cup while talking]

Bongard: On this issue of bias, my hope is that this is one place where these technologies can be useful—to not make machines in our image, but to ask the machines to build themselves in a way that is as different from us as possible, a way that complements us as a society.

It's taken us a really long time, and we're still not very good at it, but we're realizing that diverse perspectives are usually a good thing. So if we want a diversity of perspectives, what do we have on hand? We have male/female/transgender; black/white; old/young; human/non-human. And now we have this emerging, potential new member of our global community: the machines. And unlike other groups, which already exist, this new group is pliable. We might have some ability to shape it—not necessarily into our image, but into something that provides a unique perspective into what it means to survive and thrive on this planet, trying not to harm others. That's very optimistic. I have no idea whether that will happen.

Randall was asking: where is this all going to go? I don't know. I don't think anyone knows. And anyone who says they know—I would suggest being suspicious of them!

Escaja: I think this conversation is beautiful because the development of a posthuman future is something that is happening right now. The idea of posthumanism has several branches. One is the posthuman where we are less human: we are more intertwined with machines, we are machines, and we are going to fix ourselves. In that vision, we are going to live forever, in a way.

But also there is a posthuman from the point of view, for example, of Donna Haraway or Rosi Braidotti—who are talking about the intelligence of animals. We cannot, in this discussion, forget about that. Recently, Braidotti and Haraway have been talking about this need to reconsider ourselves in the language of brotherhood and sisterhood with our kin—the other animals. This new perspective is happening now, this discussion is happening now: who are we in relation to others? And these others are also us—us being animals, us being bots. So in that sense, they are also related. I want to keep the concept "posthuman" at the center of this discussion. What is posthuman?

Harp: Wow, great question, Tina. You mentioned autonomous vehicles. Even if we can agree on the principles that humanity holds for what this vehicle should do—that still doesn't answer the question of what the vehicle should actually do. What the vehicle should actually do seems to require that we agree on what values this system is trying to promote. Just understanding what people think—right now—doesn't answer the question. It's not a simple step from saying, "this is what people think" to "this is what people should think." We can recall many scenarios in human history where lots of people thought something—that was not the right thing to think. Even if you take a survey, and everyone's like, "yeah, this is cool," it's not cool. And the reason we know it's not necessarily cool is because human beings can engage in debates over values with other human beings. We can collectively create for ourselves the values we think ought to be guiding our choices. Ideally, you allow all human beings and non-human animals to have this discussion on what the values should be.

But then the next question: is it ever going to be the case that the machines are also in that sphere of moral considerability? Do we need to be working out what the values should be—not just with other humans, not just with non-human animals—but also with the machines? Or is it that the machines are just there to implement the values that we have already, antecedently, agreed upon? And the machines are bad when they're not promoting the values that we purport to endorse?

I don't know the answer to that question. That's the good thing about being a philosopher: you can just ask the questions and then walk out the door! But I wonder about that and I worry about that.

Bongard: I want to bring in time. Things are happening—fast. Technology, in general, holds up a mirror to our values. Intelligent technology, as a united constellation, is pushing us to get straight what it is that we want. As is the other big change in our world: the degradation of the environment, climate change, the energy crisis. We're destroying the planet. What planet do we want to leave for the next generation?

These two big forces—there are others—are forcing us to make decisions, make value judgments—in a hurry. And this is not something, historically, that humans have been good at. So what do we do with the fact that there are these drivers out there in the world—and the clock is ticking? We only have so much time to work all this out. In terms of our research and our scholarship, what can we contribute to that urgent discussion about what it is that we want?

Escaja: The fact is that the artificial is getting more prominent and the organic is diminishing—given the Anthropocene. That's why we're having this discussion. At the same time, this discussion is itself entangled. Whether technology will save us, or will destroy us, is not clear. The discourse of posthumanism is a good thing, that framing of the Anthropocene. So, yes, it's scary at this point. And it's also beautiful in that we are moving our conversation from being scared of machines and robots—to being scared of humans. So that's, maybe, a priority of imagination. Where do we concentrate our effort? Into creating an epistemology of robotics? Or just into thinking about how we can save ourselves as humans? So maybe it is just a little bit rhetorical, even our conversation here. Going back to your comments, Randall, I'm very concerned about the question: what is the model, the original model? It doesn't exist. What's the control? Are we really in control of our opinions? Of our values? I don't think so. I think we are programmed—very strongly programmed. And I use that vocabulary on purpose. So it's really difficult.

And that's the way it should be: to have an alliance with machines, with robots, to create a set of values that are morally better. But how do we teach that? And based on what? We don't have any control, really, of ourselves and our opinions. We are based on the opinions being taught to us. So that's a gray area where I'm a bit pessimistic—even though I'm very optimistic about the possibilities of mechanics and robotics. So yes, I love your direction, Josh, in this conversation—because I think that the problem is humans, not necessarily robots.

[Photo: one professor holds his hand up to his mouth in thought]

Harp: Where would you want those values to come from, Tina, if not from us in our environment? You say we're programmed to respond to certain things in certain ways, which poses all these problems with potential bias, but also you're pessimistic about this enterprise of constructing these values—because we just get these values from our environment! Where else would you want them to come from? That should be a cause for optimism: we get these values from our environment, and our environment is something that we have some control over. We can change our environment and therefore change our values in positive ways. So isn't this one step on the path towards a better future?

Escaja: I don't know. Language, for example, expresses a lot of these values and these values are limited and biased. It's not trivial, the fact that we are completely tinted. So how we go beyond that is the challenge. But I want to remain optimistic and try to find a way for an alliance with robots and machines and humans to go beyond what we know. But how do we go beyond what we know to create a new set of values?

Bongard: I think this is one place where machines are already helpful. Hopefully, we are learning that we are flawed, we are imperfect, we are finite. There's only so much we can do. And we exist on a planet where things are not going so well for us. You can get on ChatGPT today and you can share your thinking about a topic with ChatGPT. My feelings about climate change are so and so, my feelings about racial justice are so and so. Here's my thinking about it. And then you can end with, "what am I missing?" And the machine, although it is also imperfect in different ways—this is one thing that machines are good at—they're encyclopedic. They have lots of stuff. They might prompt you—again, this idea of machines challenging humans—and say, "well, you didn't mention X, Y, and Z. What do you think about that?"

This is where I think some of the positive aspects of this technology begin: they literally or metaphorically converse with you. It's not master/slave. It's not the human saying, "robot, fix the car door." It's turning into more of a conversation, which is more complicated and doesn't mean everything's going to turn out well for us—whatever we mean by "us." But the fact that we can have a conversation—we can invite the other imperfect thing into the conversation and invite it to try and complement our biases, our limitations—that's one area of this technology that gives me some hope.

Harp: OK, but are we getting anything new out of that conversation? If I ask ChatGPT, "OK, let's talk about free will," is it going to tell us anything we didn't already know?

Bongard: Randall, philosophers have been asking for thousands of years: What is free will? What is intelligence? What is consciousness? What is human nature? Has what's happened with machines in the last few decades changed the deep conversation going on in philosophy? Or is it a distraction, a sideshow?

[Photo: three professors sitting around a table reflected in a mirror]

Harp: I don't think machines have changed the conversation in a fundamental way. With these new generative systems and large language models, we could be close to having the output from a machine be very similar to that of an "ordinary" human being within not many years. But that doesn't affect the fundamental question: what more is there to being conscious than just the capacity to produce something which looks like human conversation?

An engineer at Google became convinced that one of their large language models was conscious—and that we should respect the rights of this system. And Google quickly said "no, this person is wrong. This system is definitely not conscious." And I thought... definitely? It's probably not. It's almost certainly not, but... definitely? We are not in the position right now to say it is—definitely—not conscious, because we don't have a good way of evaluating the claim.

We understand how these systems work, and so we, right now, believe that simply producing a convincing representation of intelligent conversation is not sufficient for intelligence. One paper, by Bender, Gebru, and others, calls some of these large language models "stochastic parrots": they're just parroting back what they're receiving from their input in a stochastic, or statistically determined, way.
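
The "stochastic parrot" image has a very small working model: a bigram generator that can only recombine word pairs it has already seen. Here is a sketch in Python, with a one-line stand-in for the web-scale corpora real systems ingest.

```python
import random
from collections import defaultdict

# A minimal "stochastic parrot": a bigram model that can only recombine
# word pairs it has already seen. This one-line corpus is a stand-in for
# the web-scale text real models ingest.
corpus = "the robot writes a poem and the poet reads the poem aloud".split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)  # record every observed next word

word, output = "the", ["the"]
for _ in range(8):
    if word not in follows:
        break  # nothing ever followed this word in training
    word = random.choice(follows[word])  # statistically determined parroting
    output.append(word)
print(" ".join(output))  # e.g. "the poem and the robot writes a poem"
```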

That's all that's happening right now. And if something is just parroting back words, based on their statistical likelihood, that is not sufficient for consciousness. But what else do we need, or do we have, for consciousness? Our idea of consciousness is tied in with our capacities as an agent. Consciousness seems to require capacities to monitor what's going on within the system. It seems to require unity of states in the system over time鈥攜ou need to have something like a "stable personality." Right now, with ChatGPT you can ask it a question; two seconds later, if you ask it another question, you may get the opposite response.

There's no consistency across these systems right now, because that's not what they were designed to do. But that may just be a design issue. You could program it to have more stability and unity of personality. Is that getting closer to consciousness? You can certainly build self-representations into a system. You can also build them into systems that can influence the environment; you can make them more robust agents. Is that getting closer to consciousness? What else do we need for consciousness鈥攐ther than all of these different functional capacities?
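
Harp's point about stability can be made mechanical. Here is a minimal sketch, under the (loud) assumption that "unity of states over time" just means not contradicting your own past answers; whether anything like this gets closer to consciousness is exactly his open question.

```python
import random

class ConsistentAgent:
    """A toy agent with one ingredient of Harp's 'unity of states over
    time': it records what it has said and refuses to contradict itself.
    answer_fn stands in for an underlying, possibly fickle, model."""

    def __init__(self, answer_fn):
        self.answer_fn = answer_fn
        self.commitments = {}  # question -> the first answer it gave

    def ask(self, question):
        if question in self.commitments:
            return self.commitments[question]  # hold the earlier position
        answer = self.answer_fn(question)
        self.commitments[question] = answer
        return answer

# Even if the underlying model flip-flops, the agent's answers are stable.
agent = ConsistentAgent(lambda q: random.choice(["yes", "no"]))
first = agent.ask("Do you like poetry?")
assert agent.ask("Do you like poetry?") == first
```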

Some philosophers will say there needs to be some kind of "what-is-it-likeness." There's something it's like to be a human being. There's something it's like to be me right now sitting in this room, seeing the colors outside; I can feel the breeze; I can soak up the history of the room with the wooden paneling. Okay, I can get all of that. There's something it's like to be a bat, to use Thomas Nagel's expression. We can agree there's something it's like to be a bat, which is very different from what it's like to be me or an octopus. There's something it's like to be Tina or Josh, which is very different from what it's like to be me. Is there something it's like to be a large language model or an AI—right now? It's not clear that there is. And how would we measure that? For people who are skeptical, it's a big hurdle imagining artificial systems having that extra bit of understanding—or whatever it is that we think that previous artificial systems do not have—that makes for consciousness. I don't know what evidence would convince people to get over that last hurdle, but it would need to be pretty strong.

Escaja: Wow. What is consciousness? That's a big question. In Blade Runner, the "replicants"—they're simulacra—they're imitations. One way to understand—very simplistically—consciousness, in that story, was the will to survive. They had a life span of four years, and they didn't want to die. You remember that? So that's one concept of consciousness: not wanting to die—and wanting to reproduce. That's a basic feature of consciousness that goes with animals too. Are animals intelligent? Of course they are. They have their own language. And now we are starting to understand. It's not unimaginable to think of a machine having the consciousness of not wanting to be terminated. And their reproduction could be like a Xenobot. So now here we are. It's beautiful to have a conversation like this because it's humbling. Our arrogance has been proven through our dismissal—through the Anthropocene—of animal intelligence. Now that's being questioned. Maybe we need to consult with robots and with animals to save ourselves and to establish an integration of any consciousness.

Bongard: I hear a lot of questions coming to the surface from all three of us—and not a lot of answers. Which may be, for the readers of 日韩无码 Magazine, a little frustrating! But we're also researchers. My grad students eventually will ask the question: what is research? And the best answer I have so far is: coming up with good questions. The three of us work in very different areas. One thread that unites us is thinking about the right kinds of questions. Questions that matter to other people. Deep questions that have been asked for a very long time, and that we don't have good answers for. Personally, it's great if we ask a question and we get an answer. But that's rare in science and engineering.

Harp: As a philosopher, if we answer questions, we're putting ourselves out of a job!

Philosophy is where the hard questions get left over. I guess that sounds pessimistic, but, no, I actually think there's value there. What I try to do in my research is make sure that knowledge across different fields is actually consistent and coherent. So let's understand how roboticists are using concepts of life and consciousness, and ethics. And let's ask how poets use ideas of creativity and value and agency. And let's make sure we're all talking about the same thing.

Now we have these new artificial systems and that puts additional pressure on all these ways we thought we understood the way the world works. Does our understanding of what's required to be intelligent need to be changed now?

To have rights, or moral obligations to a thing, requires that thing have certain capacities. Does that mean that machines have rights鈥攐r will soon? That we have obligations towards them? Does that still hang together? What does that mean? Do we have an obligation to protect a robot's existence? Would it be important to program these machines to promote their own existence or not? What should we be doing with these machines and how does that tie with other things we care about? These are important questions.