HIGHLIGHTS
Asst. Professor James McLurkin from Rice University is a lifelong robotics fan. He shares his thoughts on the future of voice-based assistants, the possibility of a humanoid butler walking around in our homes, and the moral implications of sentient bots.
James McLurkin – Assistant Professor, Department of Computer Science, Rice University – is a lifelong robotics fan. We had a great conversation with him last month, on his maiden trip to India, about all things robotics. What's more, we also tickled his fancy and got his thoughts on the future of voice-based assistants, machine learning, the possibility of a humanoid butler walking around in our homes, and the moral implications of creating sentient robots. What if Skynet truly comes to pass? Excerpts from our exclusive interview:
Q) We know you’re an expert in the field of robotics, but tell us a bit about how you got drawn into this subject. What sparked your interest in it as a kid?
JM: For me, robotics is the culmination of a lifetime spent building better toys. When I was younger, it was cardboard boxes and model trains, and as I got older, radio-controlled cars, video games, and lots of LEGO. Ultimately, if you put that much technology into a 15- or 16-year-old, you move towards building robots – that was the most natural inclination for me. I still like to think of myself as building better toys, toys that everyone can use. So, robots for everybody, yes.
Q) What sort of robots do you build and study?
JM: Right now, we have four different robots split into two different types. This particular robot was built at Rice University – in collaboration with our students – and the goal was to have research-grade features and qualities while keeping it as cheap as possible to build. The ultimate goal of my work is to understand distributed algorithms for multi-robot systems. So if somebody gave me 10,000 robots, I need to figure out how to put them together, how to program them, how to get them to do tasks as a group.
And we see examples of this all over. Ants, bees, termites, wasps – all of these insect communities do very complicated tasks as groups, tasks that individuals can't do by themselves. We don't want to copy nature, because nature's far too sophisticated for us to copy, but to understand the algorithmic implications of these tasks – what's the input, what's the software, and what's the output of these tasks that nature runs, so to speak. There are various things insects do in their lifetimes that would be useful to us if we could get robots to do them, too.
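To make that concrete, here is a minimal, hypothetical sketch – not Prof. McLurkin's actual research code – of one classic multi-robot rule: each robot looks only at the neighbours it can sense and steps away from their local centroid, so a tight cluster spreads out across an area with no central controller. The sensing radius and step size below are assumed values.

```python
import random

SENSING_RADIUS = 2.0   # how far a robot can "see" its neighbours (assumed value)
STEP_SIZE = 0.1        # distance moved per update (assumed value)

def disperse_step(positions):
    """One synchronous update: every robot reacts only to local information."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        # Find neighbours within sensing range (local information only).
        neighbours = [(nx, ny) for j, (nx, ny) in enumerate(positions)
                      if j != i and (nx - x) ** 2 + (ny - y) ** 2 <= SENSING_RADIUS ** 2]
        if not neighbours:
            # No neighbours in range: wander randomly.
            dx, dy = random.uniform(-1, 1), random.uniform(-1, 1)
        else:
            # Move directly away from the centroid of the visible neighbours.
            cx = sum(n[0] for n in neighbours) / len(neighbours)
            cy = sum(n[1] for n in neighbours) / len(neighbours)
            dx, dy = x - cx, y - cy
        norm = (dx ** 2 + dy ** 2) ** 0.5 or 1.0
        new_positions.append((x + STEP_SIZE * dx / norm, y + STEP_SIZE * dy / norm))
    return new_positions

# Usage: start 50 robots in a tight cluster and let the swarm spread out.
robots = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(50)]
for _ in range(200):
    robots = disperse_step(robots)
xs = [x for x, _ in robots]
print(f"Swarm now spans {max(xs) - min(xs):.1f} units in x (started within 2.0)")
```

The point of the sketch is locality: every robot runs the same tiny program on purely local information, yet a group-level behaviour (dispersion) emerges, which is the flavour of distributed algorithm his answer describes.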
Q) How will your technology research and robots be implemented in the real world?
JM: As robots become universal, more practical, cheap and accessible, it will become common to see groups of robots, rather than individual robots, working on a task. I take that concept to the extreme, where I'm not worried about 10-20 robots but 10,000-20,000 robots. Enough robots to build buildings, to reclaim land, to search the oceans for giant squid and other unexplored creatures. Right now we have two robots on Mars; what if we had 2,000? We could definitely search the planet far more efficiently.
Q) Do you think there’s a lack of awareness of the impact of robots on our lives among the general public?
JM: We as humans have two views of robots: the classic view of the robot butler, and then the giant killer robots (Terminator bots). Most of us won't see either of those possibilities for a very long time, and hopefully we'll never get to the giant-killer-robot stage. But we're going to see things like Google's self-driving robot cars, which are a hot topic right now, and self-driving is a problem we'll see solved within our lifetime. But robots are all around us, in a way. I flew here in what's essentially a very large robot called a Boeing 777, and my dishwasher is robotic. That inherent need to sense, compute, and act in the physical world without human intervention is one of the keys to robotics.
The other area we're working on actively as a community is artificial intelligence (AI). In the example of Google's self-driving car, there are basic things the car needs to understand, but it doesn't need to be very, very smart. It does need to be careful and good at driving, but it doesn't need to be creative or to draw inferences from experiences that are new and novel. Those are things we just don't know how to do yet. The type of intelligence we will be able to achieve is one that's given a large amount of data and analyses that data to find the best possible solution to a problem. But there's this whole other aspect of intelligence – new thoughts, new ideas, creativity – that we don't even fully understand in ourselves yet, much less know how to get into a robot.
Q) Do you think there’s a lack of participation when it comes to academic research concerning the field of robotics? Not enough students interested in robotics?
JM: When it comes to academic research, I see robotics as a good way to attract students to computer science, mechanical engineering, electrical engineering, or technology in general. There aren't as many people as I'd like, but the number of students taking robotics is increasing. What excites me the most right now is the Maker Movement, this huge DIY resurgence, which is helping the robotics cause. When I started teaching, I'd get students who didn't know how to use a screwdriver; now I work with students who tell me that the chips we're working with in the labs are out of date. This is very encouraging from an academic perspective – to see a cultural shift that's making it cool to do technology. Technical interest in the community at large is also going up, which is an encouraging trend. It's a very exciting time in the field of robotics right now.
Q) What’s your impression of Indian students?
JM: I'm biased, because the students I meet are always the top students from India, and I'm always impressed with the Indian students I get to interact with. My perception of the culture in Asia, and India in particular, is that education is respected here. There's much more desire to learn, and the social cost of being interested in technology is lower. Being teased and ridiculed in the locker room for playing with robots or video games or studying maths is still a problem in the United States. Kids reaching middle school, at the age of 15 or 16, face a choice between doing something they enjoy – building mechanical robots, studying software or electronics – and doing something to look cool. Cool wins more than it ought to right now. We want to tell students about Geek Chic: you can do both. You can be cool, you can do technology, and the two things don't need to be mutually exclusive.
Q) Our digital age has evolved to include machines that were once off the grid but are now connecting to the internet and becoming smart. How do you see the field of robotics or AI adjusting to this shifting landscape?
JM: The pervasiveness of technology, with smart machines and the internet of things, is definitely advancing rapidly. The simultaneous growth of machine learning (which is different from AI) and predictive analysis of behaviour – where the computer can look at what you've been doing and take a good guess at what you'll do next – is already a reality, in recommendation engines like Amazon's and those of online music-streaming services, for example. We will see this behaviour increasingly, and we might not even notice it. Phones will get smarter, cars will get smarter, your refrigerator will send you an email when you're running short of milk, and these things will happen bit by bit. The first smart refrigerator will be heralded as a breakthrough as RFID-tagged food becomes the norm in our houses, there will be a few media stories around it, and then it will fade. We'll just get used to it.
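As a toy illustration of the kind of pattern mining he describes – purely illustrative, not any vendor's actual recommendation system, and with made-up activity logs – you can count which action tends to follow which across users' histories and then guess the most likely next action:

```python
from collections import Counter, defaultdict

def train(histories):
    """Count action-to-next-action transitions across many users' histories."""
    transitions = defaultdict(Counter)
    for history in histories:
        for current, nxt in zip(history, history[1:]):
            transitions[current][nxt] += 1
    return transitions

def predict_next(transitions, last_action):
    """Return the most frequently observed follow-up to the last action, if any."""
    followers = transitions.get(last_action)
    return followers.most_common(1)[0][0] if followers else None

# Hypothetical usage with invented daily-routine logs:
logs = [["wake", "coffee", "news", "commute"],
        ["wake", "coffee", "email", "commute"],
        ["wake", "coffee", "news", "gym"]]
model = train(logs)
print(predict_next(model, "coffee"))  # -> "news" (seen twice vs. "email" once)
```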
Q) But is there an inherent danger of things getting out of hand and robots taking over the world?
JM: There are two reasons I'm not worried about that possibility. The first is a philosophical argument: if we create this thing, a sentient, self-aware robot, why would it want to exterminate its creators? That's not logical, assuming we create something that's compassionate and caring, something that isn't sadistic in nature. The second reason is that there are currently no robots that can tie shoelaces, and no robots that can safely cross streets. You can't take over the world if you trip over your own shoelaces.
Q) What is the future of voice-based smart assistants found on devices – like Apple's Siri, or the voice-based AI portrayed in the film Her? How far away are we from that kind of voice-enabled AI that almost has a personality of its own?
JM: So there are two questions there. How far away are we from building a true AI, a device that's sentient and has all the hopes, dreams, and wishes that we have? And how far away are we from building something that can fool us most of the time? To answer the first question, I simply don't know. That's a very long way away; I don't think we'll see it in any of our lifetimes, or even our children's lifetimes. The answer to the second question is that we'll be able to fool ourselves pretty well, pretty soon. We will be convinced our phone really does understand our schedule with a deep level of intelligence. But if we asked the same phone to order flowers for Valentine's Day, that might not work so well – especially choosing between different flowers, when the phone's never seen a flower before.
Then there's the question of whether we can have intelligence without being embodied. I understand the three-dimensional world – its rules and how things operate in it – because I'm embodied in it. Can we have the same level of intelligence we think we have without the kind of body we have?
This raises the question of how intelligent we really are. What if the intelligence we have is a collection of stupid people tricks? What if what we perceive as intelligence is nothing but the layering of millions of simple instructions, simple interactions, simple patterns? How smart is smart? Which leads to the question: are we even able to judge intelligence?
But coming back to the near future, digital systems that can mine data, mine our patterns – wherever we leave a digital trail of information – will appear smarter and smarter. The leap between that and a full-on Turing Test, where I can talk about not just my schedule but politics, or science, or the fact that my back is itching – things of that nature that a robot may not understand – is a long way from now.
Q) Given the rapid strides in technology, how far away are we from truly having a partially self-aware humanoid butler in our future homes?
JM: There are two things that will happen simultaneously. The first – which is happening right now, in fact – is that our houses will get smarter. Crucially, our houses will be more amenable to a robot being in them. We'll get used to having everything in the house talk to each other, everything always communicating. Everything that makes your house more digital and connected will also make it friendlier for a robot. For example, if a smart fridge knows what's inside it and where everything is, it's a whole lot easier for a robot to talk to the fridge, figure out what's in there and where it is, and manipulate objects – the task becomes far easier when the two systems work together.
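A hypothetical sketch of that fridge-robot handshake (all class and field names below are invented for illustration): the fridge publishes what it holds and where, and the robot plans a fetch from that data instead of having to recognise every object from scratch.

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    shelf: int           # which shelf the item sits on
    position_cm: tuple   # (x, y) location on that shelf

class SmartFridge:
    """Stands in for a connected fridge that tracks its tagged contents."""
    def __init__(self, items):
        self._items = {item.name: item for item in items}

    def locate(self, name):
        return self._items.get(name)

def fetch(fridge, name):
    """Robot-side logic: ask the fridge, then go to the reported location."""
    item = fridge.locate(name)
    if item is None:
        return f"Sorry, the fridge reports no {name}."
    return f"Fetching {name} from shelf {item.shelf} at {item.position_cm} cm."

fridge = SmartFridge([Item("milk", shelf=2, position_cm=(10, 4)),
                      Item("juice", shelf=1, position_cm=(25, 8))])
print(fetch(fridge, "milk"))
```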
I don't have a good timeline for you as to when the butler will happen. The houses that we have are still difficult for artificial creatures to walk in. The ATLAS robot from Boston Dynamics is by far the best, most agile legged robot, and we're still not going to see that in homes for a long time to come. So, if we were to design a house very carefully – one level, an indoor elevator, everything in it tagged and painted in bright colours – then you could have a robot bring you drinks, sure. But when you add all the clutter of life, when someone leaves a backpack in an unexpected location or moves a chair, the complexity of the task increases. The new crop of RGB-D cameras – found in the Microsoft Kinect – that measure both the colour and the depth of an object is making life easier, though.
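Why depth helps, in a minimal sketch: each pixel's depth reading can be back-projected into a 3D point the robot can reason about, using the standard pinhole camera model. The focal lengths and optical centre below are assumed, roughly Kinect-like values, not official specifications.

```python
FX, FY = 525.0, 525.0   # focal lengths in pixels (assumed values)
CX, CY = 319.5, 239.5   # optical centre of a 640x480 image (assumed values)

def pixel_to_point(u, v, depth_m):
    """Back-project pixel (u, v) with depth in metres to a 3D camera-frame point."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)

# A pixel near the image centre, 1.5 m away, lands almost straight ahead of the camera:
print(pixel_to_point(320, 240, 1.5))
```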
I think we'll get something semi-useful moving around our houses within a decade. Take the Roomba vacuum-cleaning robots, for instance, where the humans take on a small set of responsibilities (keeping clutter off the floor, finding lost Roombas, etc.) that is easier to manage than doing the vacuuming as a whole. Increasingly, this concept of using humans as robot assistants will become normal and widely accepted. So the robot butler will be sentient but will need help here and there, at least in the first version. I'm very optimistic about this scenario happening within a decade. The notion of full automation – like Rosie from The Jetsons – is a long way away, but we'll have some small version of that soon, and it'll be OK.
Q) How has sci-fi influenced the field of robotics and AI?
JM: So there's sci-fi and there's science reality, and they both advise each other at some level. Science fiction inspires people – it wakes them up when they're 10 or 11 – and then they get older and realize how hard it is. It's still fun, but you realize how daunting the technical challenges are. As an inspiration, well-informed sci-fi can be really impressive in terms of taking things 50, 60, or 100 years forward. This is where Isaac Asimov is a true inspiration. In 2014, his writings feel a bit dated because of the way the characters talk and do things, but even after 60 years his logic and his principles make sense – man, he nailed so many things.
So yes, sci-fi inspires us, and the truly good sci-fi leads us to strive for more. Science reality is what we have in the present: the slow, careful work of doing basic research, figuring out good applications, and applying that to building products we can actually use. Each of these phases has separate challenges. The basic research is what I'm focusing on now as a professor, but with an eye towards products that people can use, particularly in education. Advising the research are the systems that have been doing it for 160 million years – ants and bees are about that old. The algorithmic solutions that nature has devised for these problems are critical to our understanding. We know ants and bees are very successful in how they forage, reproduce, and get around the world. We just need to figure out a way to translate all that into robots.
Q) On a philosophical note, what are some of the moral implications of designing bots or AIs that are sentient? Is there an unknown flip side?
JM: The biggest fundamental concern is Skynet – robots taking over the world and doing bad things. The first answer, which we've already talked about, is that there's no reason to believe something we build will immediately exterminate us. The other answer is that technologically we're very far away from that eventuality, so let's just defer the question to our great-grandchildren.
There's another answer that concerns the convergence of biology and technology. As we gain more control of our biology at the genetic level, and become experts at reconstructing – and at some point constructing – our own bodies, interesting things will happen. As wearable technology gets more sophisticated – more smartwatches, smart earrings, smart tattoos, smart implants – and you walk further down that road, the line between man and machine starts to get blurrier.
In fact, it may be the case that the robots we’re all so worried about are us.