INTERVIEW: Robotics Engineer Dr. Kagan Tumer Talks AI

If there is a modern-day da Vinci, Dr. Kagan Tumer may be him. He came to Oregon State University by way of Austin, Texas, and the National Aeronautics and Space Administration, and has been applying his talents to the development of artificial intelligence ever since.

The holder of a patent meant to aid early diagnosis of cervical cancer and, in his spare time, a novelist, Tumer has his fingers in many pies. We sat down to talk about the many things he does.

You can check out more about Tumer at his personal website.

TCA: Hi, I’m Sally Lehman and I’m with The Advocate. Today we’re talking with Dr. Kagan Tumer, the director of the Collaborative Robotics and Intelligent Systems Institute at Oregon State University. Dr. Tumer’s work has recently received a five-year, $20 million grant from the National Science Foundation aimed at creating a better world for people with cognitive impairments. Thank you for being here today, Dr. Tumer.

Kagan Tumer: My pleasure. Thanks for inviting me.  

TCA: You earned your Ph.D. from the University of Texas at Austin. What brought you to Oregon?

Tumer: Well, after Austin, I spent almost a decade working at NASA, working on artificial intelligence systems and coordination. And Oregon State was a really great opportunity to build a new program around those ideas. So what brought me to Oregon State was the ‘can do’ attitude of building a new program that didn’t exist before.  

TCA: So [at NASA] you worked on multiagent systems and control of complex systems. How did that impact the work that you’re currently doing?  

Tumer: Well, a lot of the work we’re doing now is also based on coordination. The kind of thing I look at goes beyond a single entity. Imagine a chess player: you can design an AI system to play chess, but that’s a very well-defined problem with one interacting person. You’re looking at a board and you’re making a decision. The question here is what would happen if you had hundreds of chess players at the same time, all interacting and changing each other’s boards. Nothing you learned would apply anymore, right? Because suddenly everything is mixed up.

So I’m imagining that sort of environment where things are changing dramatically because many, many people are interacting. Actually, this [the technical issue that interfered with his video] is a good example of what I was talking about: coordination. There are probably another 50 people in this area using the Internet now, and it’s unstable, whereas if I were the only one using it, it would work just fine. It’s the interaction that you can’t predict. And that’s a little bit of what we’re trying to do with our AI systems.
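[Editor’s note: a toy sketch, in Python, of the congestion effect Tumer describes. The channel capacity and reward rule are invented for illustration; this is not code from his lab.]

```python
# Toy congestion game: the action that is optimal for a lone agent
# degrades once many agents take it at the same time.
CAPACITY = 10  # hypothetical channel capacity

def reward(load):
    """Full reward while the channel is under capacity, then congestion."""
    return 1.0 if load <= CAPACITY else CAPACITY / load

def simulate(n_agents):
    # Every agent independently applies the same "solo-optimal" policy:
    # pick the one good channel, like the lone chess player's strategy.
    load = n_agents
    return reward(load)

print("1 agent:  ", simulate(1))   # 1.0 -- the policy works fine alone
print("50 agents:", simulate(50))  # 0.2 -- the same policy collapses
```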

TCA: You have a patent for using spectroscopy to detect cervical pre-cancers. Have these networks that you patented been put to use?  

Tumer: Yes, although I have to admit that work is two decades old, and the patenting and the implementation were done by our partners both at the University of Texas and the medical facilities there. So I know that it’s been used in Texas for those readings. What that was was a way of looking at spectroscopy, which is the light that reflects back once you have some kind of sensor, to determine whether it was healthy tissue or potentially a tumor in the tissue. The two reflect light differently. So what we were trying to do [was] detect that difference using a neural network, a simple, intelligent system. But again, that’s two decades in the past. That field has come a long way since then, so I would argue that there are much better versions today than what we worked on 25 years ago.

TCA: How did you come to participate in that endeavor?

Tumer: That was a great collaborative effort. I was working on these networks that try to learn from examples and discriminate between categories. I worked on, for example, whale sounds, deciding whether a recording was a particular whale or not, and it was the same kind of thing.

Ironically, whether you look at light or sound, once the computer gets all the ones and zeros, it’s very much the same problem. You have certain discriminating features. One is a whale and one is some other sound in the water, and you’re trying to determine which it is. So I had already worked on that problem. And these collaborators from the biomedical engineering side said, ‘Hey, we have this issue where we cannot detect this. Would that idea work?’ And that’s when we applied our method to that problem. So the problem came from our colleagues, and we had a method that seemed appropriate. So we tried it, and it worked.
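[Editor’s note: a minimal sketch of the two-class detection problem Tumer describes, using a small neural network from scikit-learn. The “spectra” here are synthetic stand-ins, not the data from his cervical pre-cancer or whale-sound work.]

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_features = 400, 20  # hypothetical sample and feature counts

# Two classes whose feature distributions differ slightly,
# standing in for "healthy" vs. "abnormal" reflected-light spectra.
healthy = rng.normal(0.0, 1.0, size=(n, n_features))
abnormal = rng.normal(0.5, 1.0, size=(n, n_features))
X = np.vstack([healthy, abnormal])
y = np.array([0] * n + [1] * n)

# A small neural network learns the discriminating features from examples.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```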

TCA: Does this have something to do with the reliable pattern recognition you wrote about in your Ph.D. dissertation?

Tumer: Correct. That was part of that work. My dissertation was more on how you reliably detect certain patterns in data, and how you combine the results when you have multiple versions of a detector. How do you know which to trust? I mean, averaging is a crude way of doing things. So is there a better way? If I collect five times the data I need, how do I pick out what’s really happening? Averaging is, in a lot of ways, not the best approach, because you’re losing a lot of information. So what’s the clever way of putting things together?

That then led to my coordination work and more AI work. So a lot of my work has been moving back and forth in that direction.
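[Editor’s note: a sketch of the “clever way of putting things together” idea, assuming each detector can be weighted by its measured reliability instead of averaged equally. All numbers are invented for illustration.]

```python
import numpy as np

# Five hypothetical detectors' estimates that a pattern is present.
predictions = np.array([0.90, 0.85, 0.88, 0.45, 0.40])

# Each detector's accuracy on held-out validation data (assumed known).
val_accuracy = np.array([0.95, 0.92, 0.90, 0.55, 0.52])

plain_average = predictions.mean()

# Weight by how far each detector is above chance (0.5), so
# near-random detectors contribute little to the combined estimate.
weights = np.clip(val_accuracy - 0.5, 0.0, None)
weighted = np.average(predictions, weights=weights)

print(f"plain average:     {plain_average:.3f}")  # pulled down by weak detectors
print(f"weighted combined: {weighted:.3f}")       # dominated by reliable ones
```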

TCA: In 2017, you spoke about the larger picture of introducing robots into society, and said that interactions between humans and robots are a big concern. How does that concern factor into your work with the NSF?

Tumer: Well, that is actually the essence of our work. And in this case, the current NSF work that you mentioned at the beginning is much more on the AI side. We’re not talking about physical robots; we’re talking about, imagine a smart-house kind of system that’s going to help monitor things and talk to the person in the house, allowing them to remain in their homes as they age. The idea there is, what are the societal implications of doing that? What are the privacy concerns?

Obviously, you don’t ever want a system that’s going to be broadcasting what the person is doing at home. But on the other hand, if the person were to fall down, you do want to alert someone or call or do something about it. So it’s about understanding when you are allowed to break what I would call that privacy barrier, because something much more dangerous and imminent has happened.

And then who do you call? Do you call a family member, a neighbor, a medical person? So these are all different decisions that have to be made. And how to make them intelligently is what we want to study here.  

TCA: So you would have to then teach the AI how to decipher HIPAA [Health Insurance Portability and Accountability Act] regulations, perhaps, so that it could appropriately apply them to the case at hand?

Tumer: I wouldn’t say so much that we would have it decipher that. It would be much more that we would build a system [that would be] making decisions or doing things with those regulations built into it, so that the system wouldn’t have to figure that out.

I think we oftentimes attribute a little more intelligence to AI systems than they have. They’re really much more ‘if I see this, I should do that.’ So they are much more reacting to what’s happening. So we would set it up in a way that those regulations would be built into how they operate.
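[Editor’s note: a sketch of the reactive “if I see this, I should do that” design Tumer describes, with the privacy rules built into a lookup table rather than reasoned about by the system. The events and contacts are hypothetical.]

```python
# Each sensed event maps directly to a built-in response; the system
# never has to "decipher" the regulations at run time.
RULES = {
    # event            -> (escalate?, whom to contact)
    "fall_detected":     (True,  ["medical", "family"]),
    "no_motion_24h":     (True,  ["family"]),
    "routine_activity":  (False, []),  # never broadcast normal behavior
}

def handle(event: str) -> list[str]:
    """React to a sensed event using only the built-in rule table."""
    escalate, contacts = RULES.get(event, (False, []))
    return contacts if escalate else []

print(handle("routine_activity"))  # [] -- privacy preserved by construction
print(handle("fall_detected"))     # ['medical', 'family'] -- break the barrier
```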

TCA: The robotics lab at OSU has created a robot that recently participated in a five-kilometer walk, completing it in just under an hour. What are the goals in robot design beyond just legs?

Tumer: That’s my colleague at Oregon State University. They’ve been working on building robots with legs that can basically operate in a human environment, because oftentimes when we talk about wheeled robots or factory robots, you’re engineering the location and the system to the robot’s capabilities, whereas everything around us has been built for us.

Our houses are very human-centered designs. You can have two steps and all your robots are done. They can’t get in there; I mean, they can’t drive in there, right? You might have a narrow passage that a three-foot-wide robot can’t squeeze into.

So the idea with the legged robot is to have a robot that can help you in and around the house, or operate in an environment built for humans, so that we don’t have to reengineer our entire houses or workplaces to accommodate the robots. One of the biggest problems that my colleague at the company Agility [Robotics] talks about is the last mile, which is really the last-couple-hundred-feet problem when you’re doing delivery.

People talk about how autonomous vehicles will deliver things. Well, no, they’re not going to. An autonomous vehicle can drive to a driveway, and then what? So the idea is that these robots he’s talking about would pop out of the trunk, or I guess in this case it’s a van, take the package two steps down, put it at your door, and pop back into the trunk.

If you were to pair a somewhat agile, two-footed robot with a self-driving car, then you could really access and provide anything, from packages to medicine to help, in any area. Because between the driving capability and the walking capability, you can cover just about any scenario they would encounter.

TCA: In your opinion, is there any reason to build a robot in the exact form of a human, or could you improve on the fragility of our form to work better within human society?

Tumer: There is no reason to build a robot that looks like us other than the human desire to have a robot that looks like us.

From a functional perspective, the only reason to have a two-legged robot is to operate in human-built environments. But that two-legged robot doesn’t have to look like us. An ostrich-like robot would be perfectly capable of coming in and out. It doesn’t have to look like us or have the same proportions of body parts; it can do the leg part.

It’s simply a question of, are we comfortable with a robot in the house? If you were to project 10 or 20 years out, and now you have a robot that can help you around the house with certain simple tasks, maybe ironing, maybe unloading the dishwasher, things like that, what do you want that robot to look like? That’s kind of what’s going to drive that question.

A lot of the studies now say that we like shapes that look human better, because it makes us more comfortable. But you don’t want to try to replicate a face. So what you could do is a big screen with a smiley face. Imagine an iPad with a smiley face operating on roughly human proportions. That gives you better acceptance, or is more likely to be something we’re comfortable with, than putting some weird dinosaur-like thing in the house, which would functionally work but would make us uncomfortable.

TCA: Many years ago, I was told by someone who works with computers that the idea of artificial intelligence is kind of a misnomer, shall we say. That we would never be able to actually make an intelligent artificial product that could literally think for itself. How do you feel about that?

Tumer: I think that statement is probably accurate, but it depends greatly on your definition of intelligence.  

I think that if we are talking about building an artificial construct that is a human-like, conscious, intelligent thing, I agree with that statement. That would be a very difficult [thing to do]. I say nothing is impossible, but it’s not imminent and it’s not something that we even strive for right now. So I agree with that statement.

But I also think the definition of intelligent is pretty vague in a lot of ways. So if we can build a robot that reads the room, understands what you want to do, understands how to operate, and kind of has a mental map of what’s going on around itself, you can call that intelligent, because it’s functioning in a way that satisfies every goal it has and gives the impression of being intelligent. So I think it’s a little bit of what that word means that trips us up when we discuss intelligence.

TCA: How are you looking at the improbability factor, the fact that human beings will at times do things that are not only illogical but completely unexpected?

Tumer: The way I look at that is that probably everything we do that sounds completely illogical makes some sense from a certain perspective at some point. So in a very weird way, what that means is that whatever is driving that action may be completely irrational to us, but it’s probably not to that person, no matter how strange it is.

So I think the question, again, is what are those goals that drive us? We have maybe thousands of little things that we want to do, and with a robot, you’re going to have a much smaller set of things that we provide as objectives. So I think you can whittle that down a lot by having a few very specific things that the robot does.

Let’s pull this back to something we’re a little more familiar with, the self-driving car idea. They’re not there yet, but they’re getting better and better. We’ve gone from lane changing to emergency braking to almost driving for stretches of time by themselves. That’s all well and good. But those cars [have] very specific objectives. They’re supposed to drive safely within the laws and determine what’s going on around them to avoid other cars. So they have a very tight set of things they’re doing. The idea that one morning they’re going to wake up and say, ‘You know, today I want to play golf, I’m not driving,’ is absurd. You would never think a car would do that.

But we make the exact same kind of analogy when we talk about AI: ‘Oh, what if the AI wants something else?’ It’s not designed that way. And if you take that car analogy and put it inside a home, it’s the same thing. It’s certainly not going to suddenly decide to do something completely crazy, because it was always built to do something specific.

When we talk about learning and AI and intelligence, we’re talking about doing things better. We’re talking about competence. We’re talking about understanding how much pressure to apply to a plate without shattering it, even as the plates change. It doesn’t mean it’s going to decide that it doesn’t like plates.

So those are the things we mix up a little bit when we talk about intelligence and volition. But I think it’s better to bring it back to the car: change the function and the form, and it’s still the same kind of [thing]. It’s learning to drive better. And that’s what we’re doing with a lot of our AI systems and robots: we’re teaching them to drive better, whatever their tasks are.
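[Editor’s note: a sketch of the competence-not-volition point, assuming a robot whose only learnable quantity is its grip pressure. The task and numbers are invented.]

```python
def grip_update(pressure, plate_broke, slipped, step=0.05):
    """Adjust grip pressure from feedback; this is all 'learning' changes."""
    if plate_broke:
        return pressure - step  # gripped too hard
    if slipped:
        return pressure + step  # gripped too softly
    return pressure             # competent: leave it alone

pressure = 0.50
for broke, slipped in [(True, False), (False, True), (False, False)]:
    pressure = grip_update(pressure, broke, slipped)
    print(f"pressure -> {pressure:.2f}")
# The learner's whole action space is "adjust pressure"; deciding it
# "doesn't like plates" is not a representable outcome.
```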

TCA: On top of everything else, you are an author. Your novel, Purged Souls, came out in 2020. What led you to write a book?  

Tumer: It’s something I’d been kind of playing with and talking about for a while. I started writing that book around 2012 or 2013, so it took about three or four years to get a first draft. Then a lot of polishing, and a lot of back and forth with publishers. So it took a while to get published.

But ironically, I mean, the book is set in a post-pandemic world. A lot of people asked me when I started, ‘Why are you not putting more AI systems or robots in your books?’ And I’m like, well, I deal with AI all the time. That’s my day job. So I wanted to pick something a little further removed from those interactions. I didn’t want to be sitting there talking about AI in my fiction; that’s too real for me. So I wanted to take something I thought was unreal, and I picked the pandemic.

That didn’t work out really well, as you might suspect. But, yes, that was kind of the origin of the story.  

TCA: It does feel a little close to home. One of the things I found really remarkable about your book was the society in which your main character lives, which has an all-female military. How did you come up with that idea? It’s really great.

Tumer: Well, thank you. What I was thinking about there is, I really wanted to create a world that would be very distinctly different from ours, because I wanted to break some of our social bonds.

But I wanted to create an environment where they still had the same needs and wants and friendships and everything else, but within different confines. So it pushed me in the direction of: this happened, the world order collapsed, and different things pop up. We always get very similar post-apocalyptic worlds, and I wanted to see a little different version of that, where you still have the same issues on one side of the border or the other, and you may not be able to interact with everybody the way you want to. I just wanted to create a different version of what we see most typically in post-apocalyptic settings.

TCA: I have to ask, considering your day job and your writing genre, have you read Isaac Asimov’s Robot series?  

Tumer: Yes, I have. Of course I have. I mean, I read them back in high school and college, definitely.

TCA: Do Asimov’s rules for robotics affect how you do things in your day job at all?  

Tumer: No, but they’re worth thinking about. In some ways, the way he set them up is very different from how AI really works. In other ways, it’s quite intriguing that he discussed building these almost unassailable kinds of goals and rules into the way robots are set up. So I think it’s an interesting approach.

It’s just that, the way we build them, it would be very, very difficult to enforce those rules. But I always think in terms of objectives: what objectives you give the robots, and how you make them do certain things. And I think those rules would form the basis of a very interesting set of objectives. So I think the idea is great, that you would want to aspire to that. In terms of implementation, you probably couldn’t do it quite the way he set it up. But the idea is very interesting, yes.

TCA: Do you think Asimov had influence over what’s happening today in your line of work?  

Tumer: I’m going to guess that, beyond making robots interesting enough that people reading the books wanted to go into the field, probably not. But I think we sometimes neglect the social dimensions of what he did.

I mean, I read those books. I’m like, ‘I want to work on robots.’ So in that sense, he did [influence me]. But, I mean, it was a very different world and computers were new. And we really treated a lot of these systems very differently.  

So I think the way he thought the field would go is not at all the way things developed. But I do think the idea of having functioning robots that could be equals, or at least have issues that come from being super competent at their jobs, he did a really good job setting that up. And I would say a lot of people working on robotics or AI today have probably read those books and have some fond memories of them. So in that sense, he probably had an impact, yes.

TCA: Are you working on another novel? 

Tumer: Yes, I completed my sequel to Purged Souls. It’s coming out in September. So we’re about a month out from the release of the second book, and it’s called Carved Genes. 

TCA: I look forward to reading it. Thank you for your time. 

Tumer: It was great talking to you. Thank you, Sally.  

By Sally K Lehman