Dr. Ronald Arkin is a Professor in the College of Computing at the Georgia Institute of Technology, and Director of the Mobile Robot Laboratory.
In this discussion, he outlines the ethical aspects of robotics, especially those used in war; embedding ethical codes in technical systems; and how practitioners can effectively engage their colleagues in discussions about the ethical aspects of research and engineering.
“The deepest issue right now that I’m confronted with is seeing what I view as the almost inevitable march toward autonomous systems in the battlefield,” Arkin says. “How do we build the safeguards into the technology, which is to me the real responsibility question, to make sure that if it is created, it will be created and used as it was intended.”
Ubois:
Can we start with some background and context? How and why did you get into the field, how has it changed, and which organizations are you involved with now?
Arkin:
I’ve been working in robotics for nearly 25 years. In the initial days of my research, it was hard enough to convince anybody that a robot could do anything. Eventually the science progressed, not just in my group but in others around the world, to the point where we could start to see the potential impact of this on everyday life. Then the whole mode of operation changed from trying to convince people that robots could do anything to trying to deal with expectation management, and in particular with how robots can’t do everything, [when] people started to believe almost a Hollywood vision [of what robots might do]. I have worked over this time with a wide range of sponsors and groups, ranging from manufacturing companies to Sony, for which I consulted for 10 years on Aibo, the pet robot.
Ubois:
Oh yes, right.
Arkin:
And like any normal American professor in robotics, I have had considerable funding from the Department of Defense, dating back to the beginning of my career. I have also pursued curiosity-driven research: what can we accomplish here from a basic science perspective?
Ubois:
Yes.
Arkin:
I initially worked in manufacturing, because manufacturing was going to pay the bills. Then the military came along, and they would help fund my students, and I was happy to work with them as long as I didn’t have any restrictions on publishing my research. So my work is unclassified, and I can talk to anyone around the world about it; that’s a commitment I made, so that I could share this knowledge with other people and not make it out of balance, if you will. That’s not to say I’m disdainful of those folks in my community who do classified research, far from it. They just have a different commitment than I do. And there are people in my community who won’t do any work for the military at all, directly.
Ubois:
That’s actually an interesting point, who you take your funding from; it seems to be one of the ethical questions about any research program.
Arkin:
It is. A colleague of mine, Ben Kuipers, who was Chair of the computer science department at UT Austin and has also been in robotics for a while, has refused over his career, because of his religious commitments (he’s a Quaker), to do any work for the military or take any funding from them. But he doesn’t hold it against the people who do, either, and I respect his point of view, as I respect other people’s points of view. Personally, I have no difficulty taking funds from the Department of Defense. I’m happy to do the work, especially when our nation is at war; that’s just a personal commitment in my case. I try to be as tolerant as I can of other people’s perspectives on this, as well. So, anyway, after a while, after doing all this curiosity-driven research, basic fundamental science within the field, you start to see things happening around you. You look around and you say, hey, this is really starting to change the world. Things were moving forward, and are continuing to move forward, at a relatively rapid pace, and I recognized, through a series of events, that I might want to start taking some responsibility for the kinds of things that are happening in the world as a result of the technology. A tipping point came a few years back, when almost a perfect storm of things came into play. One was that I got an invitation to the first international workshop on robot ethics, or roboethics, held at Villa Nobel in San Remo, Italy, which is where Alfred Nobel spent his last few years before he died. I decided to go to that, because I thought it was just a good thing. I had also seen, after some recent discussions with my military colleagues, some video that I found personally disturbing. It involved lethality, which, again, I’m not averse to under the normal ethical conditions of warfare. Some of my colleagues who were present were a little more taken aback than I was in that regard. But I saw something that verged on what I might have interpreted as a violation of the Geneva Conventions. I brought this to the attention of my sponsors who were there as well, and we talked about it. We discussed the notion of what it means to neutralize the enemy and the like. But it kind of woke me up.
Ubois:
Can you describe what that was?
Arkin:
It was a video — well, it didn’t involve robotics, but it could have; that was the difference there. It involved standoff technology, an Apache helicopter that was engaging some combatants, or some suspected insurgents, who were planting IEDs. And it correctly, by my estimation, did neutralize, to use the DOD term, the first two combatants. But then another one was neutralized, and there were some things that went on after that which seemed to take it one step further than it should have. Again, this was standoff from probably a mile or two away, but part of my reading of the laws of war concerns what happens when you have wounded people on the battlefield. This was a command from a superior officer, who decided to end the mission at that particular point in time. Anyway, it circulated around the Internet, so I was able to get it off the Internet. But I’m going to leave it at that; you can dredge it up if you want.
Ubois:
Okay. [Video is here; discussion is here]
Arkin:
So anyway, that happened. And I also had the opportunity to create a new course on robots and society, which got me interested in the broader issues. It’s now an ABET-accredited course that satisfies the ethics requirement in our college.
Ubois:
ABET?
Arkin:
Accreditation Board for Engineering and Technology. They require all our undergraduates to take ethics courses, and I decided to create a robot ethics course. As I mentioned, I worked with the Aibo and Qrio over the years, and one of my daughters had more concerns about the work I was doing with those products than she did with my Department of Defense work — which sounds odd, but when you think about it, it’s really not.
Ubois:
She was concerned about people forming emotional bonds with robots?
Arkin:
Right. We are deliberately creating an affective state in people so that they bond with these artifacts, in ways that some people might consider unethical. I have sought out papers by colleagues in the philosophy community indicating that this promotes detachment from reality, especially among the aged, and other things that may not be appropriate. So, again, it made me start to think about that. Indeed, I’m giving an upcoming talk at a workshop in San Diego in October that will deal with some of the ethical downsides of the use of assistive robotic technology. So I’ve just kind of become a conscience, I guess, for the field. What I was concerned with principally is not going around telling people they’re doing things wrong, but trying to encourage introspection by my colleagues about the research they were doing, to recognize that this stuff is not just the joy of making things happen, but is potentially life-changing, society-changing, and world-changing in many, many different dimensions. You need to think carefully, as I did, about what it is that you’re doing, and then make an intelligent decision. Whatever decision you make, it should be an informed decision and not a naïve one. That’s what a lot of my proselytizing, I guess you could say, has been about.
Ubois:
What kind of reception do you get when you try to start those discussions? How do you engage with people in ways that cause them to want to respond rather than, you know, turn off or dismiss it?
Arkin:
Well, it’s been very, very favorable, in general. Part of it is lip service. Very few people want to turn off, because, I mean, our community is filled with thoughtful people. It’s just that they haven’t really thought about this, so it’s not hard to engage them. We’ve had workshops every year for the last three years at the robotics conferences, where we solicit participation by individuals. As I mentioned, at that initial workshop in San Remo we had members of the Pugwash Institute, people associated with the Geneva Conventions, and others, even the Vatican, to help us understand the potential ramifications of the work, the research, that we were conducting. I found that very educational. Some of my colleagues, though, especially some of the Europeans, do get a bit [worked up] about what is right and wrong, the ethical validity of some of the ideas you have, particularly in the context of military robotic systems.
Ubois:
Sure.
Arkin:
But we have well-intentioned discussions, no anger or anything like that. Clearly there is a spirit, I guess that’s the best way to put it, a well-intentioned spirit of forwarding their position. I was just recently at two conferences that were unusual for me. Normally I go to robotics conferences, you know, computer science and engineering, but this past summer I was at a conference on the social implications of technology, where I presented some of this work, and also a philosophy conference in Europe. So I’m trying to get different perspectives; it’s a tool I use personally to broaden my own view of who can contribute to my research.
Ubois:
That’s right. Can you tell me what the conferences were, the social implications conference in particular?
Arkin:
One is the Annual Conference of the IEEE Social Implications of Technology Society that was held in Las Vegas, interestingly enough, a few months back. And the other one was the European Computing and Philosophy Conference, held in the Netherlands, just about two months ago, as well. Fascinating stuff.
And so I’m engaging in those communities. Also, I’m considering submitting something to a technology and warfare conference coming up at Stanford in January, by the CPSR.
Ubois:
Yeah, Computer Professionals for Social Responsibility? [Note: Ronald Arkin will be speaking at this conference, as will I.]
Arkin:
They’re holding something in that topic area, so I’m thinking of submitting something to that as well. I’ve been encouraged to do that. So part of the question is just getting everything on my calendar at this point in time. There’s also a NATO workshop going on, on autonomy and the like, so there are more places to talk about this stuff than I have time for, while doing the research as well. I find it very heartening that the conversation has been started. And just to put this in context: in the work that I’m doing for the military right now, the ethics work, I’m going to [various facilities] to talk about this issue with them. So I’m keeping all of these lines of communication active. I’m not stepping out of one community and going into others; I’d like to bridge the gap between these communities and foster discussion across them in meaningful ways. You know, there’s one thing I learned at the second-to-last workshop on robot ethics that I attended. I’m sure you’re familiar with Norbert Wiener’s work.
Ubois:
Yes.
Arkin:
There’s a wonderful book that was written about him, Dark Hero of the Information Age. The co-authors of that book presented his work. I was very familiar with him as the cyberneticist, but most of the community is unfamiliar with his ethical background. Now, he probably would have strung me up. And I think the difference is he took a highly confrontational approach: refusing to take money from the defense sector, encouraging others to do the same, and, from what I have seen, demeaning people who did take money from it.
Ubois:
That’s interesting, because cybernetics came from anti-aircraft fire control systems, right?
Arkin:
Exactly. Well, he understood the potential consequences of his work a little too late, as many other people have. But he decided to be confrontational and basically say, “don’t do this at all.” I share some of the concerns he had, no doubt, but I’m trying to put a rudder into the ship and steer it in a way in which we can manage the consequences of our research in an ethical manner. Whether that approach is beyond reproach or not, I don’t know, but it’s the one I’m taking, because he has clearly shown me that, while I admire his stance, he was run over by the juggernaut of the defense industrial complex, and it basically set the whole field of cybernetics back. Indeed, the word cybernetics was unacceptable in proposals, as I understand it, for many years, because of the effects of his hyperbole. So there’s a destructive and a constructive path you can take, and I’m trying to work with these people instead of damning them, I guess, is the best way to put it.
Ubois:
Right.
Arkin:
So that’s just a different strategy. And like I say, I admire him for what he did, but, especially because his legacy in ethics was completely overshadowed by his technical contributions, he didn’t seem to make as much of a difference. Anyway, in spirit I believe that we have to manage this stuff. And then you look at folks like Bill Joy.
He’s written articles reflecting on the Unabomber, and I actually make my students read parts of the Unabomber Manifesto in my class, which, you know, argues that we should basically relinquish all research in robotics, because it’s going to lead to the extinction of humanity. Most of us think that’s quite a bit over the top, but we have to be aware of that perspective, listen to it, and be able to address it. The point is, let’s get the conversation started. I think it has been.
Ubois:
It’s an interesting question: when should you essentially exit a field, or exit certain social or power structures, because you can’t abide them any longer (sort of the Norbert Wiener solution), and when should you try to remain engaged?
Arkin:
Well, let me share my experience with you.
…
(more on 15th and 18th January)