(…continues from the previous entry)
Ubois:
Do you see — I mean, another direction I could see this going is in the ways the lines between warfare and law enforcement get blurrier and blurrier.
Arkin:
Yeah, but that’s why I’m sticking to international warfare at this early stage. Now, it’s granted that spin-off technology could lead to that. It’s interesting to study the laws of war, which talk about combatants and non-combatants and civil wars and civil unrest and how things are handled in different sets of circumstances. The reason I chose to work on embedding ethical codes in these systems, not for a babysitter robot but for a warfare robot, is that the law is clear. It doesn’t mean the ability to interpret the situation is clear, but the law is clear from a different perspective. It clearly delineates what is acceptable and what is unacceptable. The hard part is seeing through the fog of war: being able, for the robot and for the human being as well, to understand the situation effectively and dispassionately apply those rules that exist in the laws of war and rules of engagement in ways that result in an ethical outcome.
Ubois:
When you think about embedding these ethical rules in these systems, how do you ensure that they’re not disregarded — I mean, in some ways it’s a very thin line or it feels like a very fragile construct. The temptation is there to turn off that set of limitations.
Arkin:
And the potential is there, just as the potential is there for a commander to tell his troops to go and do something unethical.
The difference is, there should be a clear line of responsibility showing that a war crime has been committed if these systems have been instructed to violate the rules of war. So I’m less concerned about that. It’s not like nuclear weapons in this particular case. If the machine somehow has been overridden and the laws of war have been deliberately and intentionally negated within the system, then a war crime is being committed if the system is put into action, because it is fighting outside the bounds of legality.
Ubois:
I suppose you can turn it around and say, well, you can actually create a historical record that is broadcast or more reliable or accessible.
Arkin:
And getting back to the issue of jus ad bellum as well, one of the questions in the survey also deals with whether the advent of this technology will make it more likely that we enter into war, if we have new technology and fewer soldiers. My initial counterarguments were always based on the fact that any new technology tends to do that already; precision-guided weapons and standoff weapon systems already push in that direction. I was being interviewed for a BBC radio program, and the interviewer brought up the fact that if you have these robots in the battlefield, right on the front line so to speak, they have video that you could transmit back into the living rooms of America. And that could have a significant impact toward reducing war, as opposed to favoring it, trusting in American humanity here, that just the sheer violence would reduce the likelihood of us going off to war, because we find it so repugnant.
Ubois:
Another problem in comparing the possible scenarios with the ethical codes is the question of probability: where do you stop in terms of level of probability? It is, I think, possible to come up with very unlikely scenarios that would have tremendous effects. Do you say, well, the possibility is infinitesimal but the results are nearly infinite, and therefore we have to have a kind of almost perverse Pascal’s wager, a strong precautionary principle of some kind? Or do you tend to focus your attention not on the edge cases, but on things that are more probable or more foreseeable?
Arkin:
Well, the strategy is neither of those, I would contend. The technique that I plan to use is basically the following. A robot being destroyed means nothing from my perspective. Maybe for somebody else, but it means nothing from my perspective. So we start from the contention that even if the robot has the weapon, you begin with the null hypothesis that under no circumstances is it allowed to use it. It may be equipped, but it cannot use it by itself (autonomously) under any set of circumstances. Then we look at the laws of war, from a bottom-up approach if you will: under what conditions is the engagement of that weapon system forbidden? Represent all of those as prohibitive constraints.
Ubois:
Okay.
Arkin:
And then, starting at a bare minimum, under conditions which are highly discernible and which, given existing technology, you could clearly recognize as an appropriate situation, you give it not permission but obligation. I’m not looking for permissible constraints, I’m looking for obligatory constraints. Where the system is obliged under the rules of engagement, and this deals with responsibility as well, where someone has obliged the system to fire under this particular set of conditions, then it has the ability to engage the enemy.
Ubois:
I see.
Arkin:
So that’s the technique that I’m using here. And you can say it’s scenario-driven, if you will. It is not trying to say that under any and all circumstances this will be able to engage the enemy. It is saying that initially, under an extremely limited set of circumstances where the technology has been proven, and it may be something as simple as fire-when-fired-upon, although I don’t know yet, I haven’t gotten that far, it can engage. There are systems capable of doing that relatively autonomously right now. You would probably do what’s called a friend-or-foe interrogation first, to make sure you’re not shooting your buddy under that set of circumstances. But under those sets of circumstances, in a kill zone, that likely would be an acceptable, perfectly legal form of engagement. So that’s the strategy we’re using here. Start with a null hypothesis, the system can’t fire, period; then it is prohibited from firing under all these circumstances; and then ask what it can do when it is obligated to fire by the commanding officer and the rules of engagement under these sets of circumstances. So it’s not going to do it electively, it’s going to do it because it must.
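To make the shape of that strategy concrete, here is a minimal sketch in Python of the default-deny logic Arkin describes: prohibitive constraints can only veto, and weapon release additionally requires an explicit obligation from the rules of engagement. The constraint names and situation fields below are hypothetical illustrations, not anything from Arkin’s actual system.

```python
# Illustrative sketch only, not Arkin's implementation: a default-deny
# engagement check. Field and constraint names are hypothetical.
from dataclasses import dataclass

@dataclass
class Situation:
    fired_upon: bool            # unit is taking incoming fire
    iff_confirmed_foe: bool     # friend-or-foe interrogation identified a foe
    in_kill_zone: bool          # inside the commander-authorized engagement area
    noncombatants_present: bool # civilians observed near the target

# Prohibitive constraints: if any of these hold, engagement is forbidden.
PROHIBITIONS = [
    lambda s: s.noncombatants_present,  # discrimination
    lambda s: not s.in_kill_zone,       # outside the authorized area
]

# Obligatory constraints: engagement only when the rules of engagement
# oblige it, e.g. returning fire on a confirmed foe.
OBLIGATIONS = [
    lambda s: s.fired_upon and s.iff_confirmed_foe,
]

def may_engage(situation: Situation) -> bool:
    """Null hypothesis: never fire. Prohibitions veto; an obligation is
    required before the weapon may be released."""
    if any(p(situation) for p in PROHIBITIONS):
        return False
    return any(o(situation) for o in OBLIGATIONS)
```

The design point is that the decision is not permissive by default: even with every prohibition removed, the answer stays “do not fire” unless some obligation holds.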
Ubois:
Okay. I want to be respectful of your time, but there were a couple things I was hoping we could get at. One was who else you think is worth looking at. Or, are there other innovators who are thinking about the consequences of their innovations in interesting ways and have models that I should go check out? And then the second point, unrelated, is if you’re dealing with essentially an inevitable path of innovation — or you can see that these innovations are going to take place in various parts of the world simultaneously and pushed to the limit — how does the level of inevitability around innovation affect ethics?
Arkin:
Okay, well, let me do the first one. And I can address — it’s a question of whether you want the innovators themselves, or the people that are concerned about the innovation, as well. There are folks like Robert Sparrow in Australia, a philosopher there, who is concerned with much of this stuff. He’s going to be coming to the US, I understand, during a tour of the country. But he’s been concerned — he wrote a paper called “The March of the Robot Dogs,” and he has another one on killer robots, or something like that, as well. Another fellow named Peter Asaro has done some movies on this subject, too.
And from the military side, there’s a fellow named John Canning. He’s written some interesting things on how to use the technology. He’s very concerned that it may not ever make it to the battlefield. He’s concerned that it shouldn’t make it perhaps, but he argues that you should shoot the bow, not the archer, to be consistent with this.
Ubois:
Huh, that’s interesting.
Arkin:
It is. It’s a different take on things. I’m also very interested in implementing Walzer’s principle of double intention, as opposed to the principle of double effect, which again is legal/ethical territory, dealing with collateral damage.
Ubois:
So in a sense, go ahead and wipe out any bit of infrastructure you want on an autonomous basis, but when it comes to people, don’t shoot the archer?
Arkin:
Well, yeah, but with the principle of double effect, you can kill the archer at the same time, as long as you’re aiming at the bow. There’s an issue of proportionality, and there’s an issue of discrimination. So if you aim at the bow, it doesn’t mean you won’t kill the archer — it means you’re not intentionally trying to kill the archer.
Ubois:
Right, but if it happens, maybe too bad, or maybe it’s a tragedy, depending on where you…
Arkin:
It’s not illegal under that approach; that’s the principle of double effect, and that’s what common just war theory abides by. Walzer’s stand on just war argues that you should use the principle of double intention, which is what I hope to be able to use, and to use effectively in robotics. It argues that not only should you avoid intentionally aiming at civilians and non-combatants, but you should also minimize the likelihood of any effect occurring upon them. Intentionally try to minimize, not just ignore it, and include the issue of proportionality as well.
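As a rough illustration of that distinction, here is a small sketch of how the two principles could differ as target-selection rules. The option fields and the simple proportionality test are assumptions for illustration only, not drawn from Walzer’s text or from Arkin’s work.

```python
# Illustrative sketch: double effect as a permissibility test versus
# double intention as an active minimization rule. All fields hypothetical.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Option:
    name: str
    military_value: float           # expected military advantage
    expected_civilian_harm: float   # expected collateral harm
    targets_civilians: bool         # civilians are the intended target

def permissible_double_effect(o: Option) -> bool:
    # Double effect: never intend civilian harm, and keep the harm
    # proportional to the military advantage gained.
    return (not o.targets_civilians
            and o.expected_civilian_harm <= o.military_value)

def choose_double_intention(options: List[Option]) -> Optional[Option]:
    # Double intention: among the lawful options, actively pick the one
    # that minimizes expected civilian harm, not merely any lawful one.
    lawful = [o for o in options if permissible_double_effect(o)]
    if not lawful:
        return None
    return min(lawful, key=lambda o: o.expected_civilian_harm)
```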
Ubois:
So Canning, Sparrow and Asaro?
Arkin:
Those are the ones that I think I’ve engaged in the most interesting discussions with on this particular topic.
Ubois:
And about inevitability? How does your perception of inevitability color the ethical work that you’re doing or the type of engagement that you have with the community?
Arkin:
Well, the fact is, and again, learning from Wiener’s experience, this stuff is going to happen. It does appear to be inevitable. And as such, if I help create it, I feel a responsibility, to the best of my capability, to steer it in a way in which it will be used ethically. So that has had a profound impact upon the way in which I have chosen my research path ethically.
Ubois:
So then really the question becomes what is inevitable and what isn’t, right? You can begin to separate and winnow the possibilities a little bit. It’s not necessarily a monolithic set of technologies.
Arkin:
That’s right, and that doesn’t preclude me, for example, from encouraging the discussion about the use of this technology, the potential use of this technology. And conceivably, although for reasons that are not clear to me right now, just as blinding lasers and dumdum bullets and all sorts of other things are banned from the battlefield, it is conceivable that an international protocol could be generated to ban autonomous robots from the battlefield. I am not averse to that. But the issue is, I don’t see that happening any time soon, and I’m not going to sit on my hands while things are moving forward.
Ubois:
I was wondering if we could go back to something that may be less immediately disturbing, which is the consumer technology, kind of how popular engagement with these issues is likely to play out. We made it all the way through the hour without mentioning Asimov’s Three Laws…
Arkin:
I appreciate that, thank you.
Ubois:
So, Aibo and robots – perhaps we’re manipulated by them in some way, or we’re not or…
Arkin:
Well, we are. Again, but the whole difference is, I mean, movies, toys, books, all sorts of things, Madison Avenue, advertising and the like. We allow ourselves to be manipulated.
Ubois:
Yes.
Arkin:
Cliff Nass showed how easy it is for us to be manipulated by computational artifacts. He co-wrote this wonderful book called The Media Equation, which studied, again, how people bond to things.
Ubois:
Yeah.
Arkin:
The difference here, perhaps, is the effect that could occur on the social fabric, because we may end up creating robots that people love better than people. Things that people care more about than people. The potential impact of that on society as a whole is unclear. I mean, suppose we could create these kinds of systems that do things better than people, that react to us just the way that we would like them to, and we no longer want to see our children or our wives or whatever. From a roboticist’s point of view, obviously, it’s a wonderful challenge, from an intellectual-curiosity perspective, to create a system capable of doing that. But, whoa, let’s think about that. Suppose we succeed. Suppose we really succeed and we could create the Hollywood robots, the Stepford Wives, Cherry 2000, those things as well. What if we did create those things? What have we done? And that’s something that is frequently less talked about.
There’s another colleague you might want to speak to; his name is David Levy, L-e-v-y. He has just written a book, Love and Sex with Robots: The Evolution of Human-Robot Relationships, which just came out. He is very bold in speaking about the potential implications of this. He’s less concerned with the love sort of thing than with the prostitution side, which is already occurring. I mean, the autonomous robots are…
Ubois:
You’re going to automate the world’s oldest profession out of existence?
Arkin:
Well, yeah, there are examples of this actually happening already in Japan and Korea as well, too. But he could tell you a whole lot more about that, as well. So that may be a beneficial effect, conceivably.
Ubois:
Robots for love and war.
Arkin:
And there are other aspects as well, which people don’t like to hear, but what about the treatment of pedophilia and other things? Suppose you could create methadone-like robotic surrogates for these people, instead of what we do right now, which is throw them into complete isolation, even when they’re out of prison, because they are still viewed as posing a threat to society, which may indeed be true. But what can we do to potentially reduce and mitigate that threat? Could robotic technology play a role in that? There’s this whole notion, I guess, of justice at some level, too. I don’t know. These are questions that robotic technology can conceivably have an impact on. Could it rehabilitate sex offenders? Hard to say. There’s all sorts of stuff along those lines which is beginning to be explored. But right now, there are no bounds in our field in terms of what can be studied.
Ubois:
Right. It’s all about whatever can be done will be done, and that kind of gets back to that inevitability.
Arkin:
Well, the issue is, at least with roboticists and hopefully with the participation of others, I think we should start seriously thinking about establishing some set of bounds. Europe is doing well here; they have — I don’t know if you’re familiar with it, the European roadmap, the EURON Roboethics Roadmap.
Ubois:
No, I’m not, but that actually sounds like something we should check out.
Arkin:
Go to www.roboethics.org; you can download a copy of that. It’s early work, just describing the space. I’ve been involved with that group as well. And South Korea is now developing a robotics charter to help guide their deployment and use of these systems. They’re going to basically distill the Roboethics Roadmap, the EURON one, and add the uniqueness of Korean culture into that, too. There’s nothing like that going on directly in the US, and no funding to support it. EURON, again, is an EC effort, by the European Commission; the South Korean effort is funded by their government. We have no such thing in this country whatsoever.
Ubois:
We let the market decide.
Arkin:
Pretty much, yeah. That’s pretty much it, just a different take. But I don’t know if that’s the wisest take, unfortunately.
Ubois:
Well, I really appreciate you taking the time with me.
Arkin:
Well, then I appreciate you helping me get the word out. That’s probably the most important thing.