Ubois:
It’s an interesting question of when you should essentially exit a field, or exit certain social or power structures, because you can’t abide by things any longer (sort of the Norbert Wiener solution), and when you should try to remain engaged.
Arkin:
Well, let me share my experience with you.
Ubois:
Please – it seems like the whistleblower path sometimes works and sometimes doesn’t, and that it can leave people without influence.
Arkin:
I don’t think it was a whistleblower thing; it was basically a case of throwing up your hands and stopping.
Ubois:
I guess what I’m trying to describe is a strong oppositional stance.
Arkin:
The question is, can you understand and respect the people who are on the other side of this, and understand their motivation? And are they willing to listen? So, initially, after this had gone forward, I was sitting on an Army review panel that was setting up the strategic research direction for the Army in robotics, and I approached the subject of robot ethics and the like. It was interesting that they encouraged me to submit a proposal, which I did. It was funded, and people are interested, willing, and able to listen to this. And these people want answers. They’re working on it themselves.
Ubois:
Right.
Arkin:
The Judge Advocate General (JAG) of the Navy has been deeply involved in the regulation of these kinds of weapons, making sure that they comply with the international rules and laws of war. And so the point is, I have learned that these people are not demons on the other side. I think Wiener tried to demonize them. I don’t think there’s a need to do that. I think these people basically want to do the right thing. And the question is, it’s so hard to understand, as you know, the impact of this technology, where it’s leading, how it can be used, and the potential consequences. How do we build the safeguards into the technology, which is to me the real responsibility question, to make sure that if it is created, it will be created and used as it was intended? To me that’s the most important aspect of this. So, there are all sorts of subsidiary questions when you get to the laws of war and the sorts of things I’m dealing with, military in terms of — I don’t know if you took the survey or not?
Ubois:
No, I haven’t, but I will.
Arkin:
Yeah, but if you did, you would find that there are questions regarding jus in bello and jus ad bellum, which deal with conduct during war and the conditions for entering a war, respectively, and what the potential impact of this technology is. And that’s what I’m trying to understand, again: what do people think about it? It’s been very useful in getting the discussion going with the general public. Indeed, we’ve gotten tremendous visibility for this effort, perhaps more than we deserve at this particular stage of the research, which is still ongoing right now.
Ubois:
I was reading the paper you did on that, and it looked like that was — was that the basis for the survey?
Arkin:
That was the first year of the work that we’ve been doing for the Army, which is just finishing up right now. So that did indeed describe the process and procedures by which we’re doing it, and we’re finishing up this paper, which will include some preliminary results on the robotics demographic. In the meantime, I am working extensively on the design document for the second phase of that, which is the so-called “artificial conscience”: basically a mechanism to ensure that the system complies with the laws of war as designed, which is a very challenging task, as you might imagine. The amount of funding I have for this is obviously relatively small compared to some of the other efforts that I’m dealing with. But nonetheless, I’m very pleased that I’m able to conduct the research at all in this context.
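[Editor’s sketch: as a rough, hypothetical illustration of what a mechanism like the “artificial conscience” implies, the following Python fragment releases a proposed action only if it violates none of a small set of encoded constraints. The class names, fields, rules and threshold are invented for illustration; they are not Arkin’s design or any fielded system.]

```python
# Hypothetical sketch: an action is permitted only if every encoded rule allows it.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ProposedAction:
    target_type: str            # e.g. "combatant", "civilian", "structure"
    weapon: str                 # e.g. "small_arms", "blinding_laser"
    expected_collateral: float  # crude estimate of non-combatant harm, 0.0 to 1.0


# Each constraint returns True when the action is permissible under that rule.
Constraint = Callable[[ProposedAction], bool]

CONSTRAINTS: List[Constraint] = [
    lambda a: a.target_type != "civilian",   # discrimination: never target civilians
    lambda a: a.weapon != "blinding_laser",  # prohibited weapon class
    lambda a: a.expected_collateral < 0.2,   # toy proportionality bound
]


def conscience_permits(action: ProposedAction) -> bool:
    """Release the action only if no encoded constraint vetoes it."""
    return all(rule(action) for rule in CONSTRAINTS)


if __name__ == "__main__":
    strike = ProposedAction(target_type="combatant", weapon="small_arms",
                            expected_collateral=0.05)
    print(conscience_permits(strike))  # True: no rule vetoes this action
```

In a real architecture the constraints would come from a formal representation of the laws of war and the rules of engagement, not three hard-coded lambdas; the point of the sketch is only the shape of the check, namely that any action violating an encoded prohibition is vetoed.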
Ubois:
So when you think about the robot design, I’m curious about this idea of essentially delegating decisions to autonomous or semi-autonomous systems.
Arkin:
Correct. My contention, again, is that ultimately robots can do better in the battlefield than human soldiers can. Not perfectly, but better.
Ubois:
And why is that?
Arkin:
Well, if you take a look at the recent reports on mental health and ethical behavior [in Iraq], it’s a relatively low bar to do better. People will not turn in people who are guilty of war crimes. A percentage of soldiers are ready, willing and able to treat non-combatant civilians with a lack of respect, to abuse them both verbally and physically. Not everyone does it. I used to think it was just a few, but it seems that the conditions under which human beings have evolved are not appropriate for modern warfare. And that’s why I believe it’s possible to do better. Now, I don’t believe that robots will ever be the only folks out in the battlefield. I see humans and robots working together in the battlefield, assisting each other in a complementary way, not a pure replacement strategy. So I think they will leverage the best of both capabilities. Ultimately I think they can do better than people can under these very difficult circumstances. Ideally, of course, I would prefer that this technology never have to be used. In other words, we would never go to war again, and humanity could somehow remove that from the face of the earth. But I’m not naïve, either, about what I view as the impossibility of that occurring. So if we are going to go to the trouble of creating international laws that govern the way in which we kill each other, and if we are creating technology that is capable of doing that, that technology must, in my mind, adhere to the rules of warfare.
Ubois:
So in a sense, you’re able to use or rely on a whole complex set of laws and ethical thinking that’s been developed over centuries and apply that to scenarios of how the technologies you develop might behave?
Arkin:
Correct. Again, as I usually say when I deal with the press, if they [the press] get 80% of what I tell them right, I’m usually pretty happy. And so, there have been all sorts of things coming out. I am not creating new ethical rules for warfare. I am trying to implement existing ethics in new technology, to ensure that this technology abides by the existing rules of warfare. Now, this is a moving target: as new technology has been introduced over the last century, with blinding lasers and other things being forbidden in the battlefield, the rules have proven dynamic. They will change over time as well, but whatever those rules are, the system needs to be able to comply with them. It’s not my intention to create new ethics for the battlefield but to enforce existing ethics within new technology.
Ubois:
Does the creation of new technical possibilities demand the creation of new ethical stances? Or does it just present ethical decisions that already exist in new forms, with different possibilities and choices?
Arkin:
It requires that this be examined. Whether it requires new things to be created, perhaps, perhaps not; but nonetheless, the advent of autonomous robotic technology should be thoroughly examined, not just by the technologists, but by the military, the international policy makers and the like. And that’s indeed the conversation that I’m trying to help strike up, along with some of my colleagues.
Ubois:
One of the answers that we’re getting in these engagements we’ve had with researchers is that cross-disciplinary conversations can be very helpful. In fact, some of the most problematic things that have come up are those that were developed essentially in isolation.
Arkin:
I agree completely. And I would not say it’s merely helpful; I would say it’s essential in that particular context. Part of it is doing sanity checks. So I’ve had to go out and deliberately seek out discussions with philosophers and social technologists and the like, to make sure that I’m being rational about the approaches I’m taking here. But nonetheless, you also have to strike up the conversation within your own community, because you don’t want to be isolated, the only guy in your field doing this kind of work. To use Wiener’s example, that unfortunately seemed to end up being the case for him.
Ubois:
Are you doing any kind of scenario planning? Or do you have any kind of structured approach to mapping out the possibilities and implications of your work in dialogue with people? Are you using any?
Arkin:
Well, what I’m embroiled in right now is coming up with this design document [ Now available ], which incorporates this multi-disciplinary perspective on how this can be implemented and how it should be implemented. I expect to send it out for review to some of my colleagues in fields beyond robotics. I mean, the technical review will come when I submit it to technical journals or conferences, but I will also send it to some other folks I know after it’s done, to get their feedback not only on the design but on the overall approach. And, of course, I intend to go out and talk in the community as it’s being done, among the military and philosophers and roboticists, to ensure that this dialogue is ongoing. The issue, like I say, is finding the time to talk to these people and all the opportunities to do so, but I want to make sure that I maintain the diversity of opinion on this and cultivate in myself the ability to listen to what these people are saying. Not to take the stance that this is the right way to do it and you must believe it, but rather to say, this is a way to do it, what do you think? That is what I’m trying to cultivate in my conversations with folks.
Ubois:
Can you describe in more detail some of the ethical questions that you’re personally wrestling with?
Arkin:
Well, the deepest issue I’m confronted with right now is seeing what I view as the almost inevitable march toward autonomous systems in the battlefield. I look at the history of different fields. Take the advent of nuclear weapons, with the Russell-Einstein Manifesto and the Pugwash conferences that followed afterwards as a reactive strategy. Or look at the Asilomar Conference in bioengineering, maybe 30 years ago, when human cloning appeared to be becoming possible. That was a proactive stance: they called for a moratorium on their research at that point, got a whole bunch of people together, convened the conference, and then considered how to move forward, which I thought was very admirable. But the difference in those fields is that those things were marked discontinuities. Suddenly, something happened: we created a nuclear weapon, we learned how to clone. Robotics, for better or for worse, isn’t proceeding quite like that.
Ubois:
It’s a smoother curve, isn’t it?
Arkin:
Well, there are bumps in it, but there are no marked discontinuities, at least to date, that make people sit up and take notice. So it’s penetrating subtly. It is a revolution that’s going on, but it’s a very peaceful revolution to some extent, oddly enough.
Ubois:
Yeah, Dolly the sheep was a discontinuity in the field of bioethics.
Arkin:
Right, but we [roboticists] haven’t seen many of those things happen, even with the advent of autonomy in the battlefield right now. I mean, there are robots out there in Iraq fighting today, but they’re under human control. And this whole notion of the human in the loop is a slippery thing, because the question is ‘how much in the loop?’ Do you have direct command over every single action that the robot takes, or do you just say, go in and take that building, as you would command a soldier? So autonomy is graded; it’s not suddenly going to be autonomous. And so, the thing I’m most concerned about right now is the potential misuse or abuse, or the inadvertent accidents, that could occur through the appearance of these things in the battlefield as autonomy continues to progress.

Many of my philosopher colleagues are most concerned about the attribution of responsibility should a war crime occur. Who is at fault? You can’t blame the robot, although some argue that robots will have rights and the like. But I’m not going down that path; I leave it to the philosophers to pursue that. Folks like Robert Sparrow and Peter Asaro have argued eloquently that no one is responsible if a robot commits a war crime, and that this makes it completely unethical to use. Well, I don’t necessarily take that stance, and indeed I’m building into the architecture what I call a responsibility advisor, which will require, as best as possible, the establishment of responsibility before these systems can act. That doesn’t mean they can’t act autonomously, but they will require someone, somewhere, to be able to assume responsibility for the commands that put the robot into that particular position.

And that does not absolve the designers, which I think some of your earlier questions addressed, the technologists, even the creators of this stuff, from the responsibility for creating it: designing, implementing, contracting. I mean, there’s a very interesting pathway of spiral design, among other things, that military contractors go through in getting this stuff battle-hardened for the battlefield. And it also has to get legal approval: any new weapons system has to be shown to comply with the laws of war. And this is a daunting problem right now for the lawyers in the military as well, if and when such technology should appear.
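[Editor’s sketch: the “responsibility advisor” idea can be caricatured in a few lines of hypothetical Python: an order, at whatever level of autonomy, is accepted only once a named human has explicitly acknowledged responsibility for it, and that acknowledgement is recorded. The autonomy levels, types and field names below are invented for illustration and do not reflect Arkin’s actual architecture.]

```python
# Hypothetical sketch: no order is accepted until someone has assumed responsibility for it.
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum, auto


class AutonomyLevel(Enum):
    TELEOPERATED = auto()   # a human commands every single action
    SUPERVISED = auto()     # a human approves each engagement
    MISSION_LEVEL = auto()  # tasked like a soldier: "go in and take that building"


@dataclass
class Order:
    description: str
    autonomy: AutonomyLevel


@dataclass
class ResponsibilityRecord:
    order: Order
    responsible_party: str
    acknowledged_at: datetime


def accept_order(order: Order, responsible_party: str,
                 acknowledged: bool) -> ResponsibilityRecord:
    """Refuse any order for which no named person has acknowledged responsibility."""
    if not responsible_party or not acknowledged:
        raise PermissionError("No responsible party has acknowledged this order.")
    # A real system would write this record to a tamper-evident audit log.
    return ResponsibilityRecord(order, responsible_party, datetime.now(timezone.utc))


if __name__ == "__main__":
    order = Order("Clear and hold objective ALPHA", AutonomyLevel.MISSION_LEVEL)
    print(accept_order(order, "Capt. J. Example", acknowledged=True))
```

The design point the sketch tries to capture is the one made above: autonomy does not remove accountability, it relocates it to whoever accepts the order, and that acceptance has to be explicit and recorded before the system acts.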
Ubois:
That’s right. If there’s a mistake, can I sue Lockheed or Boeing or whoever makes it?
Arkin:
Well, hopefully it won’t make a mistake. That’s the thing they want to make sure of: that it will comply with the laws of war.
Ubois:
Unlike every other technology, this one will have no surprising outcomes?
Arkin:
The point is, again, this is true with any new technology. Look at the matter of precision munitions. Suppose for some reason the fins go in the wrong direction. Who’s at fault under that set of circumstances? So, in many ways, the same principles of responsibility attribution for advanced technology still apply, to my mind, [for example] if there was clear negligence on the part of the contractor. If there was some clear negligence on the part of the creator of the algorithms or something like that, then responsibility could find its way back to that particular individual. But part of attributing responsibility has to do with intent, as well. And there are many things that are war crimes that people do not get convicted for, because of the inability to show intent. So, to me it’s not all that different, but I do think responsibility has to be delineated as much as possible.
Ubois:
So I’m wondering how you do that. Do you work through scenarios and compare those scenarios against existing ethical codes? Is it a question of exercising your imagination about what might happen?
Arkin:
Well, again, the military, fortunately, is very good at this, because they’ve been designing weapons systems for a long time. Before they sign off on anything, they have to make sure that it’s done properly. I don’t know all their procedures for doing that, but at one level there will be scenario descriptions. There are battlefield laboratories associated with many different bases around the country, where these things can be exercised. And in the military’s practice, I’m sure they will be subjected to rigorous verification and validation from a software perspective. We’re not there yet, is what I’m saying. But the procedures for testing these things, with the rigor that’s necessary to assure that there’s no negligence, are probably better in the military, I would contend, at least from my limited knowledge, than what you would find for most commercial products. I mean, look at Windows.
Ubois:
Yeah, exactly. I wouldn’t want to leave my life up to Windows.
Arkin:
Well, you know, Bill Gates is big into robotics, so that’s even scarier. But that’s a second question.
…