Dr. Michael Twidale is an associate professor at the Graduate School of Library and Information Science at the University of Illinois at Urbana-Champaign. His interests include “computer-supported cooperative working and learning; collaborative technologies in digital libraries and museums; user interface design and evaluation; user error analysis; visualization of information and algorithms; and the development of interfaces to support the articulation of plans, goals, and beliefs.”
He is also deeply concerned about the operations of Institutional Review Boards (IRBs), which were originally designed to protect the rights and welfare of human subjects of medical and psychological tests. At their best, IRBs can serve to increase responsibility in innovation, but today, many scholars believe that IRBs have devolved into cumbersome bureaucracies that frustrate legitimate research with no appreciable benefit.
In this discussion, Professor Twidale offers insights into IRBs, participatory design, managing design tradeoffs, ethics and empowerment.
Ubois:
Can we start with a brief summary of why IRBs came to be as they are, and how they need to change?
Twidale:
A lot of this history comes out of a series of medical and psychological experiments, beginning in the 1930s, which were highly abusive. One was the Tuskegee study, conducted on African-American men in the South who had syphilis and were observed but left untreated, even after an effective cure became available. But then there were also psychology experiments after the Second World War, which were basically set up to ask a very important question: how could a very civilized country like Germany produce Nazis?
A series of experiments was conducted, the Milgram experiment being one, basically taking subjects and ordering them to do completely insane things. When a man in a white coat told them to do something, the subjects abdicated responsibility. There was also an experiment done at Stanford by Philip Zimbardo, taking a group of students, some of whom acted as prison inmates and others as guards; it was very interesting but dangerous to the subjects.
So various concerns led to laws and protocols at universities saying this is unacceptable and we need ethical guidelines, and some of those got turned into legal requirements, which means that if you get federal funding you have to go through this IRB procedure. Even if you don't have federal funding, your local university may require some similar procedure. But we're now in a very legalistic setting, where the IRB is talking to the lawyers, asking what shall we do, and it's safest for the lawyers to deny everything and say the IRB process applies to all research, except when it inconveniences the rich and powerful, when of course it doesn't apply.
So if you look at our IRB rules, there are special exemptions. IRB-exempt does not mean you're exempt from IRB; it just means you fill in a shorter form. And the argument for that is that it would be too inconvenient to follow the full IRB protocol when trying to figure out how to be a better teacher.
There's another exemption if you want to try out some new foodstuff. As long as you're giving people wholesome foods, you can stick it in front of somebody and ask, does this taste nice or nasty? What do you think about it? Again, because in the great State of Illinois producing food is important, and so we can't get in the way of that. But if I want to stick a computer interface in front of somebody and ask, is that nice or nasty…
Ubois:
You can’t do it.
Twidale:
If I give them food, I can. If I’m teaching children, I can. If I’m teaching undergraduates, I can’t.
Ubois:
So does this just push research off campus?
Twidale:
It pushes research offshore, and this is starting to turn into a major competitive disadvantage for the United States. This is becoming such a bureaucratic mess, that people are saying, well why should I move to the United States to be fettered?
Ubois:
So why have things evolved this way? Is it that once a bureaucratic mechanism like this is created, it tends to go into the direction it was pushed years ago, until it reaches some level of absurdity?
Twidale:
It just grows and grows and grows, because the people involved keep on thinking of other things that it applies to. There was a recent New York Times article in which one of the professors said, “well, you know, when I talk to my class about IRB, the following questions typically arise.” That is a breach of IRB, because he's done an interview in his class and he's reporting the events in a national newspaper, and he clearly did not apply for human subjects permission to do that. This is somebody who teaches IRB and doesn't even realize he is in violation of his own code.
Ubois:
So if you apply a really strict legalistic approach to those rules, things are even more constrained than they are in practice right now?
Twidale:
Yes. Every interaction with every live human being requires IRB approval, so long as it's part of systematic research.
Ubois:
So my interview with you, for example?
Twidale:
Absolutely.
Ubois:
And having pushed this to an absurd conclusion, as it sounds like with the New York Times, is it perhaps slipping back in the opposite direction?
Twidale:
Well, at the moment, no. It's just that everybody is complaining about it. But remember, that means that in order to have a conversation with anybody, you have to apply a month in advance and say what questions you're going to ask them.
Ubois:
IRBs did arise out of legitimate concerns, right? The Tuskegee experiments about the effects of syphilis were a pretty horrible example of things gone awry.
Twidale:
Yes.
Ubois:
I'm curious about good research we can't do. The Zimbardo and Milgram experiments produced powerful and important results, but they might not get past an IRB today. If you were going to reformulate things with the IRB, are there some guidelines that could help fix the system?
Twidale:
I think the problem is that the whole of the IRB approach assumes a certain mode of research: a carefully designed controlled experiment, as typically done in medicine. For that, things like the Milgram experiments, you would want to design carefully anyway, and so the fact that it takes you a month to get IRB permission is no great hurdle, because you probably end up spending several months planning it. As long as you get the timing right, and as long as, if you suddenly think of something better you want to do, you go back to IRB and say, we've changed our minds, can we update that? For that kind of careful, slow, controlled experiment, I personally don't think IRB is too much of a problem.
Where I do think there is a problem is with really fast, lightweight rapid prototyping, where I have an idea in the morning, I go out into the street and watch what people do, build a prototype in the afternoon, invite some people to have a look at it the next morning, and redesign the prototype. For that, the cycle from observation to first prototype to redesign is a matter of hours, and having to apply for IRB permission two months in advance is just not sensible.
Ubois:
It seems like there's a level of triviality, too, below which you'd want to be free to engage in conversation and dialogue. I mean, if the Academy is about dialogue, it seems like an enormous barrier to free inquiry.
Twidale:
I quite agree, and I still think that, even with free dialogue, there's no reason why everybody who is to be given permission to talk to people should not first take an ethics course. The solution is some sort of umbrella permission, where I've said I accept a certain set of ethical principles and that I will not violate them. And indeed, if I want to do something that looks dubiously like it's violating them, then I'll have to do the traditional application, just as I would if I were doing something like a Milgram experiment. But for everything else, I've taken a course, I know what I'm allowed to do, and I have blanket permission to do that.
Ubois:
IRBs seem focused on the potential effects that an experiment might have on subjects. Are there mechanisms, like an IRB, that could be used to consider the effects on society as a whole, or on some other group of stakeholders besides the people that are part of the initial experiment? Is that something that should occur? And if it is going to occur, would you want to shape it in some way?
Twidale:
I think it should, but I don't really think that's how IRB works. IRBs operate not as an ethical mechanism, but as a legal mechanism.
Ubois:
As in “don’t sue us?”
Twidale:
Yes. It’s basically how do we craft the consent form so that you can’t sue us.
Ubois:
So the mission of the Bassetti Foundation is to find mechanisms that might allow that, and it seems like IRB is interesting in that sense, because there's a sort of cautionary tale here, and maybe the original impetus is actually good.
Twidale:
IRBs were created from the best of motives, motives that I, as a very vociferous critic of the whole IRB process, fully support. I don't want to ditch those principles. There should be very strong, rigorous ethical guidelines, but when they are bureaucratized to the extent that you can't even talk to people, or you worry about whether you're allowed to talk to people, then something has gone wrong. In discussions of freedom of speech, there's a concept of chilling effects: because of a certain setup in the world, it's not only what actually happens that matters, but what people fear might potentially happen, so things get done or, more likely, things do not get done. And I think there is strong evidence of chilling effects because of IRBs. People have just decided “we can't even do this.” That's a negative consequence.
Another approach that's well worth the Bassetti Foundation looking at is scenario-based analysis or scenario-based design. It's often thought to mean predicting the future, but it's more about looking at the space of potential outcomes: not predicting the future, but planning for a range of possible futures. So with the unforeseen consequences of any technological innovation, can we make some of those consequences a little bit less unforeseen?
Ubois:
Yes.
Twidale:
I really don't think we can predict every single one, but I really do think there's far more that we could look at, or stare in the face, and make a decision about. Because so much of the design I do is about weighing pros and cons. And it may well be that you say, okay, there's a very small risk of something bad happening here, but there is a potential to mitigate it; let's put money into that.
An example was the participatory design movement in Scandinavia in the 1980s. This came out of the advent of personal computers, which caused people to ask, “will automation lead to the loss of the Swedish and Danish way of life?” So these governments simultaneously passed laws that said no computerization will be allowed in major industries without consultation between the computer systems developers and the workers, presumably via the workers' representatives, the unions.
Ubois:
So don’t take tax dollars and automate workers out of ….
Twidale:
They passed this law, and then the computer scientists said, “well, that's a very noble law, but we have no idea how to do it.” And so they had to basically invent participatory design and involve end users in the co-design of the computer systems. They were doing that, firstly, because the government had said so.
Ubois:
So it’s the opposite of the British Docklands newspaper strikes and Robert Maxwell…
Twidale:
Absolutely, yes. And it was almost contemporaneous with the early stages of the Fleet Street automation. It's interesting you mention it, because participatory design was introduced into the Swedish print industry. So they had to figure out how to talk to union representatives, shop-floor workers in the print industry who had never used a computer in their lives, about how we can build computers to help you do your job better. They developed all these techniques, and discovered, almost to their own amazement, that the systems they built not only worked, but worked better than their competition.
Ubois:
Oh, I bet they would, sure.
Twidale:
Oh yeah. So this was an unintended good consequence of that law. They were doing it because it was the right thing to do. They didn't realize at the time that it was also the practical thing to do.
And so it took off. It's not coincidental that you have this explosion of interest in computer systems design in places like Sweden, Denmark, and so forth. Even though this part of it had nothing to do with telecommunications, it did seem to have something to do with computer systems design: very careful observation of what people actually do, and then building computer systems to match it.
Ubois:
If you accept there is an arc between discovery, invention and market application, do you see points along that arc where scenario based design makes sense?
Twidale:
Yes, and that is why I have all these concerns with IRB. I don't believe in grand strategic plans. I believe in lots of very careful observational research: try something out, see what happens, learn from it. Lots and lots of incremental improvements. I would almost call it information technology kaizen, like what Toyota did with cars.
Ubois:
Continuous improvement.
Twidale:
They decided to make cars better and better and better. When Detroit looked at those pathetic little cars in the 1970s, they laughed. But look at Toyota now.
Information technology kaizen looks at the process of building the system, but also asks “what is it for?” And that in turn affects what it means and what it's about.
(more on 19th and 26th October)