Ubois:
At the Bassetti Foundation, we’ve been developing ideas about sustainability in innovation. What makes for sustainable innovation? What are the things that would cause innovation to not be sustainable? This kaizen idea seems very much along those lines.
Twidale:
Sustainability for me means having lots and lots of feedback loops, so I can find things out all the time as I go along. I don’t believe in the lone genius theory of innovation; I believe in involving lots of people in innovation. That’s the participatory design approach. What is it like when you have the equivalent of open-sourcing the innovation process, so every end user has something to say about the design process?
Now in open source, you have to be a systems developer in order to improve the software, but I’m talking about people who are not systems developers and who haven’t got time to learn to be. Can we still be inspired by that approach? The nearest similarity might be Wikipedia. There are a lot of problems with Wikipedia, but one of its virtues is that if you discover they’ve misspelled something, or they’ve put your hometown in the wrong county, you can fix it. You can make a very small fix, with relatively low effort on your part, and they designed a very sophisticated mechanism to allow that. Once you’ve done that, you might do something a bit more sophisticated next time. It allows a pyramid structure: a few people at the top who do lots and lots of work and make the massive innovations; a load of people at the next stage down who are discussing and proposing ideas; and a truly huge number of people at the bottom who are making lots of little suggestions.
Ubois:
So the Wikipedia model doesn’t call on the editors to identify the stakeholders. But if you think of something more like an IRB approach or the participatory design approach, you need to identify the stakeholders and engage them in these feedback loops you’re talking about. Have you seen mechanisms that would help clarify who the stakeholders are, and is that a worthy goal?
Twidale:
Not initially. It is best to identify some stakeholders, but design your mechanism so you can add others you haven’t thought of along the way.
Ubois:
So kind of snowball sample the stakeholders?
Twidale:
Absolutely, snowball up. But make sure you don’t start off with a sample of one. Whenever you have a sample of one of anything, you think everybody is like it. I wouldn’t try and get every single stakeholder on board, but I would get a few of those who may be willing to have a go.
You build something, and then you get people involved because doing design on a blank piece of paper is very difficult. I have this idea of design by negation. I build something. I show it to you, and you say, “no that’s not what I want. What I want is…” And it’s much easier to tell me what you want by disagreeing with something which I set up as being deliberately wrong. It’s almost like conceptual art, you know, I built it in order for you to react against it.
Ubois:
Are there measures for that kind of participation? Or measures for the speed of innovation or its relative impact on different collections of people?
Twidale:
There are simple measures, which are numbers of iterations, numbers of people involved, numbers of different kinds of people involved, and simple measures of improvement, which might be faster, better, cheaper, whatever those are. But all of those are very crude, so they should only be collected when they’re cheap. If they are treated as cheap, dirty ways of getting something to start with, and also of looking for anomalies, that’s great.
But much more important is the qualitative data, which is the why. Why is it that people do things? Those data are much richer, but more time consuming to collect, so you can’t collect them from lots of people. So you use the crude numbers to get breadth, and then you do the detailed qualitative work with a few people to see what’s going on. And there you’re looking for critical incidents: the points where you move from the quantitative measures to the qualitative data. What are you looking for? Whenever I’m doing any kind of user testing, emotional reaction is a wonderful trigger. You just see that they look surprised. Okay, I can’t ask about everything, so I’m going to ask why they are reacting to that, particularly if they didn’t react so much to the others. So I get a strong differential: why is this one causing so much more of a reaction than the others?
Another is authentic user testing. It’s very problematic to try to do a controlled experimental test of the kinds of things I’m talking about. Experiments are wonderful if you’ve got just one or a couple of variables you want to test. Is it this or is it that? Experiments are great for science; science is about fundamental truths. What I do is engineering. It’s about the art of the possible. Shall I make this, or shall I make one of the ten billion variants of this that I could think of? Well, I can’t experimentally test all ten billion options, so I have to find things out by watching what’s going on, to see what’s likely to be causing the problem.
Whenever I decide on the design for any system, I have a set of implicit assumptions about how it will be used. If I’m also the person who designs the evaluation, my evaluation process may well have the same implicit assumptions. So take a bad example: a library catalogue. I say, okay, I’m a hardcore librarian, and I’ve built the ultimate library catalogue, and with this interface all you have to do is type in the ISBN and it will find the book for you. Instantly, so much faster.
Ubois:
Great.
Twidale:
Okay, so I’m now going to do a controlled experiment on this. Here’s a list of ISBNs. Type them in and see how quickly you can get the books using my system rather than Amazon or Google. Oh, surprise, surprise, my system wins. But then we launch it into the real world and nobody likes it. Nobody searches that way. People type really vague things into Amazon, and Amazon wins, because it supports the really grubby queries that real people actually make. So by authentic user testing I mean: ask people to use the software I’ve developed for something they were going to do anyway, invite me in so I can watch them using my system, and ask them not to be polite.
Typically what you find is that people will start using my system out of politeness, and at a certain point: um, no thanks. They’ll go over to another computer system, or, more frequently, they will switch to paper. We videotape that. Afterwards we ask, okay, why was it that you switched at that point, then and there? Again, it’s that critical incident, because it takes an effort to switch your attention from one thing to another, so there must be a good reason.
If I can figure that out, I can make Version 2, and maybe you can use it for ten minutes rather than five. By Version 20, maybe you can actually use it for the whole process.
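[Editor’s sketch: the contrast Twidale draws between the exact-ISBN interface and the vague-query interface, rendered as toy Python. The catalogue, function names, and sample ISBNs are illustrative assumptions, not any system discussed in the interview.]

```python
# Two toy catalogue lookups: the "hardcore librarian" exact-ISBN interface
# that wins the controlled experiment, and a tolerant keyword search that
# wins with the vague queries real people actually type.

books = {
    "9780131103627": "The C Programming Language",
    "9780262033848": "Introduction to Algorithms",
}

def isbn_lookup(query: str):
    # Instant and unambiguous, but only if you already know the exact ISBN.
    return books.get(query)

def vague_lookup(query: str):
    # Matches grubby, partial queries against the titles themselves.
    terms = query.lower().split()
    return [title for title in books.values()
            if any(term in title.lower() for term in terms)]

print(isbn_lookup("9780131103627"))      # -> "The C Programming Language"
print(isbn_lookup("that C book"))        # -> None: the exact interface fails
print(vague_lookup("intro algorithms"))  # -> ["Introduction to Algorithms"]
```

The exact-match interface wins any timed trial where the tester is handed the ISBN, and fails the moment someone types what they actually remember about the book.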
Ubois:
Let’s go up a level to how you think about an innovator’s responsibility, and the individual or team that is developing new systems. How can groups of people remain mindful of their downstream effects? Participatory design, contact with stakeholders, and scenario-based design are good examples people might adopt. Are there others?
Twidale:
One question is, to what extent is this stuff empowering or not? And empowering in many ways. One is, to what extent does it allow people to get better at doing their job? We live in a reasonably social-democratic world, and as you get better at your job, you should get a pay raise; you get paid more because you’re adding more value to the organization. The organization doesn’t mind paying you more, because you’re adding even more value even as they’re paying you. So if you’re kept locked in at a level where you’re underperforming relative to your potential, you’re unhappy, the organization is unhappy, everybody’s unhappy. It’s a tragic waste of human potential.
So one question is “can it allow people to improve?” That goes with a big debate within computer science between those who are interested in automation versus augmentation. I definitely fall into the augmentation camp, and not generally for ethical reasons. I worry about automation because I just don’t believe it will work reliably and safely. Designing tools that allow people to perform far more smartly with the tool than without is the way to go, so that when something important or dangerous or special arises, the person has the power to make an executive decision that I would never trust a computer to make.
But with power and empowerment comes responsibility. So I would design into my software what is sometimes called a technology of accountability. You can override what the system says should be done here, but you sign off on it, so that when your boss asks why on earth you did that, you had better have a very good reason, because you’ve broken the rules. But it’s not up to the computer system to forbid you to break the rules. You’re not locked into the system, but if you choose to override it, then you’ve got to answer for that choice and sign off on it.
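[Editor’s sketch: one way such a “technology of accountability” might be realized in code, with hypothetical names throughout; the interview describes the principle, not an implementation.]

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OverrideRecord:
    """An audit entry created whenever a user departs from the system's recommendation."""
    user: str
    recommended: str
    chosen: str
    justification: str
    timestamp: datetime

@dataclass
class AccountableDecision:
    """The system recommends but never forbids: any recommendation can be
    overridden, provided the user signs off on a justification that is
    recorded for later accountability."""
    audit_log: list = field(default_factory=list)

    def decide(self, user: str, recommended: str, chosen: str,
               justification: str = "") -> str:
        if chosen == recommended:
            return chosen  # following the rules needs no sign-off
        if not justification.strip():
            raise ValueError("Overriding the recommendation requires a signed justification.")
        # Record the override so the user can answer for the choice later.
        self.audit_log.append(OverrideRecord(
            user, recommended, chosen, justification,
            datetime.now(timezone.utc),
        ))
        return chosen
```

Calling decide() with a choice that departs from the recommendation fails unless a justification is supplied, and every override lands in the audit log, so the boss’s “why on earth did you do that?” always has a written answer.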
Ubois:
Interesting. I like the transparency and accountability it’s going to deliver.
Twidale:
And again, although it comes from a very liberal perspective, you see a lot of it in the business media now: stories of hotel managers saying, “We empower our junior employees to make decisions. You’ve got these unhappy customers? Do what it takes.” So these hard-nosed, very capitalist businessmen have a very similar notion.
Ubois:
Exception management and devolution of responsibility seem like key things there. But to the extent that the effects of innovation are so far beyond the expectation or scope or control of the innovator, are there other mechanisms of governance that might be employed? How do you hold somebody accountable for something that they cannot possibly foresee? And how do you have some kind of system? IRBs are one attempt at a system for governing innovation.
Twidale:
Yes.
Ubois:
Can we talk about the characteristics of systems that might govern innovation? I wish I could formulate that question more tightly. But rather than building up from the bottom, from the individual responsibility of the innovator, what is the right set of institutional responses for a university or a funding agency?
Twidale:
There the model I would use is a more environmentalist one. Within environmental science you have many cases of a foreign species being introduced into an ecosystem for a very sensible reason, with negative ecological effects because it has no predators. Then a predator is introduced and it eats everything in sight, and so it goes on and on: cane toads and all those things in Australia.
The computational equivalent is that we release a technology into the ecosystem of the world without carefully considering its consequences in advance. The only way we can do that responsibly is to introduce it onto an ‘island’ first and figure out what’s going on. If you trash the island, oh well, you’ve killed all the stuff on it, and we’ve done that. It’s kind of sad for the island, but it’s better than introducing it onto the mainland.
Ubois:
So you could have these progressive boundaries around the laboratory? So you have your new bug, it’s in the containment laboratory, and it’s in the island, and so on….
Twidale:
Yes, so I think of it as concentric circles, because, as I’ve said, you can’t just test it in a typical laboratory. But I think there are pseudo-laboratories where you can test it, and one classic one is a university, where typically people are far more empowered than in regular life. So if things screw up, you’re probably not going to wreck as many people’s lives, and people are not afraid to complain about it, whereas in a corporate setting that might be more difficult. We’ve actually done this for a very long time: the great innovations were built by graduate students and tested on other graduate students. Is this newfangled thing called email a good thing or not? What about Usenet news? All those things went through several rounds of testing before they went mainstream.
Ubois:
That’s right.
Twidale:
They are still problematic, but that incubating environment is still there. We have this problem of loss of privacy on Facebook, but people were losing privacy on Usenet news in the 1980s. So what about the first presidential candidate who has said something embarrassing on Facebook? Well, it won’t happen first on Facebook, it will happen on Usenet news, because that candidate was somebody who was 20 years old in the 1980s. So it will happen, but it will happen there, and it’s a small case, and hopefully we’ll sort it out.
So we can set up concentric circles, safety mechanisms to test out some of the larger social consequences before we go live. There is a very strong temptation to go live instantly because of the venture capital funding model of the dot-com boom. And that’s risky.
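[Editor’s sketch: the concentric-circles idea as a toy staged-rollout gate. The stage names are invented for illustration; nothing here comes from the interview beyond the principle of advancing outward only after the current island survives intact.]

```python
# A technology is exposed to progressively larger "islands" and may only
# advance outward when no unresolved incidents were observed at the
# current stage.

STAGES = ["lab", "university", "early_adopters", "general_public"]

class StagedRollout:
    def __init__(self) -> None:
        self.stage = 0                    # start on the innermost island
        self.incidents: list = []

    def report_incident(self, description: str) -> None:
        self.incidents.append(description)

    def resolve_incidents(self) -> None:
        self.incidents.clear()

    def advance(self) -> str:
        """Move one circle outward, but only if the current island is clean."""
        if self.incidents:
            raise RuntimeError(
                f"Cannot leave '{STAGES[self.stage]}': "
                f"{len(self.incidents)} unresolved incident(s)."
            )
        if self.stage < len(STAGES) - 1:
            self.stage += 1
        return STAGES[self.stage]

    def reaches(self, population: str) -> bool:
        """A population sees the technology only once the rollout reaches its circle."""
        return STAGES.index(population) <= self.stage
```

The structural point is that going live instantly means starting at the outermost circle, with no cheap island left to trash.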
Ubois:
Are there other forms of governance that you think would be worth considering? In a sense, is that practice of governance likely to supersede the individual responsibility of an innovator in some way? We talked earlier about some of the ethical and individual things, but I’m wondering about systemic responses to the question of governing innovation, whether by funding agencies or by universities or by governments or by parliaments.
Twidale:
This goes back to fundamental freedoms. The debate over rights in the US Constitution is a good place to start. For example, [I assert that] I have absolute freedom of speech, but [the court says] “No, you don’t have the freedom to yell ‘fire’ in a crowded theater.” You have as much freedom as is possible, but you’ve got to be very careful not to harm other people. I would be all in favor of saying that anybody can develop any software they like, but I’m not all that comfortable about you inventing a new virus and releasing it just to see what will happen. But nor do I want you to have to apply in advance for a license to develop software.
So we have an awful lot of very good case law, although that implies you are following a set of laws; but there is a whole load of careful thinking embodied in the US Constitution (I cite that one just because I happen to be familiar with it), written down over many, many years, about how you balance these complex issues. So I would start there and ask, well, what are the concerns? It’s generally a matter of individual liberty versus collective security. I can do what I like, except when it hurts you, so how do you draw the line? Because on one level you can say, well, anything that I do might affect you, so I’m not allowed to do anything.
Ubois:
A strong precautionary principle would do that.
Twidale:
Yes, and those are dangerous now, because of the leveraging power of bureaucratic systems. In the past, for any individual to have bad effects, they would have to be rich and powerful; they would have to have a whole factory to cause a lot of pollution. Now we can cause electronic, intellectual, data pollution using very small programs, like a virus.
Another way of analyzing it that I would like to explore is to say that with modern technologies we all become rich, so let’s study the rich of the past. How did rich people cope with this whole load of problems? One concept that would be very interesting to explore is privacy amongst the rich. In one sense the rich had absolutely no privacy: their servants knew who was sleeping with whom, clearly.
So to what extent did the rich, from their writings, appear even to care about it? Well, it’s possible that servants’ gossip couldn’t affect them, and I think often we, in our society, conflate what somebody knows about you with what can be used to your detriment. We say, okay, the biggest safety is to preserve total anonymity so nobody can do anything. That is one approach, if you can preserve anonymity. But another is to say: you’re like the lord of the manor, you don’t give a damn what anybody thinks, because they can’t do anything to you. And that, I think, is implicit in a lot of the arguments of the people in favor of total transparency, but they don’t actually acknowledge it.
Ubois:
Unfortunately, many people still think of privacy and anonymity as simply about protecting deviant behavior: if you don’t have anything to hide, why worry? Yet on the internet my joke about religion may have very real consequences with some other group of people in some other part of the world. The Thai government right now is having a fit because people have put clown noses on the King of Thailand and posted the images to YouTube. That example of the powerless servants is not the same as the faraway King who has the power to send an army against…
Twidale:
A lot of it is context. So normally in our social relationships we realize that what one says in a pub is very different from what one says in public.
Ubois:
So there’s no front stage, no backstage anymore, they kind of get blurred.
Twidale:
Yes, and I think that’s to be explored.