Scott Berkun is the author of The Myths of Innovation, which examines common preconceptions and misconceptions about innovation, and Making Things Happen, a book about project management. He has written for The New York Times, The Washington Post, HarvardBusiness.org, and Wired magazine, and has commented on innovation for CNBC, MSNBC, and National Public Radio. He's also a speaker for hire, has taught at the University of Washington, and is well known for his work at Microsoft on Internet Explorer, Windows, and MSN. Currently based near Seattle, Washington, he studied design, philosophy, and computer science at Carnegie Mellon University, and graduated with a B.S. in Logic and Computation in 1994.
In this wide-ranging discussion, Berkun offers some useful definitions of innovation, discusses resistance to innovation, and outlines some ideas on corporate responsibility (and irresponsibility).
Berkun notes that there is considerable confusion about the meaning of the term “innovation,” and that in corporate settings, the desire to be “innovative” may simply equate to the desire to be “successful.” In other words, “innovation” has become something that is seen as universally desirable, but digging a little deeper makes it clear that this is not the case: people, institutions, and cultures frequently resist innovation, for good and bad reasons.
There are some enduring resistances to responsibility as well. Many are economic. “Any corporation, given the protections it is given, has very little outward motivation to take responsibility for things that it doesn’t have to, so you’re left with the discretion or the morality of whoever the leaders in a given company are,” Berkun says. “Why didn’t they put seat belts and anti-lock brakes in the cars? Because it’s expensive. There’s no other reason for it.”
But other resistances to responsibility come from the mindset of most innovators. “There is an inherent conflict of psychology of someone who is so motivated to make a big change, for them to simultaneously even consider the possible negative ramifications of the change that they’re trying to make happen,” Berkun explains.
Scott Berkun
==
Ubois: Myth has something of a double meaning – myths can be explanatory, or they can be more like misconceptions. What do you mean by “Myths of Innovation?”
Berkun: For the most part, the myths were really meant to be a structure for history and advice about innovation. A lot of people say, "Oh, yeah, of course, the myths." I think the first chapter is the Myth of Epiphany, the flash of insight. Everyone says, "Oh, yeah, of course, I know it's not true." But then they go about managing teams of people, looking for flashes of insight. Even though they know it's not true, their behavior is still built around assumptions they don't fundamentally believe, which is really strange.
Ubois: It seemed like a lot of the book was oriented towards people in commercial organizations. If you think of stages of innovation, stretching from really basic research to user adoption, the place where it seemed like you felt people could be most effective was in an organizational setting.
Berkun: True. That's a reflection of my own experience working in the tech sector for about 10 years. The book has a strong bias towards technology, so that makes it harder to fit in the story about, say, the drafting of the U.S. Constitution. And you could call that an innovation, too. The book is biased towards both technology and capitalistic, corporate-type environments.
Ubois: What in your mind changed over the course of writing the book? Did you find things that you didn’t expect that set you off in a different direction?
Berkun: The number one thing that changed was my interest in history. There's this huge mine of lessons learned that have been entirely ignored, that are repeated again and again and again and again, especially when it comes to creative thinking and breakthroughs, or people who are trying to do that stuff. So, in the course of writing the book, I felt like, despite all my education, I finally became a fan of history.
Ubois: So, kind of a search for enduring principles concerning innovation?
Berkun: If you can get down to the essence of a problem, solutions should work well across domains – the stuff that changes every five years, that’s mostly superficial. Core problems are definitely an interest of mine, and I hope the book provides insight into some of those.
Ubois: One of the timeless issues you discuss is resistance to innovation. I’m wondering if you can discuss different varieties of resistance, maybe more or less legitimate resistance to innovation versus self-interested, or anti-competitive resistance. You had actually quite a list of reasons why innovations were hard for people in organizations to accept.
Berkun: We don’t like to believe this about ourselves as people, but we are super resistant to change. All of us have daily habits that we protect vehemently. We are creatures of routine.
Innovation, creative thinking, breakthroughs, that whole vocabulary is about breaking routine. We have this huge psychological conflict that very few people talk about: we think we desire change, progress, and innovation, but our actual behavioral motivations are the opposite.
Every organization is put together to be resistant to most kinds of change. That’s the goal in any organization.
Ubois: One of your former colleagues, Marc Smith, called innovators ‘marginal actors in the landscape.’ He said that only marginal actors were really interested in innovation, that it was almost never the dominant market owner or incumbent power.
Berkun: The vocabulary here always trips me up. When people start using the word, “innovation,” whenever I am brought into a company to help them with their planning, I always stop them and say, “Well, what do you mean? What does that mean?” Because that word is so loaded and so abused.
Ubois: Well, maybe that's a good thing to do here: we could define innovation in this context. As you were using the term, innovation means?
Berkun: Usually innovation means one of three things. An idea that is new for a given community, that's one definition. Another definition is something that is cool. It doesn't even have to be new; it's put together in a way that makes it cool and interesting. Third is successful. Innovators are successful. People say, "Oh, we want to be innovators." What they really mean is they want to be successful, they want to be market leaders. So, we have the new idea, the cool idea, and the successful idea. Those are the three most common uses of the word.
When people say, "Oh, we want people to be innovative," I say, "Do you want them to come up with new ideas, do you want them to come up with interesting ideas, do you want them to come up with good ideas, or do you want them just to be successful?" And when they think about it, it's usually the last one that they really want.
Ubois: So, “sustainable innovation” can be one of 20 different things?
Berkun: Oh, boy. To be completely honest, the places that I see that are the most successful don't use that vocabulary at all. They talk about goals, they talk about problems they want to solve for customers. And they talk about how to empower people to solve those problems. And by picking hard problems or complex problems, they set themselves up to be seen as innovators, because they're going to do a better job at solving big problems. So, that vocabulary, I don't often see used in practice at the level of a team that's actually building the stuff that's later going to be called innovation. "Sustainable innovation," "disruptive innovation," all that language tends to be the language of executives and CEOs and people who are not really managing the creation of new stuff.
Ubois: That’s often a policy term. People in policy circles use it a lot.
Berkun: Yeah.
Ubois: But, you’re actually pointing out something else, which is around problem quality and problem identification. We’re thinking about responsibility in innovation or we’re trying to spark discussions about responsibility in innovation, and obviously, focus is often where that starts. If you are responsible for a basic research budget and you put it all into better technologies of war, that might be less responsible than putting it into medical systems.
Researchers have a lot of different options about what they want to study, and funders have a lot of different options about where they want to invest. For both of them, being able to distinguish good problems from bad, or useful and solvable problems from unsolvable ones, or well-framed ones from poorly framed ones, is really important. And I'm wondering if there are some things that you've defined about problem quality that people could apply early on.
Berkun: Definitely, definitely. You're leading me totally in the direction I like with that question, which is that, A, it even exists, this idea that you can define the problem.
There are a lot of organizations where they just want something solved, so they jump as quickly as they can to making things, solving things, and trying, when they haven't really spent time understanding the problem.
So, there's a chapter in the book that talks all about famous innovations or breakthroughs that were largely driven by someone who just thought about the problem differently. They defined the problem in a different way than their competitors did, and that's what enabled them to succeed. They weren't necessarily smarter, and they didn't necessarily have a larger R&D budget. They just spent a little bit more time thinking carefully about the problem, or about different ways to define the problem, and that's what led to their success.
And this is a skill that designers and some other creative professionals are taught. But it's not really part of most of the technology world, or of business culture, this idea that you can be creative in how you define the problem. An example is Einstein: E = mc², relativity, was him framing a question that other people were asking in a very different way. Thomas Edison is another good example: he didn't see the problem of the light bulb as creating a good light bulb, he saw the problem as power. Forget the light bulb. How are you going to power the thing? You need a system.
Ubois: There are already gas pipelines in everybody’s houses, how can we make electric light more attractive . . .
Berkun: Exactly, so what if they make a light bulb? They can’t make money off of that. They’ve got to think about the problem in a more complete way. It’s not the only way to be successful with coming up with new ideas, but it does seem to help a lot if you recognize how many different ways there are just to define the problem, and that some of those ways will open pathways of thinking that’ll be different, depending on how you define the problem, and that that’s important.
Ubois: Maybe that’s a good transition into this issue of responsibility. Part of it is seeing things from other people’s perspective.
Berkun: Saying, OK, our problem may not just be to sell a car. Our problem might be to sell a car that has some safety features in it. There are plenty of excellent summaries about that story. The tobacco industry is another interesting example. There’s clearly a huge gap between the problem that the corporation thinks it’s responsible for and the problems that everyone else thinks they are responsible for. This becomes a philosophical and almost an ethical issue very quickly. We’re talking about how much do corporations owe communities.
Ubois: Yes, and what kinds of costs can they externalize?
Berkun: That's a word that Jack Welch used when he was at GE; he used to always talk about externalities. Anything that they could basically make the government or the community take care of was called an externality, so it was sort of an ambition of theirs to externalize things that they felt they could get someone else to take responsibility for.
Ubois: Like dredging the Hudson for what they dumped into it.
Berkun: Exactly. Now, you’re not even in innovation anymore. It’s really about what’s your ethical view of the responsibility you have as someone who owns a corporation, right?
Ubois: It’s a challenge to find influence on that sort of behavior. There are things like product liability. And there’s the individual conscience of people in the organization to appeal to. And there are usually professional codes and standards of conduct, but it is a hard sell. For us at the Bassetti Foundation, as we try to get people to want to talk about this topic, to even discuss it, one of the issues is who pays? Are we just simply insisting that people take on additional cost? That’s a hard sell.
Berkun: Yeah, well, ethics and morality often are a hard sell, especially in the American corporate ethos, which is sort of defined by the absence of those. I'm being cynical here, but largely, that is not their operating procedure.
Ubois: Well, the incentives and the measurements are not right for it, either.
Berkun: Yes, and the common thing is to create a separate philanthropic arm, so that a company does all these things that are probably not responsible and can deny them by saying, "Well, look, over here, we have a separate organization that does non-profit work . . ."
I was talking to someone recently about the Gates Foundation. He's following the path of Carnegie, Rockefeller, Vanderbilt – all the robber barons did much the same thing. I'm not an ethicist, but does that work out, to abuse people on one hand and then give back? It's a very common pattern, at least in the United States.
Ubois: That does speak to some desire for responsibility later on, I mean, whether it’s an issue of different life stages, or a different status hierarchy to climb up, or a more engaging problem, or just people who feel finally like they have a luxury to engage that sort of problem, whereas before, on the way up, they did not.
Berkun: Well, the primary question that started all of this, the reason for contacting me, was the connection between innovation and responsibility. I know a few stories of researchers who are trying to make discoveries. Some of the philosophy of science gets into the ethics of science and the ethics of making a discovery, the ethics of discovering DNA, and what ethical or moral responsibility people who make breakthroughs happen have.
Ubois: The sort of obsessive focus needed to make real scientific progress tends to reduce peripheral vision, or reduce appreciation for effects in other places. You were just talking about DNA. One case is prenatal testing, which has, in fact, been used for gender selection, and now, by some estimates, these tests that were built to detect disease have resulted in a population loss of a few tens of millions of missing women.
Berkun: Right, and these are questions that most people who call themselves innovators, or whatever fancy word they want to use, cutting edge, breakthrough, whatever, don't consider; by virtue of the fact that they're so interested in trying to make a breakthrough, they can't possibly be taking that much responsibility for the outcome.
There is an inherent conflict of psychology of someone who is so motivated to make a big change, for them to simultaneously even consider the possible negative ramifications of the change that they’re trying to make happen. So, the last chapter in the book is stories about that.
Ubois: I loved that – the examples that you had there, DDT, aviation, the car, the personal computer, cell phones – were very interesting that way.
Berkun: Even the story of the Wright Brothers: they thought planes would be used for peace. You can find lots of these kinds of stories.
Ubois: There’s usually a Utopian dream around innovation, isn’t there? The world is going to be a lot better when this innovation is here. And it always brings some other surprise or unintended consequence that needs to be mitigated.
Berkun: Yes, and maybe that's a productive direction for this whole line of thinking to go in: there should be an expectation upon anyone who says they have a new breakthrough. You have to expect there's going to be some negative effect, and they should at least be aware of it, they should at least be looking for it.
Ubois: Sort of an environmental impact statement.
Berkun: Yeah, exactly. It’s not that time-consuming or resource-intensive to say, “When I create this new kind of operating system, I can sit down for an afternoon or someone in my organization can sit down for an afternoon and say, ‘Wait, let’s assume all of our goals get met and we solve all these great problems that we think need to be solved.’ What’s the possible negative impact and on whom?” And we don’t have to publish this, but our responsibility as people who have the power to create new things should be to spend an afternoon at least thinking about it. That seems like a reasonable thing to ask.
Ubois: And I think you can even find these general questions. That's why I like that approach to the UN Declaration of Human Rights by Jeff Jonas; it's broadly useful. Another approach is to publish more. I think the open access movement is very interesting in the sense that if your research process is a little more transparent, other people can see it early. That's another thing the bioethicists have talked about quite a bit. What are the off-label uses for this drug, or what are the other possible applications of this technology?
Berkun: That's a good point. Yeah, I'd buy that. It's not going to guarantee that you'll take responsibility, but it definitely increases the odds that someone will offer a different perspective that may influence how you go about what you're doing.
Ubois: Yeah, another one that's been good is to consider a bright line around the lab. This is particularly true for biotech, where there's a big difference between doing things with organisms in the lab and releasing them into the wild. Monsanto is putting a lot of stuff out into the wild, and there are a lot of questions about transgenic crops and how that is going to interact with other plants. So one answer is, well, play all you want inside the lab, but it's very different when you expose the rest of the world to your new creation. It's a very clear example of limiting the effects of an innovation.
Berkun: That makes sense. The other example, which I’m sure you thought of before, but I can’t get it out of my mind right now is Google. And Google’s philosophic attitude, “Don’t be evil.”
But I’m cynical about corporations given their code. There was a landmark case in the 1860’s in the United States, where the Supreme Court – this is where corporations in the United States became . . .
Ubois: People.
Berkun: Yes. That was critical. And this whole question of innovation and responsibility: given that in the United States technology innovation is driven largely by corporations, there's this huge amount of leeway that was provided for them by that decision. There's a line that I can draw in my mind from the sins of the automobile companies in the 70's, to whatever you want to say about Microsoft's anti-competitive practices in the 90's, to whatever privacy and rights issues we're going to have with Google and Facebook. There's a straight line between those three things.
Ubois: And yet, it’s really hard for people in a contemporary day-to-day setting to reference that. If you were sitting in a product meeting at Microsoft even today, it’d be probably difficult to say, “Hey, we’re going down the path of IBM’s anti-competitive practices of the 70’s,” or something.
Berkun: You'd be ostracized for bringing morality and questions of social responsibility into what's primarily a corporate capitalistic culture. Any corporation, given the protections it is given, has very little outward motivation to take responsibility for things that it doesn't have to, so you're left with the discretion or the morality of whoever the leaders in a given company are. And that's the point the Google founders are trying to make: we're going to take more responsibility than Microsoft did, which is a good thing for them to say.
Ubois: There is also an element of who ultimately gets to decide? The China privacy and censorship decisions that Google has made are interesting, right? They decided to do what they needed to do to remain available even in some compromised fashion because on balance, they said, it’s a better thing to do.
Berkun: Yes, that’s actually a great example. It’s fundamentally a censorship state and they’re basically saying, “We will endorse that.”
So again, it’s the same question. Why didn’t they put seat belts and anti-lock brakes in the cars? Because it’s expensive. There’s no other reason for it. They knew it was safer, but it’s expensive. Any of the car companies in the 70’s could have said, “We’re going to brand this new car as the safe car.” Those car companies could have used that as an opportunity for advantage. I don’t know enough of the history to know why they didn’t.
Ubois: Maybe belief in the viability of that in some way, right? Is that a viable corporate strategy? Is that a viable business strategy?
Berkun: Like you said before, the dominant player is unlikely to be the innovator, unlikely to take risks. They may have felt they were doing well enough, they didn’t feel the pressure to take a big risk like that. But, they could have. They could have made the seat belt, it could have been like the iPhone, you know? It could have been this thing about ease of use and about safety and taking things in another direction, but they didn’t.
Ubois: Just a couple of other questions – one is irresponsibility in innovation. There's some discussion of that in Chapter 10, and the DDT example is particularly good. What are other negative examples people can learn from? Tom Lehrer had the line about Wernher von Braun, "Once the rockets are up, who cares where they come down?"
Berkun: The tobacco industry is a ridiculously prolonged story of irresponsibility, and of how long and how disgustingly they were able to get away with what they were doing. You could think of McDonald's and Happy Meal marketing to children. Fast Food Nation has a whole chapter about that, which is excellent. And the Happy Meal was an innovation. You could say they were trying to sell more cheeseburgers, and this is a way to do it: you market to children.
Ubois: If you were going to sum up the values that you're advocating – it's implicit in the book, this idea of innovations that embody your values, or writings that embody your values in some way – what are the values that you came away with when you published this book? Was there a set of them that you would like to reorder in society in some way, or some that have been neglected and you'd like to push up in rank?
Berkun: I may have a different answer, but the first thing that comes to mind is just to value history. Whatever thing you think is so special and unique to the moment, if you spend 10 minutes, you can find someone smart and capable in the past who was in the same situation, and learn something from how they handled it. That's number one. I really feel like the book secretly is a history book for tech-sector business capitalist types.
Ubois: So “in praise of prior art.”
Berkun: Yeah. The value of understanding something about those who came before you.
Ubois: It's anti-waste in a sense, right? We don't duplicate effort unnecessarily. We need systems for discovering those kinds of stories…
Berkun: That's definitely one of the things motivating me. I hoped the book would have that effect on people. "There's this Edison guy…maybe I should…oh yeah, this has happened before, someone's tried to do this before. I can learn something from what he did, and the odds of me making things better in the world go up because I'm reusing this knowledge."
Ubois: It's often gratifying to find some continuity that your work is a part of. Other people have had this problem, and they've dealt with it successfully.
Berkun: Yes, absolutely. There are connections. Helping people make connections with the past – that'd be number one.
Ubois: Thank you very much.