The latest issue of the Journal of Responsible Innovation opens with an editorial from Editor-in-Chief Erik Fisher.
In Ends of Responsible Innovation, Fisher reflects upon talk of the end of RRI due to its possible withdrawal (in terms of language at least) from the forthcoming EU Horizon Europe funding program. He argues, however, that deep structural ambivalence towards science and expertise means that responsible innovation as a necessary aspiration is here to stay, leading him to look forward to a shift toward more decentralized and diverse forms of responsible innovation policy that may allow learning and insight to flourish.
Fisher describes how the final issue of 2018 contains various discussions of the ‘ends’ of responsible innovation, including its aims, boundaries, and conditions, offering editorial comments that tie all of the articles, perspectives and reviews together.
In the first research article Reframing the governance of automotive automation: insights from UK stakeholder workshops, Tom Cohen, Jack Stilgoe and Clemence Cavoli report on the first systematic UK process of engagement with stakeholders regarding automated vehicles, using the findings from a series of workshops to raise questions that might inform the ongoing debate about the governance of self-driving cars.
In this entertaining and informative article, the authors describe how workshop discussions broadened the range of questions raised beyond the typical issues of risk and responsibility to include infrastructure and social change. Examples given include: what type of AV infrastructure should be supported, and by whom? Should public authorities be responsible for investing in new infrastructure? How should driver training be changed, either to keep drivers engaged or to adapt driving skills to AVs? Who should own and control the data generated by AVs? What ownership models should be supported for public mobility, socioeconomic opportunity and environmental benefit? Will the predicted loss of jobs be compensated by the creation of new employment and, if not, how can this be mitigated? Will the mass use of AVs contribute to decreased physical activity?
The authors conclude that ‘as AV innovation and investment continues, there is a clear need for the active involvement of governance bodies, informed by critical social science research’.
In Liminal innovation practices: questioning three common assumptions in responsible innovation, Mayli Mertens argues that three assumptions made about RI do not hold for liminal innovation practices in clinical settings: that emerging technologies require assessment because of their radical novelty and unpredictability, that early assessment is necessary to influence the innovation trajectory, and that anticipation of unknowns is needed to prepare for the unpredictable. Because these assumptions fail in such settings, she calls for RI approaches that move away from anticipation of the unknown and uncertain and return to observation of the known and predictable within clinical practice.
The author argues that different contexts require different approaches, describing how in the clinical setting research and care are practiced together. In such a scenario, anticipation of futures may be less important than present results.
In Can the technological mediation approach improve technology assessment? A critical view from ‘within’, Bas de Boer, Jonne Hoek and Olga Kudina problematize the ambition of the technological mediation approach to address ethical concerns from ‘within’ human-technology relations, an ambition that leads to ethical Constructive Technology Assessment (eCTA).
The authors question whether the technological mediation approach can indeed function as a complement to, or is rather a replacement for, existing practices of ethical TA, raising some interesting questions about the relationship between ethics, humans and technology. After explaining how the technological mediation approach aims to fill the normative deficit in TA by focusing on qualitative ethical concerns, and the normative basis it offers for addressing ethical concerns from ‘within’, the authors offer two readings of the relationship (strong and weak) and assess its effectiveness in relation to TA and CTA.
Reading this paper, the concept of technological mediation and its relationship to TA and RI really does offer some food for thought.
The issue continues with a Discussion Paper and series of responses.
In Introducing the dilemma of societal alignment for inclusive and responsible research and innovation, Barbara Ribeiro, Lars Bengtsson, Paul Benneworth, Susanne Bührer, Elena Castro-Martínez, Meiken Hansen, Katharina Jarmai, Ralf Lindner, Julia Olmos-Peñuela, Cordula Ott and Philip Shapira outline and reflect on some of the key challenges that influence the development and uptake of more inclusive and responsible forms of research and innovation.
The article opens with a discussion of the Collingridge Dilemma and institutional attempts to address the problems it raises, arguing that research and innovation governance discourses need to be reoriented around key dilemmas of societal alignment (the alignment of scientific goals with those of broader society).
After presenting a set of challenges, and potential approaches that may help in addressing them, the authors outline the main ideas underpinning the concept of societal alignment, beginning with ‘key dimensions of relevance to the governance of science, technology and innovation. These are: the kinds of epistemic communities taking part in the production of knowledge, research and innovation; the governance focus and associated mechanisms; the nature of the governance problem; and the scope of action and analysis’.
A discussion of the creation of value and societal benefits by public institutions is followed by discussions of the role of the private sector and of grassroots and co-production processes. The authors then turn to the challenges of sustainable development and social justice, stating that ‘a crucial element missing from discussions is recognition of the inseparability between the concept of sustainability and that of equity’.
In the concluding section, From Social Control to Social Alignment, the authors argue that ‘the challenges outlined in this article suggest that the dilemmas of societal alignment emerge from a failure in acknowledging diversity of publics and institutions, situatedness of innovation processes and normative aspects in the governance of science, technology and innovation. These are important points that need to be considered as we continue developing frameworks for more inclusive and responsible forms of research and innovation’.
The first comment on this discussion piece comes from Alfred Nordmann. In The mundane alternative to a demiurgical conceit. Comment on Ribeiro et al. ‘Introducing the dilemma of societal alignment for inclusive and responsible research and innovation’, Nordmann argues that the authors’ framing of the alignment predicament runs into trouble because it is presented as a problem to be solved rather than one to be constantly addressed, but that it does point to a more mundane and workable alternative: muddling through.
In Cataloguing the barriers facing RRI in innovation pathways: a response to the dilemma of societal alignment, Jennifer Kuzma and Pat Roberts argue that the authors underestimate the barriers to engagement faced when doing RRI, proposing ways in which such barriers can be classified in order to further explore a less optimistic (but more effective) approach to RRI and engagement.
David Guston provides the third and final comment piece with Damned if you Don’t… Guston argues that Ribeiro and co-authors make a laudable attempt to offer a ‘dilemma of societal alignment’ (DSA) to complement the Collingridge dilemma, but points to several shortcomings, including doubts that what they propose as a dilemma can actually be regarded as one, and a lack of descriptive clarity in their proposal.
The issue continues with a Review section.
In Chance as a value for artificial intelligence, Alexei Grinbaum raises a question (also posed in the author’s recent publication Machina Delatrix) about artificial intelligence as a machine: if a machine is the innovator, what meaning can we ascribe to responsible conduct?
He argues for the benefits of chance within AI through a biblical example that demonstrates trust in a system beyond an individual’s control, a position in which programmers of AI systems also find themselves.
The issue closes with another review. In Weapons of math destruction, Thomas Woodson reviews the book of the same title authored by Cathy O’Neil.
Woodson describes O’Neil’s argument that the models used to make decisions about individuals, based on data collected about them, are far from neutral mathematical machines. They are built upon faulty suppositions and are flawed, a problem that is most damaging to the more vulnerable communities within society. The use of proxy models, lack of updating and poor statistical practices are cited as some of the major problems, with O’Neil writing from her own experience working as a data analyst.
Once again this issue of the Journal of Responsible Innovation offers insightful and entertaining reading, and we congratulate all of the authors and the editorial team on their accomplishments. Several of the articles reviewed above are available for free download through the links in the text, and we urge our readers to browse through this and previous issues.