European Commission Coordinated Plan on Artificial Intelligence
Last week (7 December 2018) saw the release of the Coordinated Plan on Artificial Intelligence, a Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions.
The aim of the Commission’s strategy on Artificial Intelligence (AI) is to ‘maximize the impact of investments at EU and national levels, encourage synergies and cooperation across the EU, exchange best practices and collectively define the way forward to ensure that the EU as a whole can compete globally’. The plan provides a strategic framework for national AI strategies, outlining investment levels and implementation measures.
The plan's overview describes how the EU lags behind in private investment in AI, risking missed opportunities, a brain drain and dependence on technology developed outside its borders, and argues that both public and private investment must be scaled up.
The aim is to bring companies and research organizations together to develop a common strategic research agenda on AI, defining priorities in line with the needs of the market and encouraging exchanges between sectors and across borders. The plan includes both technical and academic training schemes, the building of a European data space with a particular focus on health and cybersecurity, and improvements in computing capacity.
The plan also describes how the Commission is working towards developing ethics guidelines with a global perspective and ensuring an innovation-friendly legal framework. A first draft of the guidelines will be published before the end of the year, followed by a final version in March 2019.
The plan and the annex, which contains a more detailed description of the implementation process, are available here.
AI Now
December also sees the release of the third annual report published by AI Now.
The AI Now Institute at New York University is an interdisciplinary research institute dedicated to understanding the social implications of AI technologies, and the first university research center focused specifically on AI's social significance. The report opens with a series of 10 recommendations, followed by an executive summary and the body of the report itself. The recommendations are as follows:
1. Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain.
2. Facial recognition and affect recognition need stringent regulation to protect the public interest.
3. The AI industry urgently needs new approaches to governance. As this report demonstrates, internal governance structures at most technology companies are failing to ensure accountability for AI systems.
4. AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector.
5. Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers.
6. Consumer protection agencies should apply “truth-in-advertising” laws to AI products and services.
7. Technology companies must go beyond the “pipeline model” and commit to addressing the practices of exclusion and discrimination in their workplaces.
8. Fairness, accountability, and transparency in AI require a detailed account of the “full stack supply chain.”
9. More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues.
10. University AI programs should expand beyond computer science and engineering disciplines.
Each recommendation is described in a short paragraph that explains the reasoning behind it. The explanations are clear and, taken together, provide a good overview of the report as a whole.
The executive summary that follows describes how, building upon previous reports, this publication addresses: the growing accountability gap in AI, which favors those who create and deploy these technologies at the expense of those most affected; the use of AI to maximize and amplify surveillance, especially in conjunction with facial and affect recognition, increasing the potential for centralized control and oppression; the growing government use of automated decision systems that directly impact individuals and communities without established accountability structures; unregulated and unmonitored forms of AI experimentation on human populations; and the limits of technological solutions to problems of fairness, bias, and discrimination.
The summary concludes by saying that the report develops these themes in detail, reflecting on the latest academic research, and outlines several strategies for moving forward:
Expanding AI fairness research beyond a focus on mathematical parity and statistical fairness toward issues of justice
Studying and tracking the full stack of infrastructure needed to create AI, including accounting for material supply chains
Accounting for the many forms of labor required to create and maintain AI systems
Committing to deeper interdisciplinarity in AI
Analyzing race, gender, and power in AI
Developing new policy interventions and strategic litigation
Building coalitions between researchers, civil society, and organizers within the technology sector.
In the following 30 pages the report guides the reader through a series of illuminating sections, beginning with an overview of some of the main AI stories that came to the fore in 2018 (from Cambridge Analytica to US state-funded projects based upon race recognition). The problem of surveillance is amply addressed, while automated decision-making systems in governance, and the question of who bears the burden of such developments, are also tackled in the first section.
Section 2 describes the emerging solutions that developed over 2018, before section 3 (the largest of the three) moves on to ask what is needed next.
The publication closes with a series of conclusions: it calls for the removal of the legal and technological barriers that prevent auditing, understanding, and intervening in these systems; proposes that AI companies waive trade secrecy and other legal claims that would prevent algorithmic accountability in the public sector; and argues that governments and public institutions must be able to understand and explain how and why decisions are made, particularly when people's access to healthcare, housing, and employment is on the line.
A breakdown of the recommendations is available here and the full report can be freely downloaded here.
————-