A new study provides a detailed guide to a code of conduct, the development of an ethical framework, and value-based design methodologies for AI application developers and researchers across Europe. The study is reported in AI Communications.
In the spring of 2017, the “Barcelona Declaration for the Proper Development and Usage of Artificial Intelligence in Europe” was released at the B-Debate event held in Barcelona, to stimulate debate among application developers, researchers, industry leaders, and policymakers on the risks and opportunities of AI in the current “gold rush” environment.
The paper, in addition to reproducing the entire Barcelona Declaration, analyzes the reasoning behind it; concentrates on ethics, reliability, and safety problems; and assesses progress on its major recommendations. The B-Debate, sponsored by Biocat and l’Obra Social la Caixa, in association with ICREA, the Institut d’Investigacio en Intelligencia Artificial, and the Institut de Biologia Evolutiva, brought together leading experts in Europe to discuss how to address the risks and benefits of AI. The event was distinctive for the participation of European researchers and developers in a discourse that had previously been dominated by UK and US business consultancy firms, legal experts, and social scientists. The majority of the participants signed the Barcelona Declaration, which remains open for signature and debate on the web.
Given the widespread interest in AI and the eagerness to develop applications that affect people in their daily lives, it is important that the research and application development community engages in open discussions to avoid unrealistic expectations, unintended consequences, and usage that causes negative side effects or human suffering.
Luc Steels, PhD, ICREA Research Professor, Catalan Institution for Research and Advanced Studies (ICREA) – IBE (UPF/CSIC)
Steels organized the Barcelona event along with his co-investigator Ramon Lopez de Mantaras, PhD, Research Professor of the Spanish National Research Council (CSIC) and Director of the Instituto de Investigación en Inteligencia Artificial (IIIA) – CSIC, Bellaterra, Barcelona.
Although most AI activity takes place in China and the United States, recent actions by national governments and the European Commission indicate that research and development are scaling up in Europe, progress that the debate around the Barcelona Declaration is helping to shape. One notable development was the recent allocation within the European H2020 framework program for the development of a platform and ecosystem to stimulate European AI research, as called for in the Barcelona Declaration.
The study repeats its appeal to European companies and funding agencies to invest in AI development at a scale sufficient for the challenge, and in a manner that allows all regions and citizens of Europe to benefit considerably.
A quickly growing body of literature, the Barcelona Declaration, and the B-Debate together raise persistent questions that continue to resonate: Is AI ready for large-scale deployment? Although AI is mainly being used for commercial purposes, can it also be used for the common good? What kinds of applications should be encouraged? How can the negative impacts of AI deployment be addressed? What are the new technical advances in AI, and how do they influence applications? What role should AI play in social media? What are the best practices for the design and deployment of AI?
“While rapid AI advances are widely anticipated with excitement, some anxiety about progress is necessary and justifiable,” stated Steels and Lopez de Mantaras. “The common fear that AI deployment will get out of hand may seem far-fetched, but there are already unintended consequences that need urgent remediation. For example, algorithms embedded in the web and social media have an impact on who talks to whom, how information is selected and presented, and how facts/falsehoods propagate and compete in public space. AI should (and could) help to support consensus formation rather than destroy it. AI systems should make it very clear that they are artificial rather than human. Fooling humans should never be a goal of AI.”
Moreover, questions are being investigated regarding the accountability and reliability of AI systems based on deep learning in applications involving rule-governed behavior (for example, law enforcement, human resource management, or financial decision-making). In addition, embedded biases can result in unfair parole decisions or prevent qualified job seekers from passing screening. Autonomous AI systems pose different concerns: who is accountable when something goes wrong with a self-driving car? Should limitations be placed on autonomous weapons?
“The Barcelona Declaration has set forth a governance framework to integrate best practices proactively, as part of the design process. We believe that AI can be a force for the good of society, but that there is a sufficient danger for inappropriate, premature or malicious use to warrant the need for raising awareness of the limitations of AI and for collective action to ensure that AI is indeed used for the common good in safe, reliable, and accountable ways,” said Steels and Lopez de Mantaras.
While the landscape of AI in Europe is changing rapidly through all these activities and debates, the researchers conclude that the problems raised in the Barcelona Declaration remain highly pertinent, and they renew their recommendations in a number of priority areas:
- Currently, there is an even greater need to explain what exactly is meant by AI when debating ethical and legal issues. Too little distinction is made between data-oriented learning, often called machine learning, and knowledge-based AI, which models human knowledge in computational terms. The ethical and legal issues and the applications differ between the two approaches; however, the full potential of AI can only be achieved by combining them.
- The question of how much autonomy should be given to an AI system in various applications, such as autonomous cars or weapons technology, is highly significant. One approach is to produce rules of governance and a legal framework that serve both as a guideline for developers and as a mechanism through which those negatively affected by the technology can seek redress.
- The focus should shift from machines substituting for human workers to machines leveraging and complementing human abilities in executing tasks and making better decisions. The automation discussion must concentrate not only on the number of jobs but also on the changing nature of work.
- There is still a long way to go to sufficiently support the design and deployment of AI in Europe. The Barcelona Declaration has helped to promote awareness and has given added momentum to government initiatives; however, consistent funding allocations and concrete actions that directly affect AI deployment, education, and research in Europe remain rare.