This document is intended to serve as a starting point for reflection, guidance, and discussion for the ICAI labs in Amsterdam. It is a living document in which senior artificial intelligence (AI) researchers leading an ICAI Lab in Amsterdam express what they consider the core values underlying their work, with the aim of facilitating constructive and responsible AI research in the Netherlands and beyond. We welcome feedback on this document and will update it continuously.
In accordance with the Advisory Report on Ethical and Legal Aspects of Informatics Research by the Royal Netherlands Academy of Arts and Sciences, our researchers have a core responsibility to ensure that the research they conduct is ethically sound. Like all science, AI is not practiced in a vacuum but in interaction with society. We therefore want to be explicit about our core values and the guidelines we follow to uphold them. They apply to our researchers, to the content of our research, and to the choice of our partners.
Considering that this field of research develops at a fast pace and with high uncertainty, we continuously revise our core values and guidelines to ensure they reflect the expectations of today's society. ICAI wants to nurture existing dialogues, and to invite new ones, with different sectors about the ethical challenges presented by AI research, and thereby facilitate ethically sound and responsible AI research.
Social values
- We strive for a positive social impact and economic value of our research ("AI for social good"), as expressed in the Sustainable Development Goals of the United Nations and as endorsed by the Dutch universities.
- We subscribe to the Ethics Guidelines for Trustworthy AI as formulated by the High-Level Expert Group on AI of the European Commission.
- We are conscious of the transformative capacity of AI in shaping society, creating new values, and reconciling tensions and dilemmas. We contribute to social awareness of how the world is changing, and will change, under the influence of AI by initiating and taking part in the public debate about the role of AI in society.
Academic values
- We defend academic freedom and independence as formulated in the Dutch code of conduct. We publish our scientific results, including publications, datasets, and experimental code, under open access, with the exception of trade secrets, protected intellectual property, and sensitive data, on which we make transparent agreements.
- We are transparent about where we stand academically, where we are going, and how we intend to get there. We are open about the dilemmas we may face, how we will deal with them when they arise, and what choices we make.
- We regard the continuous training of scientific talent as one of our core objectives.
- We engage in dialogue with, and build bridges to, both emerging and established scientists in other countries and cultures, contributing to national and international communities for knowledge sharing, collaboration, and innovation.
- We strengthen our local ecosystem for AI research and its applications, in collaboration with universities, industry, government, civil society and social institutions.
- We promote an academic culture that recognizes and rewards academics for learning by doing and for bridging, through AI, the gaps between a) fundamental and applied research, b) scientific fields, c) research and education, and d) science and society.
Signatories
Marieke van Erp (Cultural AI Lab)
Stratis Gavves (POP-AART, QUVA)
Theo Gevers (ATLAS Lab, Delta Lab)
Sennay Ghebreab (Civic AI Lab)
Paul Groth (AIRLab Amsterdam, Discovery Lab)
Hinda Haned (Civic AI Lab)
Frank van Harmelen (Discovery Lab)
Laura Hollink (Cultural AI Lab)
Jan-Willem van de Meent (Delta Lab 2)
Joris Mooij (Mercury Machine Learning Lab)
Jacco van Ossenbruggen (Civic AI Lab, Cultural AI Lab)
Maarten de Rijke (AIRLab Amsterdam)
Clarisa Sánchez (AI for Oncology)
Cees Snoek (AIM Lab, ATLAS Lab, QUVA)
Marcel Worring (AIM Lab)