Responsible AI
“What exactly do we want from our technology?” This question has become increasingly prevalent, and for good reason: not everything that is technologically possible is also desirable. Data School seeks answers to this question in its research pillar Responsible AI.
At the heart of this is data ethics, in other words: responsible data practices, algorithms, and AI. In this way, we go beyond questions about technical possibilities (“What can we do?”) and legal boundaries (“What are we allowed to do?”) to ask what we should do.
Practical tools for the professional field
Data School develops practical tools for the creation and monitoring of ethically responsible AI. Examples of these tools include:
- The Data Ethics Decision Aid (DEDA), developed in 2016 to support organizations in their ethical deliberations, is still widely used by numerous (government) organizations, both within and outside the Netherlands.
- The Fundamental Rights and Algorithms Impact Assessment (FRAIA), published in the summer of 2021, provides guidance for ethical considerations surrounding algorithms that may have a significant impact on human rights.
- The AI Performance Review is intended to monitor AI throughout its life cycle.
Advice and training
Beyond these practical tools, Data School supports organizations in implementing responsible AI by facilitating discussions, moderating assessments, and providing advice. It also offers a variety of training courses for professionals on the topic.
Academic publications
The insights Data School gains in the field are translated into academic publications on the impact of digitalization on public administration, democracy, and public space.