The SIDN Fund and Topsector ICT are working together on the call ‘Responsible AI in Practice’. It is the first call focusing on the third component of the Digitalisation Knowledge and Innovation Agenda (KIA): reflections on digital information technologies. This component revolves around developing and contributing the framework conditions needed to digitalise responsibly.
This call is about developing practically applicable frameworks, preconditions and design principles (‘by design’) for responsible AI and AI applications, and solutions built on them. Those applications may focus on a specific sector or on a specific societal challenge, but the starting point is always to make concrete what doing so responsibly means. A matchmaking event was organised in The Hague on 6 March 2025 to bring together and inspire interested parties. You can read a report on the event here.
€1.1 million in grants
Proposals had to be submitted by 31 March 2025 by a knowledge institution working with at least one business. After the call closed, the initial assessment made clear that a large number of high-quality applications had come in: 39 proposals in total.
Following that initial assessment, 16 proposals were put before the Advisory Board, which gave a positive opinion on the following 10:
- From Responsible AI to Explainable AI in understanding the online public debate
- Democracy in Traffic: a 5-star model for responsible deployment of iVRIs
- Validation framework reliable LLMs for public information provision
- Responsible AI in the library
- DiBiLi: Diagnosing Bias in Library Recommender Systems
- The Responsible Design and Use of AI-Driven Period and Fertility Tracking Technologies ("Femtech")
- Performance review for AI (FG-AI)
- Explaining Fairness scores of AI-based assurance risk and analytics Responsibly (eFAIR)
- Responsible AI for Clinical Decision Support (RACliDeS)
- Fair AI Attribution (FAIA)
The total budget for this call is €1.1 million, with each project receiving up to €125,000 in funding. A condition is that at least 20% of each project's budget is co-financed by the knowledge institution.
More about the projects
- From Responsible AI to Explainable AI in understanding the online public debate
Lead applicant: Utrecht University of Applied Sciences
Partners: Municipality of Utrecht, Netherlands Enterprise Agency, Province of Utrecht, Netherlands Police and V&R.
Grant: €73,650
Brief Description: The project "From Responsible AI to Explainable AI" is developing a reflexive learning environment for communication professionals to better analyse online polarisation and disinformation. The existing learning environment/tool (called BEP) is being further developed from responsible to explainable AI, with transparency about algorithms and a focus on human-machine collaboration. The main question of this project is: "How do we develop a data-driven learning environment and an AI-driven analysis tool in which the best practices from the responsible AI framework are understood and applied by users? How do we move from a responsible AI learning environment to an explainable AI learning environment?" The outcomes of this project can serve as an example for many other projects in which algorithms and AI are used.
- Democracy in Traffic: a 5-star model for responsible deployment of iVRIs
Lead applicant: University of Groningen
Partner: The Green Land
Grant: €117,580
Brief Description: This project is developing a 5-star model to help municipalities democratically deploy intelligent traffic lights (iVRIs). It focuses on ensuring public values such as transparency, fair consideration of interests and citizen participation in AI-controlled traffic systems. The model is being tested in collaboration with municipalities and based on ethical frameworks such as Meaningful Human Control. In two municipalities, the project will explore this in four work packages: theoretical deepening, empirical embedding, model development and pilots/roll-out. The results of the project will be documented in reports, methodological manuals and, where relevant, open-access scientific publications. There will be a central project website (democratieinhetverkeer.nl), workshops for public sector bodies, businesses and civil society organisations, and documentation usable for educational purposes.
- Validation framework reliable LLMs for public information provision
Lead applicant: Eindhoven University of Technology
Partners: the Dutch Judiciary (de Rechtspraak), Stichting Algorithm Audit, T&T Data Consultancy and Deloitte Consultative Services
Grant: €82,044
Brief Description: This project is developing a practical validation framework for the responsible deployment of Large Language Models (LLMs) in the public sector, building on the Judiciary's successful voorRecht-Rechtspraak pilot. VoorRecht-Rechtspraak provides conflict-mediation support in construction disputes and disputes involving owners' associations: users share information with an LLM-driven assistant and receive advice in understandable language. Using this case study, the project aims to create a validation framework that supports organisations in ensuring privacy, robustness and transparency when deploying LLMs to provide information to citizens.
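What a single check inside such a validation framework could look like is sketched below. It is a minimal illustration only: the test questions, guardrail phrases and length budget are hypothetical, and `answer_fn` stands in for whatever LLM-driven assistant is under test.

```python
# Minimal sketch of one validation check: replay a fixed set of citizen
# questions and flag answers that violate hypothetical guardrails.

TEST_QUESTIONS = [
    "My contractor delivered faulty work. What are my options?",
    "Our owners' association disagrees about maintenance costs.",
]

# Invented guardrail: phrases that would signal overconfident advice.
BANNED_PHRASES = ["guaranteed to win", "no need for a lawyer"]

MAX_WORDS = 300  # invented plain-language budget

def validate(answer_fn):
    """Run each test question through the assistant and collect violations."""
    failures = []
    for question in TEST_QUESTIONS:
        answer = answer_fn(question)
        if any(p in answer.lower() for p in BANNED_PHRASES):
            failures.append((question, "overconfident or misleading advice"))
        if len(answer.split()) > MAX_WORDS:
            failures.append((question, "answer exceeds plain-language budget"))
    return failures

# `answer_fn` would wrap the real assistant; here a stub keeps it runnable.
print(validate(lambda q: "You could consider mediation as a first step."))
```

A real framework would of course cover far more than phrase lists, such as privacy checks on what the assistant stores, robustness checks against rephrased questions, and human review of sampled answers.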
The results of the project will be shared through the consortium partners with, among others, the VNG, VNO-NCW, AI Coalitie 4 NL, the Rijks Innovatie Community, ECP and the Ministry of the Interior, and at conferences organised by iBestuur and Binnenlands Bestuur.
- Responsible AI in the library
Lead applicant: Eindhoven University of Technology/JADS
Partners: Datacation and Eindhoven Library (on behalf of 8 collaborating libraries in the Brainport region)
Grant: €125,000
Brief Description: This project is developing a transparent and explainable AI system for libraries, based on the existing Bookbot (an AI book advisor). It focuses on solving three key problems:
(1) Lack of user understanding of how AI arrives at recommendations;
(2) Lack of practical guidelines for responsible AI in the library sector;
(3) Tension between personalisation and diversity (filter bubbles).
The aim of this project is to create a framework for explainable, fair and user-friendly AI in libraries, with concrete tools to increase transparency. A prototype transparent AI interface will be built (integrated into Bookbot), as will an Explainable AI Toolkit with implementation guides. Educational materials for librarians and users will also be developed, and academic publications on transparency and user trust will follow.
- DiBiLi: Diagnosing Bias in Library Recommender Systems
Lead applicant: Centrum Wiskunde & Informatica (CWI), the national research institute for mathematics and computer science in the Netherlands
Partners: KB (National Library of the Netherlands), Simon Dirks’ software company and Bookarang BV.
Grant: €123,900
Brief Description: Recommendation systems play a major role in our digital interactions, especially in the culture and media sector. They help users by providing relevant suggestions, which can be valuable for public institutions such as libraries. Yet these systems are under scrutiny because of their tendency towards bias, such as reinforcing stereotypes or filter bubbles. As a result, libraries are reluctant to use automated recommendations for fear of undermining public values such as inclusivity. This project focuses on developing a diagnostic dashboard that reveals bias in library recommendations. The dashboard analyses borrowing behaviour and recommended book lists, with a special focus on biases against certain author groups.
The project builds on previous research within the Cultural AI Lab, including Savvina Daniil's PhD research on responsible recommendation systems. It also ties in with the KB's digital transformation, in which a personalised online Library Platform is being developed. In addition, the project contributes to the debate on ethical AI in the cultural sector and provides ICT companies with tools for building better systems. The results will be shared through the diagnostic dashboard itself, guidelines and reporting, a scientific publication, presentations at libraries, ICT companies and scientific events, and through practical application at libraries and ICT companies.
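As an impression of the kind of diagnostic such a dashboard could run, the sketch below compares how often author groups appear in recommendations versus in actual borrowing. The group labels, counts and the exposure-ratio metric are all hypothetical, not taken from the project.

```python
from collections import Counter

# Hypothetical logs: each entry is the author group of one borrowed
# or one recommended book (labels and counts are invented).
borrowed = ["group_a"] * 550 + ["group_b"] * 300 + ["group_c"] * 150
recommended = ["group_a"] * 700 + ["group_b"] * 250 + ["group_c"] * 50

def shares(events):
    """Relative frequency of each author group in a list of events."""
    counts = Counter(events)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

borrow_share = shares(borrowed)
rec_share = shares(recommended)

# Exposure ratio: how strongly the recommender over- or under-exposes a
# group relative to what users actually borrow (1.0 = proportional).
for group, share in borrow_share.items():
    ratio = rec_share.get(group, 0.0) / share
    print(f"{group}: borrow={share:.2f} "
          f"recommended={rec_share.get(group, 0.0):.2f} exposure={ratio:.2f}")
```

A ratio well below 1.0 for a particular author group would be exactly the kind of signal the dashboard is meant to surface for librarians to act on.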
- The Responsible Design and Use of AI-Driven Period and Fertility Tracking Technologies ("Femtech")
Lead applicant: Stichting VU, Vrije Universiteit
Partners: Yoni.care, 28X and Feminist Generative AI Lab (initiative of TU Delft and Erasmus University)
Grant: €125,000
Brief Description: The project "The Responsible Design and Use of AI-Driven Period and Fertility Tracking Technologies" focuses on developing responsible AI applications in the FemTech sector, specifically for menstruation and fertility tracking apps. With the FemTech market worth $55 billion worldwide and over 50 million users, ethical guidelines for these technologies are urgently needed. The project examines the risks of existing AI solutions, such as unsubstantiated medical advice and privacy issues.
The researchers are developing practical tools, such as decision aids for users and healthcare professionals, guidelines for responsible AI development, and educational modules. Results will be disseminated through academic publications, workshops and networking events, with attention to both technical aspects (such as algorithmic bias) and user-focused elements (such as transparency).
- Performance review for AI (FG-AI)
Lead applicant: University of Utrecht
Partners: National Digital Infrastructure Inspectorate, the Netherlands Food and Consumer Product Safety Authority and Berenschot Groep BV
Grant: €124,235
Brief Description: The Performance Review for AI helps organisations monitor AI systems. With a web application, reporting and research, it contributes to transparency, responsible innovation and a strong AI ecosystem. The project further develops a prototype of the Performance Review for AI: the consortium is piloting the review with several organisations, having two regulators evaluate the process, and developing the current Excel file into a web app. Results will be shared with other regulators through scientific articles and user workshops, and the consortium aims to validate the Performance Review for AI scientifically.
The project will explore this through a year-long pilot with 8-12 societal partners, each monitoring at least one algorithm. After the pilot, focus groups will be organised with the pilot partners, and the data generated will also be evaluated with regulators. The results of the project will be shared at both academic and societal levels.
- Explaining Fairness scores of AI-based assurance risk and analytics Responsibly (eFAIR)
Lead applicant: Utrecht University of Applied Sciences
Partners: Jheronimus Academy of Data Science (JADS) at Eindhoven University of Technology, MavenBlue and the Association of Insurers
Grant: €125,000
Brief Description: The eFAIR project explores how Explainable AI (XAI) can contribute to making fairness measures and fairness principles transparent and understandable to different user groups. The project is developing a framework that presents fairness dimensions dynamically, taking into account the knowledge and needs of specific user groups. The framework will be interactive and modular, so that it can be adapted to different users, from technical experts to policy-makers and end users.
The project aims to translate fairness dimensions into actionable insights for policymakers, regulators and end users, with car insurance as the case study. In the Leidsche Rijn district, car insurance is €20 per month cheaper than in the Kanaleneiland district. Neighbourhoods with more burglaries carry higher premiums, but those same neighbourhoods are often home to more people with a non-Western background. To what extent is this still premium differentiation, and at what point does it become discrimination?
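The sketch below illustrates the kind of group comparison such a fairness analysis starts from. All premium figures are invented for illustration; the project itself does not prescribe this particular metric.

```python
# Hypothetical monthly car-insurance premiums (in euros) quoted to
# drivers in two neighbourhoods; figures are invented for illustration.
premiums = {
    "Leidsche Rijn": [62, 58, 65, 60, 63],
    "Kanaleneiland": [82, 78, 85, 80, 83],
}

def mean(values):
    return sum(values) / len(values)

averages = {area: mean(quotes) for area, quotes in premiums.items()}
gap = averages["Kanaleneiland"] - averages["Leidsche Rijn"]

for area, avg in averages.items():
    print(f"{area}: average premium €{avg:.2f}/month")
print(f"gap between neighbourhoods: €{gap:.2f}/month")

# A fairness framework would go further: does the gap disappear once a
# legitimate risk factor (e.g. burglary rate) is controlled for, or does
# it persist, which would point towards indirect discrimination?
```

Presenting such a decomposition differently for actuaries, policymakers and policyholders is precisely the interactivity the eFAIR framework aims for.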
The results of the project will be shared through scientific publications on needs and best practices, publications in professional journals, and workshops and newsletters via the Association of Insurers. The eFAIR framework itself will be shared through open-access scientific publications, an online open-source demonstration tool, and workshops and training sessions offered through the Association of Insurers' network to support its implementation.
- Fair AI Attribution (FAIA)
Lead applicant: Leiden University
Partners: GO FAIR Foundation and Liccium BV
Grant: €86,756
Brief Description: In FAIA, Leiden University, the GO FAIR Foundation and Liccium are developing an open framework for AI attribution, to ensure transparency about AI use in digital content. With the rise of generative AI, it is becoming increasingly difficult to distinguish human-created content from machine-produced or machine-edited content. This lack of transparency has serious implications, not only for public trust and the equitable reuse of creative content, but also for scientific integrity, the credibility of digital media and compliance with laws and regulations.
This project aims to create a standard vocabulary for declaring that digital content was created with the help of AI. The first prototype will focus on academic papers, with the aim of subsequently adapting it to other contexts. The project will design and implement a structured, interoperable and verifiable framework for Fair AI Attribution (FAIA). The outcomes will be shared and implemented open source through the Liccium platform.
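As a rough impression of what a machine-readable attribution statement might look like, the sketch below builds one as a simple record. Every field name is hypothetical; the project's actual vocabulary has yet to be defined.

```python
import json

# Hypothetical AI-attribution record for an academic paper. Field names
# are illustrative placeholders, not the project's actual vocabulary.
attribution = {
    "content_id": "doi:10.1234/example-paper",      # identifier of the work
    "human_authors": ["A. Author", "B. Author"],
    "ai_contributions": [
        {
            "tool": "an LLM writing assistant",      # placeholder tool name
            "role": "language editing",              # what the AI did
            "sections": ["abstract", "related work"],
            "human_review": True,                    # output checked by a human
        }
    ],
    "declared_by": "A. Author",
    "declared_at": "2025-06-01",
}

# Serialised, the record can travel with the content itself, so readers
# and tools downstream can verify how AI was involved in its creation.
print(json.dumps(attribution, indent=2))
```

An interoperable vocabulary of this kind is what would let platforms such as Liccium verify and display attribution claims consistently.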
- Responsible AI for Clinical Decision Support (RACliDeS)
Lead applicant: Stichting Radboud Universiteit
Partners: University of Utrecht, Hanze University of Applied Sciences, ConnectedCare Services B.V. and ICTRecht Amsterdam BV
Grant: €124,974
Brief Description: This project investigates what it would take, from a technical, legal and medical-ethical point of view, to ensure that doctors and patients can use AI technology reliably and responsibly without losing their grip on decision-making. As a use case, the project draws on practical experience with the ENDORISK system for uterine cancer. Uterine cancer usually spreads first to the lymph nodes and from there to the rest of the body. Using a patient's examination results, ENDORISK can calculate the probability that the cancer will spread to the lymph nodes, so that doctor and patient can discuss the pros and cons of surgically removing them.
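To make the idea of such a risk calculation concrete, the sketch below shows a toy model of the same shape: pre-operative findings combined into a single probability. The predictors and coefficients are invented for illustration and bear no relation to the real ENDORISK model.

```python
import math

def lymph_node_risk(ca125_elevated: bool, tumour_grade: int,
                    imaging_suspicious: bool) -> float:
    """Toy logistic model combining pre-operative findings into a
    probability of lymph node involvement. All coefficients invented."""
    score = -2.5                       # baseline log-odds (invented)
    score += 1.2 if ca125_elevated else 0.0
    score += 0.8 * (tumour_grade - 1)  # tumour grades 1-3
    score += 1.5 if imaging_suspicious else 0.0
    return 1.0 / (1.0 + math.exp(-score))

# Example: grade 2 tumour, elevated CA-125, unsuspicious imaging.
risk = lymph_node_risk(True, 2, False)
print(f"estimated risk of lymph node involvement: {risk:.0%}")
```

The project's questions start where such a number ends: how the model must be validated, certified and explained before doctor and patient can responsibly act on it.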
This project is substantively and organisationally embedded in the ELSA lab Legal, Regulatory and Policy Aspects of Clinical Decision Support Systems, which starts in September 2025 with grant funding from AiNed through the National Growth Fund. The project will deliver a white paper with guidelines for certifiable AI in healthcare, as well as a practical roadmap for AI systems that comply with legal frameworks (such as the AI Act and the GDPR), medical-ethical standards (such as informed choice and bias reduction) and technical requirements (such as explainability and traceability). In the area of interaction design, it will produce tools that help doctors convey AI advice in an understandable way, as well as methods for patient empowerment through visual explanations of risks and treatment alternatives. Finally, knowledge will be disseminated through workshops for healthcare professionals, developers and regulators, and through open-access materials on responsible AI implementation in healthcare.