SAFEGUARD

Full name

SAFEGUARD – Auditing for Responsible AI Software Systems

Research Lines

  • Explainability
  • Reliability
  • Resilience

Sustainable Development Goals

  • SDG 3: Good Health and Well-being
  • SDG 5: Gender Equality

The research of the SAFEGUARD Lab is focused on developing and validating new auditing theories, tools, and methodologies to ensure that AI-enabled enterprise systems and applications have a high degree of integrity. SAFEGUARD Lab is a collaboration between Deloitte and the Jheronimus Academy of Data Science (JADS).

We expect recruitment for this lab to open in early 2023. Please check back for updated information about how to apply, and to register for an online information session in February. If you’d like to join our mailing list, please fill out the form and we will keep you informed of the latest developments.

The next generation of enterprise applications is quickly becoming AI-enabled, providing novel functionality with unprecedented levels of automation and intelligence. As we recover, reopen, and rebuild, it is time to rethink the importance of trust. It has never been more tested, or more valued, in leaders and in each other. Trust is the basis for connection, and it is all-encompassing: physical, emotional, digital, financial, and ethical. A nice-to-have has become a must-have; a principle has become a catalyst; a value has become invaluable.

Trust distinguishes and elevates both society and business, so it should be at the forefront of AI planning, strategy, and purpose. We therefore need new approaches to render AI-enabled enterprise systems and applications trustworthy, meaning they should be: (1) fair, (2) explainable and transparent, (3) responsible and auditable, (4) robust and reliable, (5) respectful of privacy, and (6) safe and secure. SAFEGUARD aims to realize systems that adhere to these requirements.

“Explore, develop, and validate novel auditing theories, tools, and methodologies that can monitor and audit whether AI applications adhere to the requirements of fairness (no bias), explainability and transparency (easy to explain), robustness and reliability (delivering the same results under various execution environments), respect for privacy (compliance with the GDPR), and safety and security (no vulnerabilities).”

The research at the SAFEGUARD Lab focuses on several directions:

  • developing a theoretical framework and a prototypical tool for assessing bias and application-smell metrics;
  • exploring a socio-technical approach to explainability and transparency;
  • creating a toolsuite and methodology for ensuring responsibility and accountability through internal audits;
  • developing prototypes and a methodology for ensuring robustness and reliability;
  • building an experimental toolchain with machine-learning-enabled and continuous testing techniques for testing AI software components as part of a DevOps pipeline.
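To make the bias-auditing direction concrete, the following is a minimal sketch of one common group-fairness metric, the demographic parity difference: the gap in positive-prediction rates between groups. The function name, data, and metric choice are illustrative assumptions for this page, not part of the SAFEGUARD toolsuite.

```python
# Illustrative bias check: demographic parity difference.
# A large gap in positive-prediction rates between groups is one
# signal an auditor might flag when assessing fairness.

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction rate per group.

    predictions: list of 0/1 model outputs
    groups: list of group labels, aligned with predictions
    """
    rates = {}
    for g in set(groups):
        member_preds = [p for p, gr in zip(predictions, groups) if gr == g]
        rates[g] = sum(member_preds) / len(member_preds)
    return max(rates.values()) - min(rates.values())

# Toy example: group "a" receives positive predictions 3/4 of the time,
# group "b" only 1/4 of the time, so the difference is 0.5.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, grps))  # 0.5
```

A value near 0 indicates similar treatment of both groups; an auditing tool would combine such metrics with context (base rates, legal norms) before drawing conclusions.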

Sustainable Development Goals

SAFEGUARD Lab is part of the ROBUST program on Trustworthy AI-based Systems for Sustainable Growth, which is financed under the NWO LTP funding scheme. To ensure that its research has tangible social value, the lab will focus on two Sustainable Development Goals (SDGs): 3 and 5.

SDG 3: Ensure healthy lives and promote well-being for all at all ages

Target 3.7: By 2030, ensure universal access to sexual and reproductive health-care services, including family planning, information and education, and the integration of reproductive health into national strategies and programs
Target 3.d: Strengthen the capacity of all countries, in particular developing countries, for early warning, risk reduction, and management of national and global health risks

SDG 5: Achieve gender equality and empower all women and girls

Target 5.1: End all forms of discrimination against all women and girls everywhere

Staff

Willem-Jan van den Heuvel

Scientific Director

Eric Postma

Scientific Director

Partners

Deloitte will provide requirements for developing the framework, resources for a qualitative study, and the data to test the future product. JADS Tilburg University will provide knowledge and expertise concerning financial/economic, continuous, and interactive auditing, as well as psychological and legal aspects. JADS Eindhoven University of Technology will bring its expertise on AI/ML in general, and on automatic (white-box, black-box, and regression) DevOps-style testing and test-case generation in particular.