TAIM

Full name

TAIM – Trustworthy AI for Media Lab

Research Lines

  • Accuracy
  • Explainability
  • Reliability

The TAIM lab brings together two of the strongest groups on personalization and recommender systems in the Netherlands, the University of Amsterdam and the University of Maastricht, and a leading media organization, RTL, to develop trustworthy and personalized media.

We expect recruitment for this lab to open in early 2023. Please check back frequently for updated information about how to apply, and to register for an online information session in February. If you’d like to join our mailing list, please fill out the form, and we will be happy to keep you informed of all the latest developments.

Operating the largest commercial networks in the Netherlands, RTL plays an important role in society: over 85% of Dutch people engage with RTL and its content on a weekly basis, for around 45 minutes a day on average. RTL uses AI across the value chain, from production to distribution, from automatically identifying interesting promotional material to providing a personalized content experience. At all these stages, the AI must be trustworthy: inclusive and steering away from unwanted bias. In particular, the TAIM Lab focuses on developing AI that is reliable: RTL aims to represent everyone in the Netherlands, so its AI methods should not be biased toward or against any group. We therefore adopt an intersectional approach to bias with regard to gender, age, background, and other characteristics, and optimize AI for diversity and inclusion. The research in this lab entails both ensuring a diversity of voices (plurality) expressed in the media and fair exposure to content for different groups of users. It is also essential to understand why some traditionally marginalized social groups distrust AI, and how trust can be built through transparent systems that incorporate the perspectives and needs of those groups.

Throughout the TAIM Lab, mixed methods are used: offline evaluation on open and internal data, as well as qualitative analysis (e.g., structured interviews and panels). This collaboration lets us jointly study fundamental questions about the long-term effects of AI in relation to fairness and inclusion. Access to recommendation data and platforms allows us to study and measure fairness across different pipelines in a longitudinal manner, which is rarely possible in academic projects. Indeed, the findings of most academic studies in recommender systems are difficult to translate to practice precisely because of limited longitudinal data, forcing them to rely on simulations and the assumptions made therein. At the same time, this setup helps us overcome a frequent shortcoming in industry, where projects focus on short-term engagement metrics, potentially to the detriment of long-term metrics such as retention and conversion of new users.

Methods, algorithms, and artifact outputs of the lab will be demonstrated via pilots and proofs of concept (POCs), together with RTL. We will demonstrate the reliability and accuracy of the personalization solutions developed by the lab, and alongside this we will study the biases these solutions might introduce or remove. In the long term, this can lead to improved systems in production, not only at RTL but also informing best practices in the Dutch media industry.

Sustainable Development Goals

The TAIM Lab is part of the ROBUST program on Trustworthy AI-based Systems for Sustainable Growth, which is financed under the NWO LTP funding scheme. To develop trustworthy and personalized media, the TAIM Lab will guide and evaluate its research using Sustainable Development Goals (SDGs) 10 and 16.

Inclusion is supported by developing technologies that ensure public access to information and remove barriers, such as subtitling for citizens who are deaf or hard of hearing. Additionally, we want to make sure that minority voices and groups are sufficiently visible, which requires developing metrics for assessing such fairness and measuring the cases where exclusion may occur (e.g., a fair distribution of gender or ethnicity in automatically generated promotional materials). In the study of the sociotechnical systems at RTL, stakeholder interviews, in particular regarding the development of metadata and content, will be a cross-cutting concern for the lab.
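To make the idea of a fairness metric concrete, a common family of measures compares the exposure each group of items receives in a ranked list against a target share. The sketch below is purely illustrative (the function names, the logarithmic position discount, and the notion of a "target share" are our assumptions, not the lab's actual methodology):

```python
import math
from collections import defaultdict

def group_exposure(ranking, item_groups):
    """Share of position-weighted exposure each group receives in a ranking.

    Uses a standard logarithmic position discount (as in DCG):
    the item at rank i contributes 1 / log2(i + 2), so higher-ranked
    items count for more exposure.
    """
    exposure = defaultdict(float)
    for i, item in enumerate(ranking):
        exposure[item_groups[item]] += 1.0 / math.log2(i + 2)
    total = sum(exposure.values())
    return {group: e / total for group, e in exposure.items()}

def exposure_disparity(ranking, item_groups, target_shares):
    """Largest absolute gap between a group's exposure share and its target.

    A disparity of 0 means every group gets exactly its target share of
    exposure; larger values indicate under- or over-exposure of some group.
    """
    shares = group_exposure(ranking, item_groups)
    return max(abs(shares.get(g, 0.0) - target_shares[g])
               for g in target_shares)
```

For example, a ranking containing only items from one group, measured against a 50/50 target, yields a disparity of 0.5, flagging that the other group is entirely invisible. Longitudinal studies of the kind described above could track such a score over time and across pipeline stages.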

SDG 10: Reduce inequality within and among countries

Target 10.2: By 2030, empower and promote the social, economic and political inclusion of all, irrespective of age, sex, disability, race, ethnicity, origin, religion or economic or other status

SDG 16: Promote peaceful and inclusive societies for sustainable development, provide access to justice for all, and build effective, accountable, and inclusive institutions at all levels

Target 16.7: Ensure responsive, inclusive, participatory, and representative decision-making at all levels.
Target 16.10: Ensure public access to information and protect fundamental freedoms, in accordance with national legislation and international agreements.

Staff

Nava Tintarev

Scientific Director

Maarten de Rijke

Scientific Director

Partners

The stakeholders at RTL reach far beyond data science, including but not limited to: content operations teams responsible for subtitling, product owners and developers of the RTL.nl platform, promotional content creators, advertisement sellers at AdAlliance, product owners for the ad-based Videoland experience, and many cross-cutting stakeholders on the topic of diversity and inclusion, led by the communications team. Researchers from UM will be involved in lab management and the supervision of PhD students, with contributions from experienced faculty members; Maastricht contributes expertise in fairness in recommender systems, user-centered studies, and social-science expertise in gender, diversity, and inclusion. Researchers from UvA will likewise be involved in lab management and the supervision of PhD students, with contributions from experienced faculty members; Amsterdam contributes expertise in video analysis, recommender systems, language technologies, and search.