Operating the largest commercial networks in the Netherlands, RTL plays an important role in society: it reaches over 85% of Dutch people on a weekly basis, who spend on average around 45 minutes a day with RTL and its content. RTL uses AI across the value chain, from production to distribution, from automatically identifying interesting promotional material to providing a personalized content experience. At all these stages, we want to be able to trust our AI to be inclusive and to steer away from unwanted bias. In particular, the TAIM Lab focuses on developing AI that is reliable: since RTL represents everyone in the Netherlands, its AI methods should not be biased toward or against any group. We therefore adopt an intersectional approach to bias with regard to gender, age, background, and other attributes, and optimize AI for diversity and inclusion. The research in this lab entails both ensuring a diversity of voices (plurality) expressed in the media and ensuring fair exposure to content for different groups of users. It is also essential to understand why some traditionally marginalized social groups distrust AI, and what can be done to develop trust, in particular through transparent systems that incorporate the perspectives and needs of those groups.
The TAIM Lab uses mixed methods: offline evaluation on open and internal data, as well as qualitative analysis (e.g., structured interviews and panels). This collaboration enables us to jointly study fundamental questions about the long-term effects of AI in relation to fairness and inclusion. Access to recommendation data and platforms allows us to study and measure fairness across different pipelines in a longitudinal manner, which is rarely possible in academic projects. Indeed, the findings of most academic studies of recommender systems are difficult to translate into practice precisely because of this lack of longitudinal data, which often forces them to rely on simulations and the assumptions made therein. At the same time, this setting lets us overcome a frequent shortcoming in industry, where projects focus on short-term engagement metrics, potentially to the detriment of long-term metrics such as retention and the conversion of new users.
Methods, algorithms, and artifact outputs from the lab will be demonstrated via pilots and proofs of concept (POCs), together with RTL. We will demonstrate the reliability and accuracy of the personalization solutions developed by the lab and, alongside this, study the biases these solutions might introduce or remove. In the long term, this can lead to improved systems in production, not only at RTL but also by informing best practices across the media industry in the Netherlands.