FEPlab - Free Energy Principle Lab
FEPlab (Free Energy Principle Laboratory) is a collaboration between Eindhoven University of Technology (TU/e) and GN Hearing. The mission of the lab is to improve the participation of hearing-impaired people in formal and informal social settings. The lab will focus its research on transferring a leading physics- and neuroscience-based theory of computation in the brain, the Free Energy Principle (FEP), to practical use in human-centered agents such as hearing devices and VR technology.
GN Hearing, a globally leading hearing aid manufacturer with a strong research team of about 20 people in Eindhoven, and TU/e have already been collaborating for many years in BIASlab, a research team in the Electrical Engineering department at TU/e. This collaboration has produced theoretical foundations for synthetic FEP-based AI agents. FEPlab was set up in 2022 and is expected to run until mid-2027. During this time, the partners will continue to develop these FEP agents into a technology that is ready for deployment in the professional hearing device industry.
FEPlab focuses on two Sustainable Development Goals (SDGs): Goal 3, Good Health and Well-being, and Goal 8, Decent Work and Economic Growth. Untreated hearing loss in the elderly increases the risk of developing dementia and Alzheimer’s disease [1], as well as emotional and physical problems [2]. Therefore, this research ties in directly with SDG target 3.4: reducing premature mortality from non-communicable diseases. Moreover, hearing loss negatively impacts work participation [3]. Hence, this research also ties in with SDG target 8.2: achieving higher levels of economic productivity through technological upgrading and innovation.
The lab comprises experts from different fields of expertise such as Audiology, Autonomous Agents & Robotics, Decision Making, and Machine Learning to tackle the complex multidisciplinary challenges at hand. Socially aware AI and explainable AI are especially important in the lab’s research since the technology needs to be aware of the social context in which it is operating and be able to provide justification for its decisions and actions in a manner that is understandable by humans to ensure its safe use.
[1] Ralli, Massimo, et al. “Hearing loss and Alzheimer’s disease: a review.” The International Tinnitus Journal 23.2 (2019): 79-85.
[2] Ciorba, Andrea, et al. “The impact of hearing loss on the quality of life of elderly adults.” Clinical Interventions in Aging 7 (2012): 159.
[3] Svinndal, Elisabeth Vigrestad, et al. “Hearing loss and work participation: a cross-sectional study in Norway.” International Journal of Audiology 57.9 (2018): 646-656.
… is part of the ROBUST program on Trustworthy AI-based Systems for Sustainable Growth which is financed under the NWO LTP funding scheme. To accelerate the energy transition and ensure tangible social value, the lab will focus on … specific targets of … Sustainable Development Goals (SDGs) related to …
Target 3.4: By 2030, reduce by one third premature mortality from non-communicable diseases through prevention and treatment and promote mental health and well-being
Target 8.2: Achieve higher levels of economic productivity through diversification, technological upgrading and innovation, including through a focus on high-value added and labour-intensive sectors
The main scientific challenge is how to realize robust real-time Bayesian inference in probabilistic models under situated conditions with limited computational resources. In general, Bayesian inference is intractable for moderately sized (and larger) real-world applications. In this work package (WP), we will contribute to the development of an automated inference engine based on a reactive programming paradigm. Some promising initial results in this area have been reported by previous PhD students in our lab [4], but essential breakthroughs are still needed. We will focus on developing methods and code to maintain inference performance when model components fail at random (intrinsic trust: reliability), and on developing free energy (FE) heatmaps for under-the-hood visualization of the performance contributions of the system’s components (extrinsic trust: explainability).
[4] Cox, Marco, van de Laar, Thijs, and de Vries, Bert. “A factor graph approach to automated design of Bayesian signal processing algorithms.” International Journal of Approximate Reasoning 104 (2019): 185-204.
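The FE-heatmap idea can be illustrated with a toy example. The sketch below is our own illustration, not FEPlab code: it decomposes the total free energy of a small Gaussian model into per-node contributions, so that a poorly fitting component "lights up" in the resulting map.

```python
import math

def node_free_energies(m, v, y):
    """Per-node contributions to the free energy of the toy model
    mu ~ N(0, 1), y_i ~ N(mu, 1), with posterior approximation
    q(mu) = N(m, v).  The prior node contributes KL(q || prior);
    each observation node contributes its negative expected
    log-likelihood under q.  The total FE is the sum of all terms."""
    contributions = {"prior": 0.5 * (v + m * m - 1.0 - math.log(v))}
    for i, yi in enumerate(y):
        contributions[f"obs[{i}]"] = (0.5 * math.log(2 * math.pi)
                                      + 0.5 * ((yi - m) ** 2 + v))
    return contributions

# A data set with one outlier: its node dominates the FE "heatmap"
y = [1.0, 0.9, 1.1, 6.0]
n = len(y)
m, v = sum(y) / (n + 1), 1.0 / (n + 1)  # exact Gaussian posterior here
fe = node_free_energies(m, v, y)
worst = max(fe, key=fe.get)             # the component to flag visually
```

Rendering `fe` as a color map over the factor graph would then point a user (or engineer) to the component that contributes most to the FE, which is the kind of under-the-hood explainability the heatmaps aim at.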
The main scientific challenge for this research topic is to realize robust and scalable FEP agents. This WP builds on research topic 1, where robust and resilient inference by reactive message passing in generative models is developed. Here we further develop the methods and tools to facilitate real-time inference in practical FEP agents. An important issue is to endow practical models with both temporal depth (to support predicting the future) and hierarchical depth (to support abstract goal settings such as “the end user must be happy”). Automated inference in these hierarchical dynamical systems leads to a series of computational scaling issues that will be addressed in this research. Very little is currently known about these computational issues in a reactive programming environment. An important thread that we wish to explore is stopping message passing early once the FE appears to have converged. In a reactive programming environment, there is no preset message-passing schedule, and since FE minimization is the only objective, there is no reason to keep passing messages after FE convergence. We conjecture that this mechanism may be a key factor in maintaining real-time inference in large systems.
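The early-stopping idea can be sketched in a few lines. The toy example below is our own illustration (not the lab's reactive message-passing engine): it minimizes the free energy of a simple Gaussian model by gradient descent on the variational parameters and stops as soon as the FE stops decreasing.

```python
import math

def free_energy(m, v, y):
    """Variational free energy F(q) for q(mu) = N(m, v) on the model
    mu ~ N(0, 1), y_i ~ N(mu, 1).  F = complexity - accuracy."""
    complexity = 0.5 * (v + m * m - 1.0 - math.log(v))       # KL(q || prior)
    accuracy = sum(-0.5 * math.log(2 * math.pi)
                   - 0.5 * ((yi - m) ** 2 + v) for yi in y)  # E_q[log p(y|mu)]
    return complexity - accuracy

def infer(y, lr=0.01, tol=1e-9, max_iters=10_000):
    """FE minimization with early stopping on FE convergence."""
    m, v = 0.0, 1.0                       # initial variational parameters
    f_prev = free_energy(m, v, y)
    for it in range(1, max_iters + 1):
        # Gradients of F with respect to (m, v), derived by hand
        grad_m = m + sum(m - yi for yi in y)
        grad_v = 0.5 * (1.0 - 1.0 / v) + 0.5 * len(y)
        m -= lr * grad_m
        v = max(v - lr * grad_v, 1e-6)    # keep the variance positive
        f = free_energy(m, v, y)
        if abs(f_prev - f) < tol:         # FE converged: stop updating
            return m, v, it
        f_prev = f
    return m, v, max_iters

m, v, iters = infer([1.2, 0.8, 1.0, 1.1])
```

In a reactive message-passing system the same criterion would gate the emission of further messages rather than loop iterations, but the principle is identical: once the FE has converged, additional computation buys nothing.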
An FEP agent mainly comprises FE minimization in a Probabilistic Generative Model (PGM) for its sensory signals. In this project, the agent’s sensory signals are audio signals from the hearing device and performance appraisals by the end user. The main challenge in this PhD project is to develop a PGM for the audio signals that are processed by the hearing device. In general, acoustic signals in the real world are mixtures of source signals (such as a speech signal from a conversation partner, plus background chatter and traffic noise). We plan to develop a PGM for acoustic mixtures that can be applied to hearing devices. Crucially, the model must be compact enough to support real-time inference of the constituent sources, yet accurate enough to be accepted by hearing device users. These constraints lead to an engineering design trade-off (accuracy vs. model complexity) that is the essential research issue in this PhD project.
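As an illustration (our own sketch, not the lab's actual model), a minimal PGM for an acoustic mixture might posit $K$ latent source signals $s_{k,t}$, each with its own dynamics, that sum at the microphone:

```latex
\begin{align*}
  s_{k,t} &\sim p(s_{k,t} \mid s_{k,t-1}, \theta_k), \quad k = 1, \dots, K
    && \text{(per-source dynamics: speech, chatter, traffic, \dots)} \\
  x_t &= \sum_{k=1}^{K} s_{k,t} + \varepsilon_t, \quad
    \varepsilon_t \sim \mathcal{N}(0, \sigma^2)
    && \text{(observed microphone signal)}
\end{align*}
```

Inferring the posterior over $s_{1:K,1:T}$ given $x_{1:T}$ then amounts to source separation, and the richness of the source models $\theta_k$ is exactly where the accuracy-versus-complexity trade-off plays out.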
Thirty-plus years of generative modeling research for real-world acoustic signals have not yet produced a compact yet accurate PGM for acoustic mixtures. In this PhD project, we take a novel approach by framing audio processing as a reactive message passing (RMP) process. In research direction 1, we develop an RMP toolbox that maintains inference performance even if model components fail randomly (the reliability issue). In this view, in-situ adaptation of the model structure should likewise preserve the performance of the inference process. In this PhD project, we take advantage of this property and aim to explore in-situ model structure adaptation. Since the FE can be decomposed as “model complexity minus accuracy”, we hypothesize that minimizing FE by model structure adaptation should lead to a highly accurate yet compact acoustic model.
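The “complexity minus accuracy” decomposition referred to here is the standard rewrite of the variational free energy. In the usual notation (our formulation, consistent with the FEP literature), with observations $y$, latent variables $z$, generative model $p$, and variational posterior $q$:

```latex
\begin{align*}
  F[q] &= \mathbb{E}_{q(z)}\!\left[ \ln q(z) - \ln p(y, z) \right] \\
       &= \underbrace{\mathrm{KL}\!\left[\, q(z) \,\|\, p(z) \,\right]}_{\text{complexity}}
        \;-\; \underbrace{\mathbb{E}_{q(z)}\!\left[ \ln p(y \mid z) \right]}_{\text{accuracy}}
\end{align*}
```

Minimizing $F$ while adapting the model structure therefore penalizes any structure that adds complexity without a compensating gain in accuracy, which is the basis for the compact-yet-accurate hypothesis above.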
An FEP agent mainly comprises FE minimization in a Probabilistic Generative Model (PGM) for its sensory signals. In this project, the agent’s sensory signals are both audio signals from the hearing device and performance appraisals by the end user. As discussed in research project 5, the user interface must be covert, very lightweight, and yet as informative as possible. In this research project, the challenge is to translate these ideas into a formal specification of a probabilistic user interaction model. A particular challenge will be to accommodate an adaptive user interaction protocol, where the protocol itself is optimized in situ under the pressure of FE minimization. Our goal is to learn from user interactions, via a parameterized model of those interactions, which interaction protocol is most preferred (least cognitive burden, yet informative). That is, in addition to adapting the parameters of the interaction model based on the modeling of covert and overt determinants of acceptance and trust (identified and tested in research project 5), this research project will create a model that also incorporates parameters characterizing the interaction itself (e.g., interaction modalities and resolution, interaction context, etc.). For example, when the user is busy with another task (e.g., listening to an important discussion), the model can indicate that, for optimal acceptance, the user should be able to use a single headshake (indicating only two levels, good or bad, of evaluation of the hearing device settings). In another context (e.g., when the user is walking in the forest), the model can indicate that, for optimal acceptance, the user should be able to express the evaluation with more precision, for example with several headshakes (indicating more levels of evaluation of the hearing device settings).
Together with the PhD student for research project 5, you will closely collaborate on integrating the proposed determinants of acceptance (research project 5) with the parameterized model for user interactions (research project 4).
This research project will focus on the human-technology interaction component of hearing aid (HA) personalization. Firstly, the tuning process should be as unobtrusive as possible to the HA wearer, since it should not interfere with daily tasks such as ongoing conversations. Secondly, the tuning process should be covert in the sense that it should not be noticeable to conversation partners and other bystanders. Finally, there are privacy, security, and ethical issues relating to the in-situ recording of acoustic data; we wish to avoid storing recorded data for offline training. Building on earlier research on acceptance of technology, satisfaction, and usability, the main research challenge is to identify the (social) psychological, ethical, legal, and practical issues of in-situ hearing device tuning and translate these issues into an appropriate interaction protocol. To formulate this protocol, we need to understand why users choose personalized algorithms, and under which circumstances personalization might not be desirable. Different from earlier research on these topics, for understanding acceptance of a hearing device it is crucial to investigate determinants of more covert acceptance. That is, users preferably remain unaware of many characteristics of this technology. For example, users preferably remain only peripherally aware of many interface elements, and even when interacting with the hearing aid is necessary (e.g., to request personalization), that interaction remains as covert as possible (to the user and her/his surroundings).
Additionally, as described in research project 4, research projects 4 and 5 will closely collaborate on integrating the proposed determinants of acceptance (research project 5) with the parameterized model for user interactions (research project 4). This allows not only adapting the parameters of the interaction model based on the modeling of covert and overt determinants of acceptance and trust (identified and tested in research project 5), but also creating (in research project 4) a model that incorporates parameters characterizing the interaction itself (e.g., interaction modalities and resolution, interaction context, etc.). Thereby, this research opens new domains of research into acceptance of, satisfaction with, and usability of more covert technologies.
Bert de Vries is a Professor in the Signal Processing Systems group at Eindhoven University of Technology (TU/e). His research focuses on the development of intelligent autonomous agents that learn from in-situ interactions with their environment.
Jaap Ham is an Associate Professor in the Industrial Engineering & Innovation Sciences department at Eindhoven University of Technology (TU/e). His research focuses on (Ambient) Persuasive Technology: the basic and applied mechanisms of how technology can influence people.