ICAI Interview with Jesse Scholtes: Making an impact in the real world

As program manager of FAST LAB, Jesse Scholtes makes sure the collaboration between the researchers of TU Eindhoven and their five industry partners runs smoothly. Scholtes: ‘My main role is to manage expectations and create a win-win situation.’

FAST LAB (new Frontiers in Autonomous Systems Technology) has joined ICAI this week. The lab is in its fourth year of research and creates smart industrial mobile robots that can deal with sudden obstacles in environments like farms, airports and oil & gas sites. The researchers of Eindhoven University of Technology work together with the industry partners Rademaker, ExRobotics, Vanderlande, Lely and Diversey.

Jesse Scholtes

How is it to work with so many different partners in one lab?

Scholtes: ‘Academia and industry are different worlds. Our industry partners want to implement the technology into their products as soon as possible; they are short-term driven. The academic world wants to come up with the best idea and the best way of solving something. For me it’s important to manage expectations continuously and to be very transparent about what we do and how that will turn into a benefit for our partners.’

How do you make this work?

‘Two things are important. One is the realization by all parties that it’s a shared investment. If the companies were to develop this research on their own, it would cost them a lot more. The second thing we did from the start is make sure the companies involved are not each other’s competitors. That’s the biggest prerequisite for success. The companies share the same kind of R&D questions, but are active in different domains. If you take away the potential commercial risk, people open up, start to talk, share ideas and learn from each other.’

How do you translate that into concrete results?

‘In the first year the researchers spent a lot of time at the industry partners to understand what their challenges are. One of the core things we have adopted is the end-of-year demonstration. Researchers bring their new ideas, implement them in the systems of the industry partners and test them in a real-world environment. There we can see the results of what we’ve made, and we can also steer the project in a different direction if needed.’

What are you most proud of regarding FAST LAB?

‘That we have created a very open, friendly and constructive partnership between our researchers and the partners. They really came together as a team, working on the same topics and helping each other. That cannot be taken for granted.’

Where do you see FAST LAB in the next few years?

‘FAST LAB will continue for one more year, but we are working on a successor with the existing partners and probably with some new partners. There is a lot of interest. I hope we can create an ecosystem of companies and university researchers working together and creating this type of win-win situation. As long as we keep doing that, we can continue this cycle and provide much-needed continuity in the development of novel ideas.’

At the ICAI Lunch Meetup on January 21, 2021, Jesse Scholtes will present FAST LAB. More info and sign-up here.

ICAI interview with Georgios Vlassopoulos: Strengthening the friendship between AI and humans

Georgios Vlassopoulos is a PhD student at the KPN Responsible AI Lab in Den Bosch. He works on explainability methods for AI models. Vlassopoulos: ‘If we want artificial intelligence to make predictions for us, we should be able to explain why it makes a certain decision. Without this transparency, AI could be dangerous.’

Georgios Vlassopoulos

What is your research about?

‘My algorithm tries to explain to the user why the AI system has made a certain decision. I’ll give an example. KPN uses natural language processing to find customer complaints in texts. How do you explain to the user why the system has categorized certain texts as complaints? What is the computer’s decision based on? In this case, my algorithm tries to learn the semantics that people use in complaints and uses those in the explanation of the model.’

‘You can expand this to many domains. Say that a doctor uses AI to detect cancer. The doctor only sees the prediction of the model, that is, whether the patient has cancer or not. The patient will be very eager to know why the computer has made that decision. With my algorithm, I would teach the system attributes, like the shape of a tumour, and build an understandable explanation based on these attributes.’

How do you approach your research?

‘Let’s stick to the KPN example. For a large number of texts the classifier would say: I’m not sure whether this is a complaint or not. I focus on the decision boundary, which is the set of data points for which the classifier is completely uncertain. All the classification information is encoded in this decision boundary, which is very complex. My approach is to train a simple model that mimics the behaviour of the complex classifier in only a certain part of this decision boundary. That simple model can then be communicated to the user.’
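The general idea he sketches here, fitting a simple, transparent model that mimics a complex classifier only around its decision boundary, can be illustrated in a few lines of code. The snippet below is only a generic local-surrogate sketch under assumed tooling (scikit-learn, synthetic data, a random-forest "black box"), not Vlassopoulos' own algorithm.

```python
# A minimal, illustrative local-surrogate sketch (NOT Vlassopoulos' actual
# algorithm): fit a simple, transparent model that mimics a black-box
# classifier only where the black box is uncertain, i.e. near its decision
# boundary. Synthetic data stands in for KPN's complaint texts.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for feature vectors derived from customer texts.
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_pool, y_train, _ = train_test_split(X, y, test_size=0.5, random_state=0)

# The complex "black box" whose decisions we want to explain.
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Select the points where the black box is least sure of itself:
# these lie closest to its decision boundary.
proba = black_box.predict_proba(X_pool)[:, 1]
boundary_idx = np.argsort(np.abs(proba - 0.5))[:300]
X_boundary = X_pool[boundary_idx]

# Train a simple, interpretable model to mimic the black box on that region only.
surrogate = LogisticRegression(max_iter=1000)
surrogate.fit(X_boundary, black_box.predict(X_boundary))

# The surrogate's largest coefficients summarise which features drive the
# black-box decision in this part of the boundary.
weights = surrogate.coef_[0]
for i in np.argsort(-np.abs(weights))[:5]:
    print(f"feature {i}: weight {weights[i]:+.3f}")
```

In practice the hard part is choosing input attributes that humans actually understand, which is the gap his research addresses, as the next answer makes clear.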

Why is your research different from other methods?

‘The explanations of currently popular explanation methods can be misleading. When you use these methods on high-dimensional data, such as images, they treat every pixel as an individual feature. My position is that you cannot build a proper explanation based on pixels. I introduced a different framework that scales well to high-dimensional data, and the explanations become more humanlike.’

Why is your research important?

‘In a data-driven world it is very important for AI to become friends with human beings. People should be able to understand why an AI system makes a certain decision. If a bank uses an AI system to classify customers on whether they qualify for a loan or not, it should be able to inform those customers why they were accepted or rejected.’

What are the main challenges you face doing this research?

‘It’s like you’re looking for aliens. There is no ground truth. The problem is that you don’t really have an accuracy measure. If we take the medical example, a doctor can say that an explanation from the system is close to his intuition. But how can you prove that this is actually correct? I need to design the experiments carefully and still everything can go wrong. Sometimes I have to repeat an experiment multiple times.’

What are you most proud of?

‘The fact that I have made something that works. And it has a good chance of being published at a top conference; the final answer will come in January 2021. But I’m already proud that high-impact scientists say that my work is good.’

At this month’s Lunch at ICAI Meetup on Transparency and Trust in AI, on December 17, Georgios Vlassopoulos will discuss his research. Sign up here.