
ICAI interview with Georgios Vlassopoulos: Strengthening the friendship between AI and humans
Georgios Vlassopoulos is a PhD student at the KPN Responsible AI Lab in Den Bosch, where he works on explainability systems for AI models. Vlassopoulos: ‘If we want artificial intelligence to make predictions, we should be able to explain why it makes a certain decision. Without this transparency, AI could be dangerous.’
What is your research about?
‘My algorithm tries to explain to the user why the AI system has made a certain decision. I’ll give an example. KPN uses natural language processing on texts to detect customer complaints. How do you explain to the user why the system has categorized certain texts the way it did? What is the computer’s decision based on? In this case, my algorithm tries to learn the semantics that people use in complaints and uses those in the explanation of the model.’
‘You can extend this to many domains. Say that a doctor uses AI to detect cancer. The doctor only sees the prediction of the model, so whether the patient has cancer or not. The patient would be very eager to know why the computer has made a certain decision. With my algorithm, I would teach the system attributes, like the shape of a tumour, and build an understandable explanation based on these attributes.’
How do you approach your research?
‘Let’s stick to the KPN example. For a large number of texts the classifier would say: I’m not sure whether this is a complaint or not. I focus on the decision boundary, which is the set of datapoints for which the classifier is completely uncertain. All the classification information is encoded in this decision boundary, which is very complex. My approach is to train a simple model that mimics the classifier’s behaviour in only a certain part of this complex boundary. And that simple model can be communicated to the user.’
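To give a rough feel for this idea, the sketch below shows a generic local-surrogate explanation on synthetic data: find an instance where a black-box classifier is most uncertain, sample a neighbourhood around it, and fit a simple model that mimics the black box only in that region. This is a minimal, hypothetical illustration, not Vlassopoulos’s actual algorithm; the dataset, models, neighbourhood size and other choices are assumptions made for the example.

```python
# Illustrative sketch of a local-surrogate explanation near the decision
# boundary. NOT Vlassopoulos's algorithm; all concrete choices are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# A "complex" classifier standing in for the black box (for example a
# complaint detector trained on text features).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Find the instance where the black box is most uncertain, i.e. the point
# closest to the decision boundary.
proba = black_box.predict_proba(X)[:, 1]
x0 = X[np.argmin(np.abs(proba - 0.5))]

# Sample a local neighbourhood around that instance and label it with the
# black box's own predictions.
neighbourhood = x0 + rng.normal(scale=0.5, size=(500, X.shape[1]))
local_labels = black_box.predict(neighbourhood)

# Train a simple surrogate that mimics the black box only in this region;
# its coefficients act as a feature-level explanation.
surrogate = LogisticRegression(max_iter=1000).fit(neighbourhood, local_labels)
for i, weight in enumerate(surrogate.coef_[0]):
    print(f"feature {i}: weight {weight:+.3f}")
```

The surrogate’s weights only describe the black box locally, which is exactly the point: a simple, communicable model stands in for one part of a very complex decision boundary.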
Why is your research different from other methods?
‘The explanations of currently popular explanation methods can be misleading. When you use these methods on high-dimensional data, e.g. images, they treat every pixel as an individual feature. My position is that you cannot build a proper explanation based on pixels. I introduced a different framework that scales well to high-dimensional data. And the explanations become more humanlike.’
Why is your research important?
‘In a data-driven world it is very important for AI to become friends with human beings. People should be able to understand why an AI system makes a certain decision. If a bank uses an AI system to classify customers on whether they are fit to receive a loan, then it should be able to tell those customers why they are accepted or rejected.’
What are the main challenges you face doing this research?
‘It’s like looking for aliens: there is no ground truth. The problem is that you don’t really have an accuracy measure. If we take the medical example, a doctor can say that an explanation from the system is close to his intuition. But how can you prove that it is actually correct? I need to design the experiments carefully, and even then everything can go wrong. Sometimes I have to repeat an experiment multiple times.’
What are you most proud of?
‘The fact that I have made something that works. And it has a good chance of being published at a top conference. The final answer will come in January 2021. But I’m already proud that high-impact scientists say that my work is good.’
In this month’s Lunch at ICAI Meetup on Transparency and Trust in AI, on December 17, Georgios Vlassopoulos will discuss his research. Sign up here.