eyeo is proud to host a Cologne AI and Machine Learning meetup. This meetup focuses on AI explainability, a topic we are very interested in ourselves, given our exploration of machine learning for ad blocking.
Explainable AI (XAI) is a long-standing problem in the field, but recent advances, mostly in deep learning, have made it far more prominent. After all, people want to know why a medical AI made one diagnosis rather than another, or why an autonomous vehicle took a specific action.
In the context of ad blocking, however, the question of explainable AI also quickly takes on social dimensions. If an AI, rather than a filter list community, is making decisions about what to block, we need to understand how that AI works so that we can use it responsibly.
We want to know why our models make the decisions they make, because those decisions need to be aligned with the user’s choice, which is key to everything we do. However, perhaps unsurprisingly, there are adversaries on the web who are extremely motivated to bypass the user’s choice, aiming to trick the AI and force their ads on users. From that perspective, we also need to make sure our models are robust against adversarial attacks.
At the meetup, we will talk about how we see explainability and adversarial robustness as two sides of the same coin. We will present our experiment Sentinel and share some examples from the trenches of adversarial robustness.