High-performing and explainable AI

We focus on two aspects when building and assessing machine learning models: performance and explainability.

WHY EXPLAINABILITY?

TRUST

The first step in building trust in a solution is understanding it. Explainability is centered on human-oriented understanding and transparency, forming the foundation of trustworthy AI for all stakeholders.

RELIABILITY

Explaining a model, including sensitivity and stress testing, lets us identify data leakage and any regions where the model behaves unstably. Once identified, these issues can be addressed to ensure that the models are robust and behave consistently.
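A minimal sketch of what such a sensitivity test can look like: perturb one input feature slightly and measure how far the model's output moves. The scoring function below is a hypothetical stand-in, not a real trained model.

```python
def model(features):
    """Toy stand-in for a trained model: a weighted sum clamped to [0, 1]."""
    weights = [0.4, -0.2, 0.7]
    score = sum(w * x for w, x in zip(weights, features))
    return max(0.0, min(1.0, score))

def sensitivity(model, features, index, delta=0.01):
    """Output change caused by a small perturbation of a single feature."""
    perturbed = list(features)
    perturbed[index] += delta
    return abs(model(perturbed) - model(features))

point = [0.5, 0.3, 0.2]
effects = [sensitivity(model, point, i) for i in range(len(point))]
# A disproportionately large effect flags a region of potentially
# unstable behaviour that should be stress-tested further.
```

Running the same check across many data points, and with larger deltas, turns this into a simple stress test of the regions where the model is least stable.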

FAIRNESS

Explainability helps us build fair solutions by identifying and analysing potential biases. The analysis is performed at various levels and provides an improved basis for decisions as well as input for de-biasing models.
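One common bias check is demographic parity: comparing the rate of positive outcomes across groups defined by a sensitive attribute. The sketch below uses hypothetical, illustrative predictions and group labels, not real data.

```python
def positive_rate(predictions):
    """Share of positive (1) decisions in a list of predictions."""
    return sum(predictions) / len(predictions)

def demographic_parity_diff(predictions, groups):
    """Absolute gap in positive-outcome rates between groups."""
    rates = []
    for g in set(groups):
        group_preds = [p for p, gr in zip(predictions, groups) if gr == g]
        rates.append(positive_rate(group_preds))
    return max(rates) - min(rates)

preds = [1, 0, 1, 1, 0, 1, 0, 0]        # model decisions (1 = approve)
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_diff(preds, groups)
# Group "a" is approved 75% of the time, group "b" only 25%:
# a gap this large is a signal to investigate and de-bias the model.
```

In practice this check is one of several fairness metrics, applied at different levels (overall, per segment, per intersection of attributes) to build the basis for de-biasing decisions.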

CHALLENGES

THE BLACK BOX

Self-learning systems are often referred to as "black boxes" because it is difficult for humans to understand how they operate. However, we are starting to unbox them using different methods that calculate and visualize how the outputs are generated...
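One widely used unboxing method is permutation importance: shuffle one feature across the data and measure how much the model's error grows; a large increase means the model relies on that feature. The model and data below are a hypothetical toy example in which only the first feature actually matters.

```python
import random

def model(row):
    """Toy stand-in for a black box: it truly uses only the first feature."""
    return 2.0 * row[0]

data = [[x * 0.1, random.random()] for x in range(20)]
targets = [2.0 * row[0] for row in data]

def mean_abs_error(rows, ys):
    return sum(abs(model(r) - y) for r, y in zip(rows, ys)) / len(ys)

def permutation_importance(rows, ys, index, seed=0):
    """Error increase after shuffling one feature column."""
    rng = random.Random(seed)
    column = [r[index] for r in rows]
    rng.shuffle(column)
    shuffled = [r[:index] + [v] + r[index + 1:] for r, v in zip(rows, column)]
    return mean_abs_error(shuffled, ys) - mean_abs_error(rows, ys)

importances = [permutation_importance(data, targets, i) for i in (0, 1)]
# Shuffling feature 0 degrades the model; shuffling feature 1 changes nothing,
# revealing which inputs the "black box" actually depends on.
```

The same idea scales to real models: the technique treats the model purely as a function from inputs to outputs, which is exactly why it works on black boxes.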

VARIABILITY

Different stakeholders, as well as different individuals, require different levels and types of explanation. The stakeholders range from model developers to product owners, internal and external compliance, and the end users themselves...

LIMITS OF THE HUMAN MIND

The human mind has cognitive limits when it comes to grasping high-dimensional feature spaces and non-linear interactions. These are, however, key characteristics of most high-performing AI solutions...