In AI We Trust: When And How Should AI Explain its Decisions?


As artificial intelligence (AI) increasingly makes decisions, there are growing concerns around AI decision-making and how it reaches its answers.

by Sam Genway

AI can be complex. Unlike traditional algorithms, AI does not follow a set of predefined rules. Instead, AI models learn to recognize patterns – such as when a component of a machine will fail or whether a transaction is fraudulent – by building their own rules from training data. Once an AI model is shown to give the right answers, it is set loose in the real world.
However, getting the right answer does not necessarily mean it was reached in the right way. An AI model could be successfully trained to recognize, for instance, the difference between wolves and huskies. However, it might later transpire that the AI model really learned to tell the difference based on whether there was snow in the background.

Right now, completely automating decisions is considered a step too far.
Sam Genway, Tessella

This approach will work most of the time, but as soon as the model needs to spot a husky outside its usual snowy setting, it will presumably fail. If we rely on AI (or indeed humans) being right for the wrong reasons, it limits where they can work effectively.
We may instinctively feel that any machine decision must be understandable, but that’s not necessarily the case. We must distinguish between trust (whether we are confident that our AI gets the right answer) and explainability (how it reached that answer). We always need to have a level of trust demonstrated when using an AI system, but only sometimes do we need to understand how it got there.
Take an AI model that decides whether a machine needs maintenance to avoid a failure. If we can show that the AI is consistently right, we don’t even need to know what features in the data it used to reach that decision. Of course, not all decisions will be correct, and that holds whether it’s a human or a machine making the decision. If AI gets 80% of calls on machine maintenance right, compared to 60% for human judgement, then it’s likely a benefit worth having, even if the decision-making isn’t perfect, or fully understood.
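To make that concrete, here is a minimal sketch (Python with scikit-learn, on synthetic stand-in data, so the features and labels are purely illustrative) of demonstrating trust without explanation: the model is judged only on how often it is right on data it has never seen.

```python
# A minimal sketch of establishing trust through held-out accuracy alone,
# with no attempt to explain which features the model relied on.
# The sensor readings and failure labels below are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))               # hypothetical sensor readings
y = (X[:, 0] + X[:, 3] > 1).astype(int)      # hypothetical "needs maintenance" label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Trust here rests on consistent performance on unseen data,
# not on understanding how the model arrives at each call.
print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```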
On the other hand, there are many situations where we do need to know how the decision was made. There may be legal or business requirements to explain why a decision was taken, such as why a loan was rejected. Banks need to be able to see which specific features in their data, or which combination of features, led to the final decision to grant or reject a loan.
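One hedged illustration of that kind of feature-level insight: for a simple linear model, each feature's contribution to a single applicant's decision can be read straight off its coefficients. The feature names and data below are hypothetical, not a real credit model.

```python
# Illustrative only: which features pushed one (synthetic) loan decision,
# using a linear model whose per-feature contributions are directly readable.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "years_employed", "missed_payments"]
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - 2 * X[:, 3] > 0).astype(int)   # hypothetical "loan approved" label

model = LogisticRegression().fit(X, y)

applicant = X[0]
# For a linear model, coefficient * feature value gives each feature's
# contribution to the decision score for this specific applicant.
contributions = model.coef_[0] * applicant
for name, value in sorted(zip(feature_names, contributions),
                          key=lambda pair: abs(pair[1]), reverse=True):
    print(f"{name:>16}: {value:+.2f}")
```

More complex models need dedicated attribution methods, but the principle is the same: surface which inputs drove a specific outcome.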

How do we know when AI decisions are right?

In other cases, it is important to know why the decision is the right one; we wouldn’t want a cancer diagnosis tool to have the same flawed reasoning as the husky AI. Medicine in particular presents ethical gray areas. Let’s imagine an AI model is shown to recommend the right life-saving medical treatment more often than doctors do. Should we go with the AI even if we don’t understand how it reached the decision? Right now, completely automating decisions like this is considered a step too far.

And explainability is not just about how AI reaches the right answer. There may be times when we know an AI model is wrong, for example if it develops a bias against women, without knowing why. Explaining how the AI system has exploited inherent biases in the data could give us the understanding we need to improve the model and remove the bias, rather than throwing the whole thing out. As with anything in AI, there are few easy answers, but asking how explainable you need your AI to be is a good starting point.
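Returning to the bias example above, here is an illustrative sketch of how such a problem might first be surfaced: compare the model's rate of favourable decisions across groups. The data and variable names are hypothetical placeholders for a real audit.

```python
# A hypothetical audit step: compare favourable-outcome rates across groups.
# A large gap would flag a possible bias worth explaining and fixing.
import numpy as np

rng = np.random.default_rng(2)
predictions = rng.integers(0, 2, size=1000)        # 1 = favourable decision (synthetic)
is_female = rng.integers(0, 2, size=1000).astype(bool)

rate_women = predictions[is_female].mean()
rate_men = predictions[~is_female].mean()
print(f"Favourable rate (women): {rate_women:.2f}")
print(f"Favourable rate (men):   {rate_men:.2f}")
print(f"Gap between groups:      {abs(rate_women - rate_men):.2f}")
```

A check like this tells you that something is wrong; explaining which features in the data the model exploited is what tells you how to fix it.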

As AI turns to increasingly challenging problems further from human experience, there will still have to be human experts who can help qualify the explanations.

If complete model transparency is vital, then a white-box (as opposed to a black-box) approach is important. Transparent models which follow simple sets of rules allow us to explain which factors were used to make any decision, and how they were used. But there are trade-offs: limiting AI to simple rules also caps its complexity, and with it its ability to tackle the hardest problems, such as beating world champions at strategy games. Where complexity brings greater accuracy, there is a balance to be struck between the best possible result and understanding that result.
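As a rough sketch of the white-box end of that trade-off (Python with scikit-learn, on a standard toy dataset), the example below trains a shallow decision tree whose complete rule set can be printed and inspected.

```python
# A transparent, rule-based model: every decision path can be read in full.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# The printed rules are the whole model; the price of that clarity is that
# a depth-2 tree can only capture fairly simple patterns.
print(export_text(tree, feature_names=list(data.feature_names)))
```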
A compromise may be the ability to get some understanding of particular decisions, without needing to understand how the AI model functions in its entirety. For example, users of an AI model which classifies animals in a zoo may want to drill down into how a tiger is classified. This can tell them which information the model uses to identify a tiger (perhaps the stripes, the face, and so on), but not how it classifies other animals, or how it works in general. This allows you to use a complex AI model, but zoom in on the local models that drive specific outputs where needed.
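A hedged sketch of that compromise, in the spirit of local surrogate methods such as LIME: perturb a single input, ask the complex model to label the neighbourhood, and fit a simple linear model locally, so that one decision can be inspected without explaining the black box globally. Everything below is synthetic and illustrative.

```python
# Local explanation of one prediction from a black-box model,
# via a simple surrogate fitted only around that input.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 5))
y = (np.sin(X[:, 0]) + X[:, 1] ** 2 > 1).astype(int)   # synthetic labels

black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

instance = X[0]
# Perturb the instance of interest and let the black box label the neighbourhood.
neighbours = instance + rng.normal(scale=0.3, size=(500, 5))
neighbour_probs = black_box.predict_proba(neighbours)[:, 1]

# Weight neighbours by closeness, then fit a simple linear surrogate locally.
weights = np.exp(-np.linalg.norm(neighbours - instance, axis=1) ** 2)
surrogate = Ridge(alpha=1.0).fit(neighbours, neighbour_probs, sample_weight=weights)

# The surrogate's coefficients approximate which features drove this one prediction.
print("Local feature influence:", np.round(surrogate.coef_, 3))
```

The surrogate says nothing about how the model behaves elsewhere; it explains one output, which is often all that is needed.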

Who should AI be explainable to?

There is also the question of “explainable to whom?” Explanations about an animal classifier can be understood by anyone: most people could appreciate that if a husky is being classified as a husky because there is snow in the background, the AI is right for the wrong reasons. But an AI which classifies, say, cancerous tissue would need to be assessed by an expert pathologist. For many AI challenges, such as automating human processes, there will have to be human experts who can help qualify the explanations.
However, as AI turns to increasingly challenging problems further from human experience, the utility of explanations will surely come into question. In the early days of mainstream AI, many were satisfied with a black box which gave answers. As AI is used more and more for applications where decisions need to be explainable, the ability to look under the hood of the AI model and understand how those decisions are reached will become more important.
There is no single definition of explainability: it can be provided at many different levels depending on need and problem complexity. Organizations need to consider issues such as ethics, regulations, and customer demand alongside the need for optimization – in relation to the business problem they are trying to solve – before deciding whether and how their AI decisions should be explainable. Only then can they make informed decisions about the role of explainability when developing their AI systems.

