What is AI explainability, and why is it important? Here is my understanding of it, along with some examples.
➤ Often, when people want AI to be explainable, what they want is for it to provide human-understandable reasons for its decisions.
We typically don't have these reasons because of how AI works. AI searches for patterns in the data it is trained on and then applies them to new data, and we usually don't know which patterns it latched on to.
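Here is a minimal sketch of that problem (synthetic data with scikit-learn; everything in it is illustrative): a "background" feature happens to correlate with the label during training, the model quietly builds its accuracy on that shortcut, and nothing in its ordinary output reveals this until the correlation breaks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

def make_data(correlated):
    """Labels depend on a weak 'real' signal; a 'background' feature
    (think: snow in a photo) tracks the label only when asked to."""
    y = rng.integers(0, 2, n)
    real_signal = y + rng.normal(0, 2.0, n)   # weak, noisy true signal
    background = (y if correlated else rng.integers(0, 2, n)) + rng.normal(0, 0.1, n)
    return np.column_stack([real_signal, background]), y

X_train, y_train = make_data(correlated=True)   # background tracks the label
X_test, y_test = make_data(correlated=False)    # correlation broken at deployment

model = LogisticRegression().fit(X_train, y_train)
print("accuracy on fresh training-like data:", model.score(*make_data(correlated=True)))
print("accuracy once the shortcut breaks:   ", model.score(X_test, y_test))
print("learned weights [real, background]:  ", model.coef_[0])
```

High accuracy alone says nothing about which pattern the model is using; here only inspecting the weights (or hitting a distribution shift) exposes the shortcut.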
But we want explanations of AI decisions for various reasons, including:
➤ AI can make decisions for the wrong reasons
For example, an AI made to distinguish between images of wolves and huskies was seemingly highly accurate, but on closer examination it turned out to be a snow detector: it gave the label "wolf" to images containing snow and the label "husky" to images without it. Presumably, most images of wolves the AI trained on contained snow, and most images of huskies did not, so it latched on to the wrong pattern. Read more about it here.
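That finding comes from the LIME paper (Ribeiro et al., 2016, "Why Should I Trust You?"), which explains individual predictions by highlighting the image regions that drove them. Below is a minimal sketch using the `lime` package; the brightness-based `predict_fn` is a stand-in "snow detector" you would replace with a real model.

```python
# pip install lime scikit-image
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))  # stand-in photo; use a real wolf/husky image

def predict_fn(images: np.ndarray) -> np.ndarray:
    """Stand-in 'snow detector': brighter images score higher for 'wolf'.
    Replace with your real model's batch predict-probabilities function."""
    wolf = images.reshape(len(images), -1).mean(axis=1)
    return np.column_stack([wolf, 1.0 - wolf])  # columns: [wolf, husky]

explainer = lime_image.LimeImageExplainer(random_state=0)
explanation = explainer.explain_instance(
    image,             # the photo to explain
    predict_fn,        # the black-box classifier
    top_labels=2,      # explain the two highest-scoring classes
    num_samples=1000,  # perturbed copies of the image used to fit the explanation
)

# Keep only the superpixels that pushed the top class up; for the real
# wolf/husky model, the highlighted region was the snow, not the animal.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
outlined = mark_boundaries(img, mask)  # image with explaining regions outlined
```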
➤ AI can make decisions for discriminatory and/or illegal reasons
For example, starting in 2014, Amazon developed a hiring algorithm that penalized resumes containing the word "women's" and downgraded graduates of two all-women's colleges (Amazon shut it down before it went live). Presumably, most past successful candidates were men whose resumes lacked these words, so the algorithm latched on to the wrong pattern. Read more about it here.
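With a linear model over word counts, this kind of bias can be read straight off the learned weights, which is one reason simpler models are easier to audit. A toy sketch with made-up resume snippets (note that CountVectorizer's default tokenizer reduces "women's" to the token "women"):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny made-up corpus; label 1 = "hired" in biased historical data.
resumes = [
    "captain of the men's chess team, java developer",
    "java developer, rock climbing club",
    "captain of the women's chess team, java developer",
    "graduate of a women's college, java developer",
]
hired = [1, 1, 0, 0]  # the historical outcomes encode the bias

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The most negative weights mark the tokens the model penalizes.
ranked = sorted(zip(model.coef_[0], vec.get_feature_names_out()))
for weight, token in ranked[:3]:
    print(f"{token!r} pushes toward rejection (weight {weight:+.3f})")
```

A deep model trained on raw text offers no such direct read-off, which is part of why post-hoc explanation tools exist.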
➤ Explanations can help us decide whether an algorithm is trustworthy
AI algorithms can be very helpful when they do a good job, and having explanations can help us decide which ones to trust.
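One concrete trust check is permutation importance (sketched below on synthetic data): shuffle each feature and measure how much the model's score drops. If the model leans hardest on a feature a domain expert says should not matter, that is a warning sign.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, n_informative=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature in turn and record the drop in held-out score.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)

for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: score drop {result.importances_mean[i]:.3f}")
```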
➤ People have a right to get an explanation for high-stakes decisions
For example, in the US, the Equal Credit Opportunity Act requires that applicants who are denied credit be given specific reasons for the denial. Read more about the right to explanation here.
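For a linear scoring model, one simple way to produce such reasons (sometimes called "reason codes") is to rank features by how far they pulled this applicant's score below a baseline. A minimal sketch with made-up features and weights; a compliant adverse-action process involves much more than this:

```python
import numpy as np

features = ["credit_utilization", "payment_delinquencies", "account_age_years", "recent_inquiries"]
weights = np.array([-2.0, -3.0, 1.5, -1.0])  # made-up logistic-regression weights
baseline = np.array([0.3, 0.2, 8.0, 1.0])    # e.g., population-average applicant
applicant = np.array([0.9, 2.0, 1.0, 4.0])   # the applicant being scored

# Contribution of each feature relative to the baseline applicant:
# positive pushes the score up, negative pushes it toward denial.
contributions = weights * (applicant - baseline)

# The most negative contributions become the stated reasons for denial.
order = np.argsort(contributions)
print("Principal reasons for denial:")
for i in order[:2]:
    print(f"- {features[i]} (contribution {contributions[i]:+.2f})")
```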
➤ Read the discussion on LinkedIn here