My understanding of AI fairness, and a correction of a common misconception.
➤ When people talk about AI fairness, they usually mean making sure AI doesn't replicate and reinforce social inequalities.
➤ AI can replicate and reinforce social inequalities in many ways. A few examples:
-- Underperforming on social minorities
For example, a facial recognition algorithm trained on a dataset containing mostly white faces will likely fail to detect non-white faces.
Read about a real case where this happened: https://lnkd.in/eSJyDsEr
-- Reinforcing social patterns that lead to inequality
For example, if an algorithm notices that mostly women click on supermarket job ads, it may learn to show those ads mostly to women, reinforcing the very pattern it observed.
Read about a real case where this happened: https://lnkd.in/eqrvButt
-- More negative impacts on social minorities
For example, low-income workers are more likely to lose their jobs to AI systems, so AI adoption can deepen their existing economic disadvantage.
-- Unequal distribution of benefits
For example, if socially dominant groups have more access to AI, they will enjoy more of its benefits, which can increase the gap between them and marginalized groups.
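To see how the job-ad pattern above can feed on itself, here is a toy sketch (all numbers and the delivery rule are made up for illustration): if each group's share of ad impressions is set to its share of last round's clicks, even a small difference in click rates snowballs into an almost total skew.

```python
# Toy feedback-loop sketch with made-up numbers.
# Rule (hypothetical): each round, a group's share of ad impressions
# is set to its share of the previous round's expected clicks.

def next_share(share_women, rate_women=0.06, rate_men=0.05):
    """One round of click-proportional reallocation (hypothetical rule)."""
    clicks_women = share_women * rate_women        # expected clicks, women
    clicks_men = (1 - share_women) * rate_men      # expected clicks, men
    return clicks_women / (clicks_women + clicks_men)

share = 0.5  # start with an even split of impressions
for _ in range(30):
    share = next_share(share)

print(round(share, 3))  # → 0.996: nearly all impressions now go to women
```

A one-percentage-point difference in click rates (6% vs 5%) compounds every round, so the system ends up showing the ad almost exclusively to women, no biased training dataset required.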
➤ Correction of a misconception: Fairness in AI is more than bias in datasets!
-- When talking about fairness in AI, people often conflate fairness with diversity in datasets.
-- Diversity in datasets can go a long way toward improving the social impact of AI, because AI learns from the past data it is shown. If that data is biased in favor of a social group, the AI will likely privilege that group. That is what happened in the facial recognition case.
-- But as the examples above show, some fairness problems are not the result of learning biased patterns from biased past data.
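The "learning from biased data" mechanism can be made concrete with a tiny synthetic sketch (hypothetical task, groups, and numbers): a model that fits a single decision threshold to training data dominated by group A matches group A's pattern perfectly and underperforms on group B.

```python
# Toy sketch with synthetic data: 90% of training samples come from
# group A (true decision boundary at 5), 10% from group B (boundary at 8).

def fit_threshold(samples):
    """Pick the threshold with the fewest training errors (brute force)."""
    candidates = sorted({x for x, _, _ in samples})
    def errors(t):
        return sum((x > t) != label for x, _, label in samples)
    return min(candidates, key=errors)

train = [(x, "A", x > 5) for x in range(11)] * 9   # group A, 90% of data
train += [(x, "B", x > 8) for x in range(11)]      # group B, 10% of data

t = fit_threshold(train)  # the learned threshold lands on group A's boundary

def accuracy(group):
    pts = [(x, g, y) for x, g, y in train if g == group]
    return sum((x > t) == y for x, _, y in pts) / len(pts)

print(accuracy("A"), accuracy("B"))  # → 1.0 for A, about 0.73 for B
```

The model isn't "wrong" on its training objective; it simply optimizes for the group it saw most. That is the same dynamic by which a face dataset of mostly white faces yields a model that fails on everyone else.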
➤ So, say it loud, say it proud: Fairness in AI is more than bias in datasets!
➤ Join the discussion about this topic on my LinkedIn here.