Enterprises make an increasing number of decisions every day, and many of these are now made by algorithms inside AI systems. Humans can make decisions while unconsciously harboring biases, so we must take care that those biases are not reflected in, or amplified by, our AI's decision making. Used well, AI can help reduce unfair bias and make our decision making more accountable, helping us avoid disparate impacts.
Because biases can be both deliberate and unintentional, we need to understand the contexts in which they may emerge. A common example of unintentional bias is disparate impact (also known as indirect discrimination): a policy, rule, or outcome that disproportionately harms a particular group of people even though no such effect was intended. Addressing unintentional bias is especially important because it can cause significant harm to groups across our society.
AI can help us identify and address these biases. Disparate impact assessments and fairness assessments offer one concrete way to tackle the problem. Fairness itself can be difficult to pin down, which is why it is essential to settle on specific fairness metrics and guidelines for every AI project.
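As a concrete illustration of such a metric, the sketch below computes a disparate impact ratio using the common "four-fifths rule" heuristic: the selection rate for one group should be at least 80% of the rate for the most-favored group. The data and the 0.8 threshold are illustrative assumptions, not a definitive assessment methodology.

```python
def selection_rate(outcomes):
    """Fraction of favorable outcomes (e.g. loans approved) in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical loan-approval outcomes (1 = approved, 0 = denied)
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 80% approved
group_b = [1, 0, 1, 0, 0, 1, 0, 1, 0, 0]   # 40% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50
if ratio < 0.8:
    print("Potential disparate impact: ratio falls below the 0.8 threshold")
```

A real assessment would also weigh sample sizes, confounders, and statistical significance; the open source libraries below implement this and many other fairness metrics out of the box.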
Some open source tools for fairness assessment:
- IBM’s AI Fairness 360 (AIF360)
- Microsoft’s Fairlearn