Sources of Bias
How AI systems in finance can unintentionally perpetuate or worsen biases.
Transcript
In finance, fairness isn't just a moral obligation.
It is a regulatory and reputational requirement.
Yet AI systems, if not carefully designed, can unintentionally reinforce or even worsen existing biases.
Let's break down where bias can enter.
First, data collection.
AI models trained on historical data can reflect past inequalities.
For example, loan data influenced by redlining or gender disparities can embed discrimination into future credit decisions.
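As a minimal sketch of that mechanism (all data, feature names, and rates below are synthetic and invented for illustration), here is how a model fitted to historically biased approval labels reproduces the disparity:

```python
# Sketch: historical bias in training labels propagates to a new model.
# All data is synthetic; feature names and rates are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical underlying creditworthiness distributions.
group = rng.integers(0, 2, n)              # 0 = group A, 1 = group B
income = rng.normal(50, 15, n)             # same distribution for both groups
debt_ratio = rng.normal(0.3, 0.1, n)

# Historical approvals: same financial criteria, but group B was
# penalized by past discriminatory practice (the embedded bias).
score = 0.05 * income - 5.0 * debt_ratio - 1.5 * group
historical_approved = (score + rng.normal(0, 1, n) > 1.0).astype(int)

# A new model trained on those labels learns the discrimination:
# the group coefficient simply picks up the historical penalty.
X = np.column_stack([income, debt_ratio, group])
model = LogisticRegression().fit(X, historical_approved)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"Predicted approval rate, group {g}: {rate:.2%}")
# Group B's approval rate stays depressed: the model has encoded
# the historical inequality as if it were genuine signal.
```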
Next, algorithm design: decisions on which features to include, how to weight them, and what outcomes to optimize for can all introduce bias.
Even seemingly independent data like zip codes can unintentionally correlate with protected characteristics like race.
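A quick way to see the proxy effect is a correlation screen before training. This hedged sketch uses synthetic data; the zip codes, segregation rate, and threshold are invented:

```python
# Sketch: a "neutral" feature can act as a proxy for a protected attribute.
# Synthetic data; the segregation pattern and threshold are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Residential segregation: group membership strongly predicts zip code.
group = rng.integers(0, 2, n)
zip_code = np.where(rng.random(n) < 0.8, group, 1 - group)  # 80% aligned

# Even with `group` excluded from a model's inputs, zip code
# carries most of the same information.
corr = np.corrcoef(zip_code, group)[0, 1]
print(f"Correlation between zip code and protected attribute: {corr:.2f}")

# A simple screen: flag any candidate feature whose correlation with
# the protected attribute exceeds a chosen threshold before training.
THRESHOLD = 0.3
if abs(corr) > THRESHOLD:
    print("zip_code flagged as a potential proxy -- review before use")
```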
Then deployment: once live, models might behave differently across populations, especially if they were not tested on diverse data.
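One way teams probe for this before and after launch is disaggregated evaluation, measuring error rates per population segment rather than in aggregate. The sketch below uses invented segments and synthetic labels:

```python
# Sketch: disaggregated evaluation -- check performance per segment
# instead of one aggregate number. Data and segments are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
segment = rng.choice(["urban", "suburban", "rural"], n)
y_true = rng.integers(0, 2, n)

# Pretend the deployed model is weaker on the segment it rarely saw
# in training (here, "rural" gets noisier predictions).
flip_prob = np.where(segment == "rural", 0.35, 0.10)
y_pred = np.where(rng.random(n) < flip_prob, 1 - y_true, y_true)

for seg in ("urban", "suburban", "rural"):
    mask = segment == seg
    acc = (y_pred[mask] == y_true[mask]).mean()
    print(f"{seg:>9}: accuracy {acc:.2%} on {mask.sum()} cases")
# A large gap between segments is a deployment red flag, even when
# the overall accuracy looks acceptable.
```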
Finally, feedback loops: AI systems that continuously learn from user behavior can reinforce historical trends.
For example, if certain demographics are persistently denied credit, the model will continue to treat them as high risk.
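The mechanics of that loop can be sketched directly: denied applicants never generate repayment outcomes, so the model's estimate for their group can never recover. The repayment rates and approval threshold here are invented for illustration:

```python
# Sketch of a credit feedback loop: denied applicants generate no
# repayment data, so a biased starting estimate never gets corrected.
# All rates and thresholds are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
TRUE_REPAY = {"A": 0.85, "B": 0.85}   # both groups equally creditworthy
estimate = {"A": 0.85, "B": 0.60}     # B starts penalized by biased history
APPROVE_AT = 0.70

for round_ in range(5):
    for g in ("A", "B"):
        if estimate[g] >= APPROVE_AT:
            # Approved: observe real repayments and update the estimate.
            outcomes = rng.random(1000) < TRUE_REPAY[g]
            estimate[g] = 0.5 * estimate[g] + 0.5 * outcomes.mean()
        # Denied: no outcomes observed, so the estimate never updates.
    print(f"round {round_}: est A={estimate['A']:.2f}, est B={estimate['B']:.2f}")
# Group B stays frozen below the approval threshold forever:
# the loop reinforces the original bias instead of correcting it.
```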
According to the OECD and World Economic Forum, these risks are especially serious in finance, where AI systems increasingly influence decisions around access to credit, insurance, and capital.