Audace Nakeshimana
MIT EECS | Keel Foundation Undergraduate Research and Innovation Scholar
Learned Intermediate Input Representation with Task-Independent Fairness Guarantees
2019–2020
EECS
- Artificial Intelligence & Machine Learning
Richard R. Fletcher
Unequal representation in datasets used to train machine learning algorithms can lead to systematic disparities in error rates. These disparities can produce unfavorable outcomes, especially when predictions are used to make decisions that affect people’s lives. Fairness metrics and mitigation techniques have been developed to address this problem, but a major limitation of current approaches is that they typically target a single inference task: a different technique must be applied for each new task performed on the same data. To avoid this repetitive and costly process, I explore techniques for generating data representations that satisfy desired fairness constraints, so that the dataset can be used “as is,” without further effort to remove systematic error disparities.
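The abstract does not spell out the method, but the general idea of a task-independent fair representation can be illustrated with a minimal linear sketch: strip from the features every component that is linearly predictable from the protected attribute, so that any downstream model trained on the transformed data inherits that invariance regardless of the prediction task. The names below (`decorrelate`, `X`, `s`) are illustrative assumptions, not from the source, and real approaches (e.g., adversarially learned representations) handle nonlinear dependence as well.

```python
import numpy as np

def decorrelate(X, s):
    """Return a representation of X whose columns are linearly
    uncorrelated with the protected attribute s.

    Removes the component of each (centered) feature that is
    explained by s, so a downstream linear model trained on the
    result cannot exploit linear dependence on s -- for any task.
    (Illustrative sketch only; not the author's actual method.)
    """
    Xc = X - X.mean(axis=0)
    sc = (s - s.mean()).reshape(-1, 1)
    # Least-squares coefficient of each feature column on s.
    beta = (sc.T @ Xc) / (sc.T @ sc)
    # Subtract the part of X that is predictable from s.
    return Xc - sc @ beta

# Toy data: 200 samples, 5 features, binary protected attribute.
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 5)) + 0.8 * s[:, None]  # features leak s

Z = decorrelate(X, s)
print(np.corrcoef(Z[:, 0], s)[0, 1])  # ~0: linear leakage removed
```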
“A big challenge that our technologically advanced society faces today is that we have yet to understand how AI-based technologies affect us. I work to quantify and reduce the social cost of errors made by predictive technologies. Mitigating the unintended consequences of artificial intelligence will allow related technological advancements to steer the globe in a positive direction, in a way that does not perpetuate systemic inequality or injustice.”