Fairness Tree Workshop - Presented at APPAM (Association for Public Policy Analysis and Management) 2023
A Guide to Determining Fairness Goals for AI Systems through Facilitated Multistakeholder Discussions
Presenters
- Lingwei Cheng, Carnegie Mellon University
- Rayid Ghani, Carnegie Mellon University
- Kit Rodolfa, RegLab, Stanford University
Why this workshop?
The typical process for eliciting fairness goals when designing ML systems today is not systematic: it either selects an arbitrary fairness metric to optimize, or computes and audits disparities across a wide range of metrics (often, all of the ones that can be computed; a sketch of such an audit appears at the end of this section). This typically results in a system that is not designed for the use case under consideration and does not produce fair and equitable outcomes. The Fairness Tree (FT) is a simple framework that guides stakeholders involved in design and development decisions in:
- prioritizing the notions of fairness that most appropriately match the use case and the deployment context of the socio-technical system being developed;
- supporting a collaborative and transparent process for eliciting fairness requirements;
- mediating across the conflicting fairness needs of different stakeholder groups by pinpointing the underlying source of the conflicting priorities.
Participants will leave the workshop with enhanced proficiency in eliciting, understanding, discussing, and managing conflicting fairness objectives, which they can then apply to design better, more equitable ML systems.
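To make the contrast concrete, the sketch below shows the kind of broad, unprioritized audit described above: computing several common group fairness metrics and their disparities all at once. It is a minimal illustration, assuming a pandas DataFrame with hypothetical group, label (true outcome), and score (binary decision) columns; these names are not from the workshop materials. FT's role is to narrow this menu to the metrics that actually matter for a given use case.

```python
# Minimal sketch of an unprioritized fairness audit: compute several common
# group metrics and compare each group to a reference group. Column names
# (group, label, score) are illustrative, not from the workshop materials.
import pandas as pd

def group_metrics(df: pd.DataFrame) -> pd.DataFrame:
    """Per-group recall, false positive rate, and false discovery rate."""
    rows = []
    for group, g in df.groupby("group"):
        tp = ((g.score == 1) & (g.label == 1)).sum()
        fp = ((g.score == 1) & (g.label == 0)).sum()
        fn = ((g.score == 0) & (g.label == 1)).sum()
        tn = ((g.score == 0) & (g.label == 0)).sum()
        rows.append({
            "group": group,
            "recall": tp / (tp + fn) if tp + fn else float("nan"),
            "fpr": fp / (fp + tn) if fp + tn else float("nan"),
            "fdr": fp / (tp + fp) if tp + fp else float("nan"),
        })
    return pd.DataFrame(rows).set_index("group")

def disparities(metrics: pd.DataFrame, reference: str) -> pd.DataFrame:
    # Ratio of each group's metric to the reference group's; values far
    # from 1.0 flag a disparity on that metric.
    return metrics / metrics.loc[reference]
```

Auditing every cell of such a table treats all disparities as equally important; the exercises below use FT to decide which ones to prioritize.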
What will we cover?
In this interactive workshop, we will start with an overview of AI fairness and the AI fairness pipeline, survey the landscape of existing AI fairness tools, and show where FT fits in. To illustrate how FT works, we will present a series of case studies in which AI is being considered as a tool to allocate resources. Participants will work through the case studies role-playing each stakeholder group: determining fairness needs from each stakeholder's perspective, assessing the costs and benefits of the available interventions, and finally arriving at appropriate fairness metrics for the task at hand. The key discussion points are shown in the worksheets below. Participants may arrive at different fairness requirements based on their priorities, and we will show how FT can help them identify the underlying sources of conflicting priorities, which may lead to different fairness needs for different stakeholder groups. The discussions will conclude with how to make sense of and operationalize these requirements. A simplified sketch of the branching logic behind FT follows.
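As a preview, here is one simplified, hypothetical reading of the branching logic FT walks stakeholders through: whether the intervention is punitive or assistive determines whether false positives or false negatives carry the harm, and resource constraints determine the population over which errors are measured. The function name and exact metric labels below are illustrative; the actual tree has more branches and is meant to structure discussion, not replace it.

```python
# Simplified, hypothetical sketch of the kind of branching logic the
# Fairness Tree encodes; the real framework has more branches and is
# applied through facilitated discussion rather than code.

def suggest_metric(punitive: bool, resource_constrained: bool) -> str:
    """Map properties of an intervention to a candidate fairness metric."""
    if punitive:
        # Harm falls on people wrongly flagged, so focus on false positives.
        return ("false discovery rate parity" if resource_constrained
                else "false positive rate parity")
    # Assistive intervention: harm falls on people missed, so focus on
    # false negatives among those with actual need.
    return ("recall parity among those with need" if resource_constrained
            else "false negative rate parity")

# e.g., rental assistance with limited slots (assistive, constrained):
print(suggest_metric(punitive=False, resource_constrained=True))
```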
Pre-requisites
- Please bring a laptop so you can interact with the Fairness Tree
- Caring about the world, fairness, and equity
Schedule and Structure
Worksheets:
- Exercise 1: Determining the Societal and Policy Goals
- Exercise 2: Cost & Benefit Analysis of the Intervention
- Exercise 3: Determining Fairness Metrics to Prioritize
- Motivating Case Studies
  - Case Study 1: Child Welfare: Determining whether to open an investigation for a reported child maltreatment case based on predicted risk of maltreatment
  - Case Study 2: Housing: Prioritizing rental assistance allocation based on predicted risk of future homelessness
  - Case Study 3: Tax/Fraud Audits: Determining which tax returns to prioritize for audit based on predicted risk of fraud/abuse/error
- Understanding overall fairness and equity goals when building Data Science/ML/AI systems
- Exploring different views around ML and fairness
- Defining the desired societal goals of the system (from the perspective of each stakeholder group)
  - Overview
  - Breakout activity: Determining the Societal and Policy Goals
- Understanding the costs and benefits of interventions (from the perspective of each stakeholder group)
  - Overview
  - Breakout activity: Cost & Benefit Analysis of the Intervention
- Applying the Fairness Tree Framework
  - Overview
  - Breakout activity: Determining Fairness Metrics to Prioritize
- Discussion
- Wrap-up