The Bias and Fairness Audit Toolkit for Machine Learning


Aequitas is an open-source bias-audit toolkit that helps machine learning developers, analysts, and policymakers audit models for discrimination and bias, and make informed, equitable decisions when developing and deploying predictive risk-assessment tools.

Learn more about the project.

Sample Jupyter Notebook

Explore a bias analysis of the COMPAS dataset using the Aequitas library.
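To give a flavor of what such an audit computes, the sketch below derives a per-group false positive rate and a disparity ratio from toy predictions in plain Python. This is an illustrative example of the kind of group metric a bias audit reports, not the Aequitas API itself; the data, group labels, and reference-group choice are all hypothetical.

```python
from collections import defaultdict

def false_positive_rate_by_group(records):
    """Per-group false positive rate from (group, predicted_score, true_label) rows.

    FPR = false positives / actual negatives, computed within each group.
    """
    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # actual negatives per group
    for group, score, label in records:
        if label == 0:
            neg[group] += 1
            if score == 1:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g] > 0}

# Toy data (hypothetical): (group, predicted score, true label)
data = [
    ("A", 1, 0), ("A", 0, 0), ("A", 0, 0), ("A", 1, 1),
    ("B", 1, 0), ("B", 1, 0), ("B", 0, 0), ("B", 0, 1),
]

fpr = false_positive_rate_by_group(data)
# Disparity ratio: each group's FPR relative to a reference group ("A" here).
disparity = {g: rate / fpr["A"] for g, rate in fpr.items()}
```

Here group B's false positive rate is twice group A's, a disparity an auditor would flag for review. Aequitas automates this kind of crosstab across many metrics and protected attributes.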