PREDICTING RECIDIVISM

STATUS: [ML AUDIT] // FOCUS: ALGORITHMIC FAIRNESS
SCIKIT-LEARN // ALGORITHMIC BIAS // DATA ANALYSIS // ETHICS

// AUDIT OBJECTIVE

This project involved a rigorous audit of machine learning models used to predict criminal recidivism (the likelihood of a convicted criminal reoffending). The goal was to evaluate the trade-offs between model complexity, accuracy, and fairness across different demographic groups.
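The accuracy side of that trade-off can be sketched with scikit-learn by pitting an interpretable linear model against a "black box" ensemble. This is an illustrative setup on synthetic data, not the audit's actual pipeline or dataset:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a recidivism dataset; real features would be
# criminal-history and demographic variables.
X, y = make_classification(n_samples=2000, n_features=10,
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Simple, interpretable model: every coefficient can be inspected.
simple = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# Complex "black box" model for comparison.
opaque = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

acc_simple = accuracy_score(y_test, simple.predict(X_test))
acc_opaque = accuracy_score(y_test, opaque.predict(X_test))
print(f"logistic regression: {acc_simple:.3f}  gradient boosting: {acc_opaque:.3f}")
```

Comparing the two held-out accuracies is the basic test of whether the added opacity buys any predictive power.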

// FINDINGS

Our analysis revealed that complex "black box" models did not meaningfully outperform simple, interpretable linear models in accuracy. The complex models did, however, obscure significant racial biases. By simplifying the model, we achieved comparable predictive power while making the decision-making process transparent and easier to audit for equity.
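One way such biases surface is in error rates that differ by group, for instance the false positive rate (people wrongly flagged as likely to reoffend). A minimal sketch of that check, using random placeholder predictions and hypothetical group labels rather than the audit's real data:

```python
import numpy as np

# Placeholder predictions and demographic labels; the real audit computed
# these metrics on held-out recidivism data.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)   # 1 = reoffended
y_pred = rng.integers(0, 2, size=1000)   # 1 = model flagged as high risk
group = rng.choice(["A", "B"], size=1000)

def false_positive_rate(y_true, y_pred, mask):
    """FPR = FP / (FP + TN), computed within the masked subgroup."""
    negatives = (y_true == 0) & mask
    return ((y_pred == 1) & negatives).sum() / max(negatives.sum(), 1)

fpr_by_group = {g: false_positive_rate(y_true, y_pred, group == g)
                for g in ("A", "B")}
for g, fpr in fpr_by_group.items():
    print(f"group {g}: FPR = {fpr:.3f}")
```

A large gap in FPR between groups means one group disproportionately bears the cost of wrong "high risk" labels; with an interpretable model, the coefficients driving that gap can be inspected directly.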

// ETHICAL IMPLICATIONS

The study underscores the danger of deploying opaque AI systems in high-stakes domains like criminal justice. It argues for a "Right to Explanation" and for prioritizing interpretable models over marginally more accurate black boxes.

// ANALYSIS NOTEBOOK

Review the full data analysis and fairness metrics in the Colab notebook.

VIEW ANALYSIS
< RETURN TO BASE