2020-02-26 22:45:48

Discussion Paper:


Overview

  1. Predictive Risk Modelling: predict the likelihood of future adverse outcomes.
  2. Application: Child Maltreatment Screened-In Calls (Allegheny County, PA).
  3. Goals:
    • Compare the county’s current model with the authors’ recommended alternative.
    • Understand the predictive bias associated with these algorithms.

Where’s the Need?

  • 3.6 million calls are made to Child Protective Services yearly in the US.
  • 37% of US children are investigated for child abuse or neglect at some point before age 18.
  • Calls are screened in or out using local practices and policies.
  • Even with administrative data linked to the calls, caseloads are overwhelming.
  • Enter: Predictive Risk Modelling.

Predictive Risk Modelling:

  • Using administrative data, predict the likelihood of future adverse outcomes.
  • Services can then be targeted to the riskiest cases \(\implies\) adverse outcomes are prevented before they occur (sketched below).
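
A sketch of the idea in symbols (the outcome definition and the two-year window are illustrative assumptions, not the exact Allegheny specification):

\[
\hat{p}_i = \Pr\left(Y_i = 1 \mid x_i\right),
\]

where \(x_i\) is the vector of administrative features linked to call \(i\) (e.g. prior referrals, prior out-of-home placements) and \(Y_i = 1\) marks a future adverse outcome (e.g. a re-referral within two years). Services are then prioritised for the calls with the largest \(\hat{p}_i\), e.g. those with \(\hat{p}_i \ge \tau\) for some operating threshold \(\tau\).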

Negatives: Contentious Debate

  • Certain communities - e.g. families living in poverty - are disadvantaged by the reliance on administrative data, since they appear in public records far more often.
  • Such families are more likely to be flagged as high risk \(\implies\) the original bias is exacerbated.

Positives:

  • Replaces variable human judgement and the biases that come with it.

Allegheny County, PA


Child Welfare Screening Process


The Modelling Process:


The Allegheny County Model: Logistic Regression
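
A minimal sketch of a logistic-regression risk model; the feature names, outcome definition, and toy numbers are illustrative assumptions, not the actual Allegheny County specification:

    # Minimal logistic-regression risk model on hypothetical admin-data features;
    # these are NOT the actual Allegheny County features.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    calls = pd.DataFrame({
        "prior_referrals":    [0, 3, 1, 7, 2, 0, 5, 1],
        "prior_placements":   [0, 1, 0, 2, 0, 0, 1, 0],
        "months_on_benefits": [0, 24, 6, 48, 12, 0, 36, 3],
        "adverse_outcome":    [0, 1, 0, 1, 0, 0, 1, 0],  # e.g. re-referral within 2 years
    })

    X = calls.drop(columns="adverse_outcome")
    y = calls["adverse_outcome"]

    model = LogisticRegression(max_iter=1000).fit(X, y)
    calls["risk_score"] = model.predict_proba(X)[:, 1]

    # Coefficients are log-odds: exp(coef) is the multiplicative change in the
    # odds of the adverse outcome per unit increase in that feature.
    for name, coef in zip(X.columns, model.coef_[0]):
        print(f"{name}: odds ratio = {np.exp(coef):.2f}")

    print(calls.sort_values("risk_score", ascending=False))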

Comparison of Models:
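
A sketch of how two candidate models could be compared on held-out historical data. Gradient boosting stands in for the authors’ recommended alternative purely for illustration (the paper’s actual choice may differ), and the data are synthetic because the linked county data are confidential:

    # Compare candidate models by cross-validated AUC on synthetic stand-in data.
    # Gradient boosting is only an assumed placeholder for the alternative model.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=2000, n_features=20,
                               weights=[0.85], random_state=0)

    candidates = {
        "logistic regression": LogisticRegression(max_iter=1000),
        "gradient boosting":   GradientBoostingClassifier(random_state=0),
    }
    for name, model in candidates.items():
        auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
        print(f"{name}: AUC = {auc.mean():.3f} (+/- {auc.std():.3f})")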


Predictive Bias
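
One way to make “predictive bias” concrete: check, for each group, whether the scores are calibrated (mean predicted risk tracks the observed outcome rate) and whether error rates such as the false-positive rate differ at the operating threshold. The group labels, scores, and threshold below are synthetic placeholders:

    # Group-wise calibration and false-positive-rate check on synthetic scores.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 10_000
    df = pd.DataFrame({
        "group": rng.choice(["A", "B"], size=n),
        "score": rng.uniform(0, 1, size=n),
    })
    # Draw outcomes so the scores are roughly calibrated overall.
    df["outcome"] = rng.uniform(0, 1, size=n) < df["score"]

    threshold = 0.5                      # illustrative operating point
    df["flagged"] = df["score"] >= threshold

    for g, sub in df.groupby("group"):
        fpr = sub.loc[~sub["outcome"], "flagged"].mean()  # P(flagged | no adverse outcome)
        print(f"group {g}: mean score = {sub['score'].mean():.2f}, "
              f"outcome rate = {sub['outcome'].mean():.2f}, FPR = {fpr:.2f}")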


Discussion Leader Thoughts:

  • Q (broad): Doesn’t it all come down to the amount and type of data that exists “in nature”?
  • Q: To address racial bias, would applying the model separately to each group (e.g. racial groups) be “fair”? (Though, I think, it would be prudent.) And could these separate models then be combined somehow? (See the sketch after this list.)
  • Q: Could the numeric data be combined with text transcribed from the referral calls, so that case severity reflects the current situation rather than only history? E.g. new cases in the system have no history at all.
  • Q: What if the state included “school attendance and performance” among the features, so that a child’s welfare is observed more directly?
    • Too far into Orwellian territory?
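
On the per-group question above: a minimal sketch of what “one model per group” could look like, on synthetic data. Whether conditioning on a protected attribute this way is fair (or even permissible) is exactly the open question, so this only illustrates the mechanics:

    # Fit a separate logistic regression per group and dispatch on group at
    # scoring time. Data and group labels are synthetic placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    y = (rng.uniform(size=1000) < 0.2).astype(int)
    groups = rng.choice(["A", "B"], size=1000)

    per_group = {
        g: LogisticRegression(max_iter=1000).fit(X[groups == g], y[groups == g])
        for g in np.unique(groups)
    }

    def risk_score(x, group):
        """Score one case with the model fitted for its group."""
        return per_group[group].predict_proba(x.reshape(1, -1))[0, 1]

    print(risk_score(X[0], groups[0]))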