Faces Project Log

This file contains a log of the progress on the faces project. Each log entry contains a date, a topic, an overview of what the linked markdown contains, links to the output, a summary of the conclusions (where applicable), and next steps.



Date: Wednesday, January 20, 2020

Topic: TABLE 01 - Configurations 1 and 2

Overview: An overview of what the markdown contains

This update contains links to the previous baseline regressions, now with an added definitions section, as well as a link to two different configurations of Table 01.

  1. Table 1 Configuration 1: This table follows the agreed-upon outline. It displays the adjusted R^2 from regressions of individual variables and combined models on the final release outcome. The models are:
    • Single-variable models:
      • Demographics on release outcome
      • Arrest charge features on release outcome
      • XGBoost predicted risk on release outcome
      • MTurk labels on release outcome
      • p_hat_cnn on release outcome
    • Multiple-variable models:
      • Demographics + Charge Features in a combined model
      • Demographics + Charge Features + predicted risk
      • Demographics + Charge Features + Risk + MTurk features
      • Demographics + Charge Features + Risk + p_hat_cnn
      • Demographics + Charge Features + Risk + p_hat_cnn + MTurk features
  2. Table 1 Configuration 2: This table differs slightly in that it displays the individual models incrementally, with p_hat_cnn added at each step; the columns then include the two other major predictors, predicted risk and charge characteristics. Bootstrapped 95% confidence intervals are included (though arguably not in the best format); a short sketch of the bootstrap follows below.
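
For reference, here is a minimal sketch of how such a percentile-bootstrap CI for an adjusted R^2 could be computed. The actual tables live in the R markdown linked below; this Python sketch is illustrative only, and the formula and all column names are placeholders rather than our real variable names.

```python
# Sketch: percentile-bootstrap 95% CI for the adjusted R^2 of one Table 01 column.
# `df` is a placeholder data frame with one row per arrestee; all column names
# (released, age, gender, charge_severity, risk_pred_prob, p_hat_cnn) are illustrative.
import numpy as np
import statsmodels.formula.api as smf

def boot_adj_r2_ci(df, formula, n_boot=1000, seed=0):
    """Point estimate and percentile-bootstrap 95% CI for adjusted R^2."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(df), size=len(df))  # resample rows with replacement
        stats.append(smf.ols(formula, data=df.iloc[idx]).fit().rsquared_adj)
    point = smf.ols(formula, data=df).fit().rsquared_adj
    lo, hi = np.percentile(stats, [2.5, 97.5])
    return point, (lo, hi)

# e.g. the "Demographics + Charge Features + Risk + p_hat_cnn" column:
# boot_adj_r2_ci(df, "released ~ age + gender + charge_severity + risk_pred_prob + p_hat_cnn")
```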

Link:

  1. Baseline Regression Summary: https://rpubs.com/JonasKnecht/regression_summary

  2. Robustness Summary: https://rpubs.com/JonasKnecht/robustness_summary

  3. Table 01 configurations: https://rpubs.com/JonasKnecht/table01

Next Steps

  1. Jim’s CNN predictions
  • Since Jim is finishing his CNN this week, I will re-run our regressions with his predictions
  • His work produced a much wider spread in p_hat_cnn which will be good to compare to the results above
  • I’m especially interested in seeing how his distribution compares to ours.
  2. Election CNN regressions & MTurk labels
  • Once the new MTurk labels for the elections data are ready I will re-run all our primary regressions for the election CNN


Date: Tuesday, January 19, 2020

Topic: SUMMARY of baseline regressions and robustness checks

Overview: An overview of what the markdown contains

With these two markdowns I have split the output for our baseline regressions and our robustness analysis. This should hopefully make it easier to piece together what is important. The important headings are also in red. Here is an outline of their contents:

  1. Link 1 - Regression Summary
    • Baseline regression with the standard model
    • Split by gender
    • Baseline regression with the updated model (i.e. the new sampler)
    • Distribution plots for the new model
    • Split by gender
  2. Link 2 - Robustness Summary
    • MTurk label regressions
      • Combined and gender split
      • Split by race (B vs. nB)
    • Non-linearity in p-hat-cnn
      • Decile plots
      • Regression with the average decile values
      • Direct coding of the deciles
      • Higher order terms
    • Bivariate skin-tone regressions
      • Combined male and female regression (MTurk skin-tone + categories)
      • Gender split (note the significant effect only for females)
      • Coefficient plots
    • Skin-tone sanity checks
      • Plotting skin tone and race labels
      • Skin tone and arrest rate
      • Skin tone and other MTurk labels

Link:

  1. Baseline Regression Summary: https://rpubs.com/JonasKnecht/regression_summary

  2. Robustness Summary: https://rpubs.com/JonasKnecht/robustness_summary

Next Steps

  1. Jim’s CNN predictions
  • Since Jim is finishing his CNN this week, I will re-run our regressions with his predictions
  • His work produced a much wider spread in p_hat_cnn which will be good to compare to the results above
  • I’m especially interested in seeing how his distribution compares to ours.
  2. Election CNN regressions & MTurk labels
  • Once the new MTurk labels for the elections data are ready I will re-run all our primary regressions for the election CNN


Date: Friday, January 15, 2020

Topic: NEW CNN with a minority-class-sampler in the DataLoader

Overview: An overview of what the markdown contains

  1. Prediction Distribution & Deciles: To check whether the new data-loader increases the spread of predictions from the CNN, I plot the new p_hat_cnn against the old model's predictions. Since increased spread should help with the linearity in the decile plots, I present a comparison of those as well.
  • Histograms of p_hat_cnn overlaying the new and old models
  • Decile plots for the new and old model of p_hat_cnn vs. both the decile index and the respective mean-decile-value (a plotting sketch follows this list)
  2. Baseline Regressions: I repeat all our baseline regressions with the new CNN data to make sure our coefficient estimates haven't changed much. Here I present:
  • Our baseline regression of all covariates + p_hat_cnn on the release outcome
  • Split by gender
  3. Non-Linearity Check: Given the changes in the decile plots, I repeat our primary non-linearity regressions.
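
As a reference for how these comparison plots can be produced, here is a minimal Python sketch (the actual figures are in the linked R markdown). `old_df`/`new_df` and the column names `p_hat_cnn`/`released` are placeholders; the version plotted against the mean decile value is analogous.

```python
# Sketch: overlaid histograms of old vs. new p_hat_cnn, plus decile plots of the
# mean release rate against the decile index. Data frames and column names are placeholders.
import pandas as pd
import matplotlib.pyplot as plt

def decile_release_rate(df):
    """Mean release rate within each decile of p_hat_cnn (deciles numbered 1-10)."""
    decile = pd.qcut(df["p_hat_cnn"], 10, labels=False) + 1
    return df.groupby(decile)["released"].mean()

fig, (ax_hist, ax_dec) = plt.subplots(1, 2, figsize=(10, 4))
ax_hist.hist(old_df["p_hat_cnn"], bins=40, alpha=0.5, label="old model")
ax_hist.hist(new_df["p_hat_cnn"], bins=40, alpha=0.5, label="new model (minority sampler)")
ax_hist.set_xlabel("p_hat_cnn"); ax_hist.legend()

for name, frame in [("old model", old_df), ("new model", new_df)]:
    rate = decile_release_rate(frame)
    ax_dec.plot(rate.index, rate.values, marker="o", label=name)
ax_dec.set_xlabel("p_hat_cnn decile"); ax_dec.set_ylabel("mean release rate"); ax_dec.legend()
plt.tight_layout()
plt.show()
```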

Link:

  1. Minority Sampler CNN Regressions: https://rpubs.com/JonasKnecht/minority_sampler_regressions

Summary

  1. Prediction Distribution & Deciles: Here we can clearly see a massive improvement over the older model, with much more weight being given to the LHS of the original p_hat_cnn distribution. This indicates that the new minority_class_sampler function works well. Notice that this new CNN was trained with the scaling parameter (which I coded to be the target ratio of majority:minority) set to 2, down from the original 3.6 observed in the raw data (i.e. for every person actually jailed, there are approx. 3.6 not). The distribution plot is the most striking evidence that the new data-loader worked as intended (Eureka!). The decile plots tell a similar story, in that the new model is strikingly linear and, as Decile Plot A - NEW MODEL indicates, the gap between successive deciles is much reduced. A sketch of the oversampling idea follows below.
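
The oversampling idea behind the new data-loader can be sketched with PyTorch's WeightedRandomSampler; `target_ratio` below plays the role of the scaling parameter described above. This is an illustrative sketch under those assumptions, not the actual minority_class_sampler code, and all names are placeholders.

```python
# Sketch of minority-class oversampling via torch's WeightedRandomSampler.
# target_ratio is the desired majority:minority ratio per epoch (2 above, vs. roughly
# 3.6 in the raw data). Illustration only, not the project's actual sampler code.
import numpy as np
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def make_balanced_train_loader(train_dataset, labels, target_ratio=2.0, batch_size=64):
    labels = np.asarray(labels)            # 1 = jailed (minority class), 0 = released
    n_min = int((labels == 1).sum())
    n_maj = int((labels == 0).sum())
    # Weight majority rows so that, in expectation, target_ratio majority examples
    # are drawn for every minority example.
    w_maj = target_ratio * n_min / n_maj
    weights = np.where(labels == 1, 1.0, w_maj)
    sampler = WeightedRandomSampler(torch.as_tensor(weights, dtype=torch.double),
                                    num_samples=len(labels), replacement=True)
    # Only the training loader is re-sampled; the validation loader stays unchanged.
    return DataLoader(train_dataset, batch_size=batch_size, sampler=sampler)
```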

  2. Baseline Regressions: Repeating our baseline regressions with these new CNN predictions doesn't change the significance of our effect. We do, however, see a decrease in the coefficient on p_hat_cnn from 0.403 (in our last baseline) to 0.380. Interestingly, the gap in adjusted R-squared actually increases in the new model, from 0.112 to 0.123. The same holds for the female/male split, with the female coefficient still greater than the male coefficient; both are reduced relative to our last baseline. All in all, with the adjustment for the spread of p_hat_cnn we seem to have reduced our model coefficients across the board (given that the linear model is now a much better fit, this is not surprising). Note: this new data, and the reduction in regression coefficients, is entirely in line with what we observed in the control for non-linearity!

  3. Non-linearity check: Repeating the non-linearity regressions, we notice a much smaller coefficient reduction than with our previous CNN data. This links back to the summary above.

Next Steps

  1. Jim’s CNN predictions
  • Since Jim is finishing his CNN this week, I will re-run our regressions with his predictions
  • His work produced a much wider spread in p_hat_cnn which will be good to compare to the results above
  • I’m especially interested in seeing how his distribution compares to ours.
  2. Election CNN regressions & MTurk labels
  • Once the new MTurk labels for the elections data are ready I will re-run all our primary regressions for the election CNN


Date: Wednesday, January 13, 2020

Topic: Repeating primary regressions with increased MTurk detail

Overview: An overview of what the markdown contains

  1. After collecting at least 6 labels for each of our MTurk features, we re-run our baseline regression on release outcome to see whether we obtain more significant results.
    • Complete male+female sample with the inclusion of 18 skin-tones, age, MTurk labels, p_hat_cov, and p_hat_cnn.
    • Split by gender
  2. Along the same logic, we repeat the MTurk label regressions to see whether effects are more significant (see the sketch after this list)
    • We run each label individually and combined
    • Split by gender
    • Split by race
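
A minimal sketch of how these label regressions can be set up (the real output is in the linked R markdown; `df` and all column names, including `race_black`, are placeholders):

```python
# Sketch: each MTurk label regressed on the release outcome individually and in a
# combined model, for the full sample and split by gender and by race. Names are placeholders.
import statsmodels.formula.api as smf

LABELS = ["attractiveness", "competence", "dominance", "trustworthiness"]

def mturk_label_fits(data):
    fits = {lab: smf.ols(f"released ~ {lab}", data=data).fit() for lab in LABELS}
    fits["combined"] = smf.ols("released ~ " + " + ".join(LABELS), data=data).fit()
    return fits

full_sample = mturk_label_fits(df)
by_gender = {g: mturk_label_fits(sub) for g, sub in df.groupby("gender")}
by_race = {r: mturk_label_fits(sub) for r, sub in df.groupby("race_black")}
```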

Link:

  1. Bivariate MTurk Skin-tone Labels: https://rpubs.com/JonasKnecht/mturk_label_detail

Summary

  1. We notice no change in the overall significance of our baseline regression. The 4 MTurk labels are still all insignificant. The coefficient on p_hat_cnn decreases from 0.415 to 0.403 and remains highly significant. Splitting by gender reveals that, for the female subsample, the dominance label becomes significant at the 5% and 10% levels. The male subsample is entirely insignificant.

  2. The MTurk label regressions are fully insignificant on the entire male+female sample; neither the individual labels nor the combined labels are significant. On the female subsample we see that all 4 labels are significant. (This is pretty big news!) All four labels display positive coefficients; attractiveness is significant at the 1% level, dominance and trustworthiness at the 5% level, and competence at the 10% level. The male subsample has no significant coefficients. Splitting by race shows no significant coefficients for the black subsample. For non-black individuals, trustworthiness is marginally significant at the 10% level.

Next Steps

  1. Focus on the new sampling data-loader for the CNN
  • Since we have an imbalanced dataset with about 3.67 times as many people in the released category, we want to test a new CNN dataloader
  • With a new sampling function for the training data we hope to balance the training process by oversampling from our minority class
  2. Jim’s CNN predictions
  • Since Jim is finishing his CNN this week, I will re-run our regressions with his predictions
  • His work produced a much wider spread in p_hat_cnn, which may be a good indication of what we can achieve with a re-written dataloader


Date: Tuesday, January 12, 2020

Topic: Regression of MTurk skin-tone and race labels

Overview: An overview of what the markdown contains

  1. A series of regressions of the collected skin-tone labels on our final arrest outcome. We want to investigate the relationship between race and release, and make sure we are not missing potentially worrying discrepancies.

We run the following regressions on the final arrest outcome:

- Individual skin-tone effects 
- Skin-tone divided into three categories (dark, medium, light); a sketch of this specification follows the list below
- MTurk race label 

Each of these includes:

- Control for gender (male/female split)
- Coefficient plots for each regression
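
As mentioned above, here is a minimal sketch of the three-category specification with a gender split (the cut points and all column names, e.g. a skin_tone score on an 18-point scale, arrested, female, are placeholders; the actual regressions are in the linked R markdown):

```python
# Sketch: collapse the MTurk skin-tone score into dark / medium / light and regress
# on the final arrest outcome, with a gender split. Cut points and names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df["skin_tone_cat"] = pd.cut(df["skin_tone"], bins=[0, 6, 12, 18],
                             labels=["dark", "medium", "light"])

formula = "arrested ~ C(skin_tone_cat, Treatment('dark'))"  # 'dark' as reference category
combined = smf.ols(formula, data=df).fit()
females = smf.ols(formula, data=df[df["female"] == 1]).fit()
males = smf.ols(formula, data=df[df["female"] == 0]).fit()
print(females.summary())
```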

Link:

  1. Bivariate MTurk Skin-tone Labels: https://rpubs.com/JonasKnecht/skin-tone-regressions

Summary

  1. We find no significant skin-tone effects for the combined sample or for the male/female subsamples
  2. Combining skin-tone into three categories yields a significant negative effect for white females
    • This is the only significant effect among these regressions
    • We observe a -0.058 change in the arrest outcome for light females relative to dark females
    • This is significant at the 1% level
  3. Regressing on the race label yields no significant effect
  4. There is no noticeable pattern in the coefficient plots for the skin-tone regressions

Next Steps

  1. Focus on the new sampling data-loader for the CNN
  • Since we have an imbalanced dataset with about 3.67 times as many people in the released category, we want to test a new CNN dataloader
  • With a new sampling function for the training data we hope to balance the training process by oversampling from our minority class
  2. MTurk label regression with more detail
  • We are actively collecting the missing label detail with further MTurk surveys and will re-run all our primary regressions
  • The hope is that we are able to pick up significant effects for labels such as attractiveness, which the literature suggests should play a role
  3. Jim’s CNN predictions
  • Since Jim is finishing his CNN this week, I will re-run our regressions with his predictions
  • His work produced a much wider spread in p_hat_cnn, which may be a good indication of what we can achieve with a re-written dataloader


Date: Thursday, January 7, 2020

Topic: Bivariate Regression of MTurk Features and Non-Linearity in p_hat_cnn

Overview: An overview of what the markdown contains

  1. A set of bivariate regressions checking that our MTurk features correlate with arrest-outcome as suggested by the literature
  • Combined male+female regression of attractiveness, competence, dominance, and trustworthiness
    • Each label is regressed individually and then in a combined model
  • Subsample by gender
  • Subsample by ethnicity (black vs. non-black)
  • Subsample with higher-detail labels (the well-labeled subset)
  • (In Appendix) Subsample of females with race control (as this is the only significant one)
  • (In Appendix) Sanity check for skin-tone relationship to race label
  2. Checking non-linearity in p_hat_cnn (a sketch of these specifications follows this list)
  • Fixing coding error with p_hat_cnn_decile_average
  • Raw coding integers 1-10 for decile
  • Including higher order terms
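
As flagged in the list above, here is a minimal sketch of the three non-linearity specifications (base_rhs stands in for our other baseline covariates; all names are placeholders, and the real regressions are in the linked markdown):

```python
# Sketch: three non-linearity checks for p_hat_cnn. `base_rhs` is a stand-in for the
# remaining baseline covariates; all column names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

base_rhs = "age + gender + charge_severity + risk_pred_prob"

df["decile"] = pd.qcut(df["p_hat_cnn"], 10, labels=False) + 1           # deciles 1..10
df["decile_avg"] = df.groupby("decile")["p_hat_cnn"].transform("mean")  # per-decile mean

m_avg = smf.ols(f"released ~ {base_rhs} + decile_avg", data=df).fit()   # decile-average coding
m_raw = smf.ols(f"released ~ {base_rhs} + decile", data=df).fit()       # raw 1-10 coding
m_poly = smf.ols(f"released ~ {base_rhs} + p_hat_cnn + I(p_hat_cnn**2) + I(p_hat_cnn**3)",
                 data=df).fit()                                         # higher-order terms
```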

Link: Links to two separate markdowns for

  1. Bivariate MTurk Labels: https://rpubs.com/JonasKnecht/bivariate_mturk_regressions

  2. Non-linearity in p_hat_cnn: https://rpubs.com/JonasKnecht/non_linearity_updated

Note: I kept these separate because one is a simple sanity check while the other has potentially more next steps, and I want to cut down on the length of these output files to make them a little more comprehensible.

Summary

  1. Bivariate Regressions (See Link 1):

Combined male+female regressions still yield no significant results for any individual MTurk label or for the combined model on release. The same holds for the male subset. For females we see that attractiveness, dominance, and trustworthiness are significant on their own, but not when combined. This is in line with what we observed in the previous log. All effects are insignificant for the black vs. non-black subsets. However, controlling for race, the female-subset effects become more significant, with similar coefficient magnitudes.

Repeating these regressions for the well-labeled subset yields no significant effects for females. This may well be due to the lack of data, as we are restricting to 101 observations for the female subset. However, as this had been significant in the previous log, with the inclusion of all other regression covariates, I am a little puzzled by this! The combined male+female model sees significant effects for competence and trustworthiness on the well-labeled subset, and we notice the same significance for the black subset on the well-labeled set.

All in all, these regressions are a bit puzzling, but they seem to suggest that there is some signal stemming from these labels.

  2. Skin-tone and race sanity check (See Link 1):

To make sure our skin-tone labels make sense I plot their respective relationship to our race indicator. All seems to be in order here with darker skin tones receiving higher proportions of the black label.

  3. Non-linearity (See Link 2):

The main point is that I found my mistake: re-running the decile-average regression now yields significant results, with a coefficient similar to the one seen previously. I also ran a regression with higher-order terms, none of which appear significant.

Next Steps

  1. Re-configuring the CNN data-loader to adjust for the imbalance in our sample.
    • Need to adjust for the imbalance in our data as most people are released
    • Write a new sampler for the training data and ensure the validation data is unchanged
  2. Increase MTurk label collection with a larger number of workers per image
    • As we see some signal coming from the labels we want more detail on them
    • Increase the number of workers from 3 to 6 per image
  3. Re-run the baseline regressions with higher detail MTurk labels


Date: Tuesday, January 5, 2020

Topic: Regression Sanity Checks

Overview: An overview of what the markdown contains

This analysis focused on extending the primary regression of risk, skin_tone, age, MTurk labels, covariates and p_hat_cnn on release outcome. Below is an overview of the different elements:

  1. Per-Decile regressions:
  • Running our primary regression on subsamples of the risk_pred_prob distribution
  • Rows are grouped into deciles 1-3, 4-6, and 7-10
  2. Investigating non-linearity in p_hat_cnn:
  • Replacing p_hat_cnn by the per-decile mean
  • Replacing p_hat_cnn by the per-decile mean and collapsing deciles 1-3
  • Replacing p_hat_cnn by a decile indicator (1-10)
  3. Repeating main regression on a subset of detailed MTurk labels:
  • Restricting our data to those with more than 3 MTurk labelers
  • Between 6 and 9 workers per image, leaving 499 observations
  • Split further between male (N = 348) and female (N = 101)
  4. Investigating the relationship between skin-tone, MTurk features, and arrest rate
  • We plot (see the plotting sketch after this list):
    • Mean arrest rate vs. skin-tone
    • Mean attractiveness vs. skin-tone
    • Mean competence vs. skin-tone
    • Mean dominance vs. skin-tone
    • Mean trustworthiness vs. skin-tone
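
A minimal plotting sketch for item 4 (column names are placeholders; the actual plots are in the linked markdown):

```python
# Sketch: mean arrest rate and mean MTurk label values by skin-tone score.
# `df` and all column names are placeholders for illustration.
import matplotlib.pyplot as plt

cols = ["arrested", "attractiveness", "competence", "dominance", "trustworthiness"]
means = df.groupby("skin_tone")[cols].mean()

fig, axes = plt.subplots(1, len(cols), figsize=(4 * len(cols), 3), sharex=True)
for ax, col in zip(axes, cols):
    ax.plot(means.index, means[col], marker="o")
    ax.set_title(f"mean {col} vs. skin tone")
    ax.set_xlabel("skin tone")
plt.tight_layout()
plt.show()
```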

Link: https://rpubs.com/JonasKnecht/updated_regressions

Summary: A summary of the conclusions drawn from the analysis

  1. Per-Decile Regressions

We primarily note that p_hat_cnn remains significant at all levels. Interestingly, the magnitude of the coefficient increases with the decile of risk_pred_prob we control for. This would indicate that, at higher underlying risk as indicated by our xg_boost model, the CNN is able to account for more variance in the judge's decision. The coefficient on p_hat_cnn increases from 0.317 at deciles 1-3, to 0.367 at deciles 4-6, to 0.514 at deciles 7-10. We also note similar increases in explanatory power when looking at the relative changes in adjusted R-squared.
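
A minimal sketch of this per-decile subsampling (the formula and column names are placeholders for our primary specification; the actual output is in the linked markdown):

```python
# Sketch: run the primary regression on subsamples defined by deciles of risk_pred_prob
# (1-3, 4-6, 7-10). Formula and column names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

formula = ("released ~ p_hat_cnn + risk_pred_prob + skin_tone + age + "
           "attractiveness + competence + dominance + trustworthiness")
df["risk_decile"] = pd.qcut(df["risk_pred_prob"], 10, labels=False) + 1

subsets = {"deciles 1-3": df[df["risk_decile"] <= 3],
           "deciles 4-6": df[df["risk_decile"].between(4, 6)],
           "deciles 7-10": df[df["risk_decile"] >= 7]}

for name, sub in subsets.items():
    fit = smf.ols(formula, data=sub).fit()
    print(name, round(fit.params["p_hat_cnn"], 3), round(fit.rsquared_adj, 3))
```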

  2. Non-linearity in p_hat_cnn

Here we were prompted to account for the non-linearity in p_hat_cnn indicated by the decile plots (top of the markdown). Including p_hat_decile_average suggests no significant effect, which is likely an error in the code and will be looked at further. Similarly, collapsing the bottom three deciles yields no results, and the inclusion of the simple 1-10 decile values also remains insignificant. Looking into this further is going to be a next step!

  3. Repeating regressions with more MTurk detail:

Repeating our main regression on a subset of individuals with higher-accuracy MTurk labels (i.e. those with more workers-per-image) yields no significant results for any of our labels on a combined male + female data-set. The labels in question are: attractiveness, competence, dominance, and trustworthiness. These regressions include our MTurk labels together with our other arrest covariates, skin-tone, and p_hat_cnn. We also notice that p_hat_cnn is now insignificant, which is most likely due to the reduction in data.

Controlling for gender yields no significant results for males. For the female population, which includes only 101 observations, we notice that attractiveness and trustworthiness are now significant at the 5% and 10% levels respectively. This was intended as a proof of concept on whether it would be worth increasing the number of labelers per image, i.e. whether we are likely to see any significant signal coming from these features. The female subset suggests that this is indeed the case, and we will go forward with future MTurk surveys, increasing the number of workers per image from 3 to 6.

  4. Skin-tone plots:

Skin tone seems to have no direct relation to arrest rate. Attractiveness, trustworthiness, and competence are highly positively correlated with skin tone, with lighter skin tones receiving higher scores on all three labels. Dominance is highly negatively correlated with skin tone, with lighter skin receiving lower scores. This is in line with other findings in the literature.

Next Steps: A summary of what we are investigating next

  1. Checking the relationship between our MTurk labels and release rate. We want this to conform to the literature which indicates that attractiveness should be significant.
  2. Set of bivariate regressions of the MTurk labels on release
  3. Include controls for race and gender
  4. Look again at the non-linearity in p_hat_cnn and make sure there are no coding mistakes
  5. Re-configuring the CNN data-loader to adjust for the imbalance in our sample.