How often is the highest likelihood label the correct one?
Accuracy is in the 25-40% range; there are intriguing potential patterns, but they could be noise.
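A minimal sketch of how this top-1 accuracy could be computed, assuming a hypothetical per-utterance dataframe `df` with a `correct_label` column and a `probs` column holding a dict of candidate-tangram probabilities (column names are placeholders, not the actual pipeline):

```python
# Sketch only: top-1 accuracy of the model's highest-likelihood label.
# Assumes hypothetical columns `probs` (dict of tangram -> probability)
# and `correct_label` (the target tangram).
import pandas as pd

def top1_accuracy(df: pd.DataFrame) -> float:
    top_choice = df["probs"].apply(lambda p: max(p, key=p.get))
    return float((top_choice == df["correct_label"]).mean())
```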
Split by tangram. We know that tangrams vary in codability.
Tangrams vary widely in model performance; they also vary in which round the model performs best.
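A sketch of the per-tangram, per-round split under the same hypothetical dataframe, to see both how accuracy varies across tangrams and where each tangram peaks:

```python
# Sketch only: top-1 accuracy broken out by tangram and round, using the same
# hypothetical `probs` / `correct_label` columns plus `tangram` and `round`.
df["model_correct"] = (
    df["probs"].apply(lambda p: max(p, key=p.get)) == df["correct_label"]
)
by_tangram_round = (
    df.groupby(["tangram", "round"])["model_correct"].mean().unstack("round")
)
best_round = by_tangram_round.idxmax(axis=1)  # round at which each tangram peaks
```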
An alternative is to look at how much probability the correct answer received.
This mostly tracks top-1 accuracy, which makes sense.
[Plots: accuracy of the top option; probability mass on the correct answer.]
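A sketch of the probability-mass version of the metric, with the same hypothetical columns as above:

```python
# Sketch only: mean probability mass the model assigns to the correct tangram.
df["p_correct"] = df.apply(
    lambda row: row["probs"].get(row["correct_label"], 0.0), axis=1
)
print(df["p_correct"].mean())                     # overall
print(df.groupby("tangram")["p_correct"].mean())  # per tangram
```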
Basically, we want to know how the model compares to humans qualitatively, i.e. whether model and humans align on which tangrams are harder or easier.
We could look at this in various ways, but the cleanest comparison is against the naive human guessing data.
Each point is one of the 12 conditions (round 1 vs. 6 × 2- vs. 6-person × rotate/thin/thick).
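A sketch of that comparison, assuming hypothetical per-condition accuracy Series `human_acc` and `model_acc`, each indexed by the 12 (round, group size, transformation) cells:

```python
# Sketch only: scatter of model vs. human accuracy across the 12 condition cells.
# `human_acc` and `model_acc` are hypothetical Series sharing an index of
# (round, group_size, transformation); names are placeholders.
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import pearsonr

joint = pd.concat({"human": human_acc, "model": model_acc}, axis=1).dropna()
r, p = pearsonr(joint["human"], joint["model"])
plt.scatter(joint["human"], joint["model"])
plt.xlabel("human naive-guessing accuracy")
plt.ylabel("model top-1 accuracy")
plt.title(f"r = {r:.2f}, p = {p:.3f}")
plt.show()
```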
The model is very bad at ice skater?
This comparison is somewhat unfair, since the model and the humans may be seeing different subsets of the data.
The model sees descriptions on a per-utterance basis, whereas humans see them per transcript. If comparison is what we care about, it may make sense in the future to show the model something closer to what people see.
Assume the first utterance is the most contentful; later ones may be more about answering questions or adding details.
This subset has less data, especially in some conditions, but is the most comparable.
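A sketch of restricting to first utterances before recomputing the comparison, assuming a hypothetical `utterance_idx` column that is 0 for the first utterance of a transcript:

```python
# Sketch only: first-utterance subset, which is closer to what the
# human guessers effectively see.
first_only = df[df["utterance_idx"] == 0]
print(first_only.groupby("tangram")["model_correct"].mean())
```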
The same caveats as in the previous comparison with people apply. The model's error pattern does not seem particularly correlated with the human error pattern.
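One way to quantify that (lack of) correlation is a rank correlation between per-tangram error rates; a sketch, assuming hypothetical Series `model_acc_by_tangram` and `human_acc_by_tangram` indexed by tangram:

```python
# Sketch only: Spearman correlation between model and human per-tangram error rates.
import pandas as pd
from scipy.stats import spearmanr

aligned = pd.concat(
    {"model": 1 - model_acc_by_tangram, "human": 1 - human_acc_by_tangram}, axis=1
).dropna()
rho, p = spearmanr(aligned["model"], aligned["human"])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```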
We could consider doing within-tangram, utterance-by-utterance analyses?
Shape naming divergence (SND): “A tangram’s SND quantifies the variability among whole-shape annotations. SND is an operationalization of nameability, [...]”
Part naming divergence (PND): “PND is computed identically to SND, but with the concatenation of all part names of an annotation as the input text.”
Part segmentation agreement (PSA): “PSA quantifies the agreement between part segmentations as the maximum number of pieces that does not need to be [...]”
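If the question is whether these KiloGram metrics predict model difficulty, a hedged sketch (assuming a hypothetical per-tangram dataframe `tangram_stats` with columns `snd`, `pnd`, `psa`, and `model_acc`):

```python
# Sketch only: does nameability (SND / PND / PSA) predict per-tangram model accuracy?
from scipy.stats import spearmanr

for metric in ["snd", "pnd", "psa"]:
    rho, p = spearmanr(tangram_stats[metric], tangram_stats["model_acc"])
    print(f"{metric.upper()} vs. model accuracy: rho = {rho:.2f}, p = {p:.3f}")
```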