Thoughts for later

Analyses of just CLIP results

Of highest likelihood option

How often is the highest likelihood label the correct one?

In the 25-40% range; there are intriguing potential patterns, but they could just be noise.
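
A minimal sketch of this check, assuming a pandas DataFrame clip_df with one row per utterance, a target column giving the true tangram, and one CLIP softmax-probability column per candidate (listed in TANGRAMS). All of these names are assumptions about the data layout, not the actual pipeline.

    import pandas as pd

    # Assumed layout: one row per utterance, `target` = true tangram,
    # plus one softmax-probability column per candidate tangram.
    TANGRAMS = ["ice skater"]  # placeholder: extend to the full candidate set

    def top1_accuracy(clip_df: pd.DataFrame) -> float:
        top_choice = clip_df[TANGRAMS].idxmax(axis=1)  # highest-likelihood label
        return (top_choice == clip_df["target"]).mean()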

Split by tangram. We know that tangrams vary in codeability.

Tangrams vary widely in model performance; also vary in what round the model is best at.
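
Continuing with the same assumed clip_df / TANGRAMS layout, a sketch of the per-tangram (and per-round) split; the round column name is also an assumption.

    # Per-tangram and per-(tangram, round) top-1 accuracy.
    def accuracy_by(clip_df: pd.DataFrame, keys: list[str]) -> pd.Series:
        correct = clip_df[TANGRAMS].idxmax(axis=1) == clip_df["target"]
        return correct.groupby([clip_df[k] for k in keys]).mean()

    by_tangram = accuracy_by(clip_df, ["target"])
    by_tangram_round = accuracy_by(clip_df, ["target", "round"]).unstack("round")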

By probability assigned

An alternative is to look at how much probability the correct answer received.

This mostly tracks the above, which makes sense.
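
A sketch of the probability-mass version, under the same assumed layout: pull out the probability CLIP assigned to the true tangram for each utterance, then average however we like.

    import numpy as np

    # Probability CLIP assigns to the true tangram, per utterance.
    def prob_of_correct(clip_df: pd.DataFrame) -> np.ndarray:
        probs = clip_df[TANGRAMS].to_numpy()
        target_idx = clip_df["target"].map({t: i for i, t in enumerate(TANGRAMS)})
        return probs[np.arange(len(clip_df)), target_idx.to_numpy()]

    clip_df["p_correct"] = prob_of_correct(clip_df)
    clip_df.groupby(["target", "round"])["p_correct"].mean()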

Confusion matrices

Of top option.

Of probability mass.
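
Both matrices can be built from the same assumed layout: one from the top-1 choices, one from the average probability mass.

    # (1) Confusion over top-1 choices, row-normalized:
    #     rows = true tangram, columns = tangram the model picked.
    def confusion_top1(clip_df: pd.DataFrame) -> pd.DataFrame:
        top_choice = clip_df[TANGRAMS].idxmax(axis=1)
        return pd.crosstab(clip_df["target"], top_choice, normalize="index")

    # (2) "Soft" confusion: mean probability mass assigned to each candidate,
    #     again grouped by the true tangram.
    def confusion_mass(clip_df: pd.DataFrame) -> pd.DataFrame:
        return clip_df.groupby("target")[TANGRAMS].mean()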

Compare with tg-matcher results

Basically, we want to know how the model qualitatively compares to humans, i.e. whether the model and humans align on which tangrams are harder and which are easier.

Could look at this various ways, but the cleanest comparison is that we have naive human guessing data.

Each point is one of the 12 conditions (round 1/6 x 2/6 person x rotate/thin/thick).
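
A sketch of that comparison, assuming the naive human guessing results live in a table human_df with the condition columns (round, n_players, style) and a human_acc column; those names are guesses at the layout, not the actual tg-matcher files.

    from scipy.stats import spearmanr

    # One point per condition: model top-1 accuracy vs. naive human guessing accuracy.
    cond_keys = ["round", "n_players", "style"]  # assumed condition columns
    correct = clip_df[TANGRAMS].idxmax(axis=1) == clip_df["target"]
    model_acc = (correct.groupby([clip_df[k] for k in cond_keys])
                        .mean().rename("model_acc").reset_index())
    merged = human_df.merge(model_acc, on=cond_keys)
    print(spearmanr(merged["model_acc"], merged["human_acc"]))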

The model is very bad at ice skater?

This comparison is slightly unfair in some ways, since the model and the humans may be seeing different subsets of the descriptions.

The model sees descriptions on a per-utterance basis, while humans see them on a per-transcript basis. In the future, it may make sense to show the model something closer to what people see, if comparison is what we care about; one way to do that is sketched below.
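
A sketch, assuming gameid / utt_index / utterance columns and treating score_with_clip as a stand-in for the real scoring call: concatenate all utterances for a (game, round, target) into a single description, so the model input looks more like the transcript humans see.

    # Concatenate all utterances for a (gameid, round, target) into one description.
    transcript_df = (
        clip_df.sort_values("utt_index")
               .groupby(["gameid", "round", "target"], as_index=False)
               .agg(text=("utterance", " ".join))
    )
    # transcript_df["probs"] = transcript_df["text"].map(score_with_clip)  # stand-in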

Taking only the first utterance

Assume the first utterance is the most contentful, and that later ones are more likely to address questions or add details.
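
A sketch of the filter, assuming an utt_index column giving each utterance's position within its transcript.

    # Keep only the first utterance of each (gameid, round, target) transcript.
    first_utts = (clip_df.sort_values("utt_index")
                         .drop_duplicates(["gameid", "round", "target"], keep="first"))
    print(top1_accuracy(first_utts))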

Taking only singleton utterances

This leaves less data, especially in some conditions, but is the most comparable.
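
The corresponding filter for singleton transcripts, with the same assumed columns.

    # Keep only transcripts that consist of a single utterance.
    n_utts = clip_df.groupby(["gameid", "round", "target"])["utterance"].transform("size")
    singletons = clip_df[n_utts == 1]
    print(top1_accuracy(singletons))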

Comparisons with mpt accuracies

The same caveats as in the previous comparison with people apply. The model’s error pattern does not seem particularly correlated with the human error pattern.
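
One quick way to quantify the correlation (or lack of it), reusing the pieces defined above and a hypothetical Series human_acc_by_tangram of human accuracies indexed by tangram.

    # Per-tangram error-pattern comparison: model vs. human accuracy.
    model_acc_by_tangram = ((clip_df[TANGRAMS].idxmax(axis=1) == clip_df["target"])
                            .groupby(clip_df["target"]).mean())
    aligned = pd.concat([model_acc_by_tangram.rename("model"),
                         human_acc_by_tangram.rename("human")], axis=1).dropna()
    print(spearmanr(aligned["model"], aligned["human"]))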

Could consider doing within-tangram, utterance-by-utterance analyses, or something along those lines?

Comparison with kilogram naming divergence
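
If we go this route, one hedged sketch: merge a per-tangram naming-divergence score (hypothetical Series naming_divergence indexed by tangram) with the per-tangram model accuracy from above and correlate.

    # Hypothetical: correlate per-tangram model accuracy with naming divergence.
    aligned = pd.concat([model_acc_by_tangram.rename("model_acc"),
                         naming_divergence.rename("divergence")], axis=1).dropna()
    print(spearmanr(aligned["model_acc"], aligned["divergence"]))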