Study Variables


Prediction Adjustment (\(\Delta\)Prediction)

Before we release the grades for each exam, we ask students to report their expected grade at two separate time points: immediately after completing the exam, and right before we release grades (usually 3-5 days later). Using these two predictions for each exam, we can check whether students stick with their initial prediction or make “adjustments” in the interval between completing the exam and receiving the grade. We believe that prediction adjustment may be a useful proxy for a student’s uncertainty in their prediction.
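
As a minimal sketch of how \(\Delta\)Prediction could be computed from the two reports, assuming a long-format table with one row per student per exam (the column names below, such as prediction_t1 and prediction_t2, are hypothetical, not the study’s actual variable names):

```python
import pandas as pd

# Hypothetical long-format data: one row per student per exam, with the grade
# prediction collected at each of the two time points.
exams = pd.DataFrame({
    "student_id":    [1, 1, 2, 2],
    "exam":          [1, 2, 1, 2],
    "prediction_t1": [85, 78, 60, 55],  # reported immediately after the exam
    "prediction_t2": [80, 78, 70, 50],  # reported just before grades are released
})

# Prediction adjustment: the change between the two predictions for the same exam.
exams["delta_prediction"] = exams["prediction_t2"] - exams["prediction_t1"]

# Its absolute value serves as a rough proxy for uncertainty in the prediction.
exams["abs_adjustment"] = exams["delta_prediction"].abs()
```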

Below, the absolute values of the prediction adjustment scores are plotted across each exam in the semester. To see how prediction adjustment (or uncertainty) differs with a student’s performance, the data are split into groups of students whose average grades fell below 60%, between 60 and 80%, and above 80%.
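
One way the performance groups in these figures could be formed is to bin each student’s average exam grade into those three ranges and then summarize the mean absolute adjustment per group and exam; the sketch below assumes hypothetical exam_grade and abs_adjustment columns like those above.

```python
import pandas as pd

# Hypothetical per-student, per-exam data with actual grades and the absolute
# prediction adjustment computed as in the previous sketch.
df = pd.DataFrame({
    "student_id":     [1, 1, 2, 2, 3, 3],
    "exam":           [1, 2, 1, 2, 1, 2],
    "exam_grade":     [55, 58, 72, 75, 88, 91],
    "abs_adjustment": [10, 12, 6, 5, 3, 2],
})

# Bin each student's average exam grade into the three performance groups.
avg_grade = df.groupby("student_id")["exam_grade"].transform("mean")
df["grade_group"] = pd.cut(avg_grade, bins=[0, 60, 80, 100],
                           labels=["<60%", "60-80%", ">80%"])

# Mean absolute adjustment per group and exam (the quantity shown in the figure).
print(df.groupby(["grade_group", "exam"], observed=True)["abs_adjustment"]
        .mean().unstack())
```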

Here, it is clear that participants who score lower on exams tend to make larger prediction adjustments, and thus may be less certain in their grade predictions.


Prediction Confidence

Along with each exam grade prediction, we also ask participants to report their confidence in that prediction (i.e., “How confident are you that you will receive your expected grade?”).

From the figures below, we can see some interesting patterns of variability in prediction confidence. In particular, students who perform worse on exams report the highest confidence in their predictions at the start of the semester (relative to students who receive higher exam grades). Conversely, participants who perform best on exams start the semester with relatively low confidence in their predictions. These two groups oscillate in their prediction confidence over the course of the semester. Ultimately, students who perform best on exams finish the semester with the highest relative confidence in their exam grade predictions.


Prediction Error

Here, signed prediction errors (actual exam grade minus expected exam grade; see the next section for the full definition) are plotted over each exam, rather than the unsigned errors used to measure accuracy. From this, we can clearly see that students who perform more poorly on exams receive more negative prediction errors throughout the semester, while the best-performing participants tend to receive positive prediction errors.


Prediction Accuracy (Unsigned Prediction Error)

In addition to measuring confidence in predictions, we assess the accuracy of students’ grade expectations by computing prediction error (actual exam grade - expected exam grade). Prediction errors are positive if a student scores higher than they expected and negative if they score lower than expected. The absolute value of prediction error (i.e., unsigned prediction error) represents the overall accuracy of a student’s exam grade predictions. Lower unsigned prediction errors represent greater accuracy in grade expectations.
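
A minimal sketch of these two quantities, again using hypothetical column names rather than the study’s actual variables:

```python
import pandas as pd

# Hypothetical data: each student's actual and predicted grade for one exam.
df = pd.DataFrame({
    "student_id":      [1, 2, 3],
    "exam_grade":      [55, 75, 92],
    "predicted_grade": [70, 72, 90],
})

# Signed prediction error: positive when a student scores higher than expected,
# negative when they score lower than expected.
df["prediction_error"] = df["exam_grade"] - df["predicted_grade"]

# Unsigned prediction error: overall prediction accuracy (lower = more accurate).
df["unsigned_error"] = df["prediction_error"].abs()
```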

Below, unsigned prediction errors are plotted over each exam in the semester. Interestingly, students who perform more poorly on exams make less accurate grade predictions throughout the semester than students who perform well. However, at the second exam, prediction accuracy does not differ between high-scoring and low-scoring students. This suggests that, after the first exam, students in both groups update their grade expectations and predict their second-exam grades accurately. Over the remaining exams, high-scoring students continue to fine-tune their predictions, further improving their accuracy. Low-scoring students show the opposite trend, with prediction accuracy worsening notably at the third exam.

This might suggest that low-scoring students update their exam grade expectations differently from high-scoring participants, particularly after making an accurate prediction at the second exam.