Introduction

Signal Detection Theory (SDT) is a general framework for describing decisions made under uncertainty.

Examples

  • Assessing a person’s ability to recognize a face as old or new
  • Diagnosing a cancerous tumour on an X-ray
  • Reporting whether a faint light was seen or not seen
  • Saying whether a tone was high or low in pitch

Any situation where the information is ambiguous or insufficient and the decision is binary.

Errors are due to variability in the signal or within the observer.

SDT aims to separate characteristics of the signal from characteristics of the decision maker.

It started in the domain of psychophysics and is now used in many other fields (e.g., economics, medicine).

Signal detection models

are various implementations of SDT that differ in

  • Observer’s task (Yes/No, 2AFC, same/different)
  • How observer accumulates information
  • Assumptions about how signal is detected

Functions of Signal detection models

  1. Data analysis technique
    • provides better measures of performance than accuracy
  2. Interpret results across experimental conditions
    • explains how procedural changes should manifest in performance change
  3. Heuristic of Decision Process (psychological)
    • most controversial, but this aspect does not negate its usefulness in points 1 and 2.

Basic Signal Detection & One interval designs

Decision space represented in one dimension

  1. 2 classes of stimuli & 2 response choices (Yes/No, Chapters 1 & 2)
  • Detection if one class is ‘noise’ or ‘null’ and the other is the stimulus
  • Discrimination if both classes are two different stimuli
  2. 2 classes of stimuli & multiple response choices (e.g., 1 - 6)
  • Rating experiment (Chapter 3)
  3. N classes of stimuli & N response choices
  • identification (Chapter 5)
  4. N classes of stimuli & M response choices, where \(M<N\)
  • classification (Chapter 5)

Multidimensional Detection Theory and Multi-interval designs

Decision space represented in two dimensions

  1. Detection or discrimination of compound stimuli (Chapter 6)
  2. Comparison and classification designs for discrimination (Chapters 7, 8, 9)
  • 2-interval forced-choice (2AFC), reminder
  • same-different, ABX, oddity

Yes/No experiment

2 stimuli & 2 responses: 4 possible outcomes.

Stimulus \ Response    “Yes”          “No”
signal                 hit            miss
noise                  false alarm    correct rejection

Hit rate (sensitivity): \[ \begin{align} H = p_{h} &= P(\text{"Yes"} | \text{ signal presented}) = \frac{\text{n yes}}{\text{n signal trials}} \end{align} \]

Miss rate: \[ \begin{align} M = p_{m} &= P(\text{"No"}| \text{ signal presented}) = \frac{\text{n no}}{\text{n signal trials}} \\ &= 1 - H \end{align} \]

False alarm rate (false positive): \[ \begin{align} F = p_{fa} &= P(\text{"Yes"} | \text{ noise presented}) = \frac{\text{n yes}}{\text{n noise trials}} \end{align} \]

Correct rejection rate (specificity): \[ \begin{align} CR = p_{cr} &= P(\text{"No"}| \text{ noise presented}) = \frac{\text{n no}}{\text{n noise trials}} \\ &= 1 - F \end{align} \]

Correct responses      Errors
hits                   misses
correct rejections     false alarms

Face recognition example

  • Learn set of 25 faces
  • Tested with 25 old and 25 new faces.
  • Responses: Was this face shown before? “Yes” or “No”
## [1] "Group 1"
##     Yes No Total
## Old  20  5    25
## New  10 15    25
## [1] "Group 2"
##     Yes No Total
## Old   8 17    25
## New   1 24    25

How can we compare sensitivity or performance in the two groups?

##              Group1 Group2
## hits            0.8   0.32
## false alarms    0.4   0.04
  • Group 2 is worse at recognizing old faces than Group 1 (lower hit rate)
  • but Group 2 is also less likely to misidentify new faces as old (lower false alarm rate)
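For reference, here is a minimal R sketch of how these rates could be computed from the counts above (the variable names are my own, not from the original analysis):

nOldYes = 20; nOld = 25    # "yes" responses on old (signal) trials, Group 1
nNewYes = 10; nNew = 25    # "yes" responses on new (noise) trials, Group 1
(H = nOldYes / nOld)       # hit rate: 0.8
(FA = nNewYes / nNew)      # false alarm rate: 0.4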

Can we get a single measure of performance that combines both hits and false alarms?

What about proportion correct?

\[ \begin{align} p_{correct} &= \frac{1}{2}P(Yes | \text{ Old stimulus}) + \frac{1}{2} P(No| \text{ New stimulus}) \\ &= \frac{1}{2}\frac{\text{n yes}}{\text{n old trials}} + \frac{1}{2}\frac{\text{n no}}{\text{n new trials}}\\ &= \frac{1}{2} H + \frac{1}{2}CR\\ &=\frac{1}{2} (H + (1-F)) \\ &=\frac{1}{2} + \frac{1}{2}(H-F) \end{align} \] Higher hits or lower false alarms lead to better accuracy.

Group 1 accuracy:

## [1] 0.7

Group 2 accuracy:

## [1] 0.64
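As a quick check, both accuracies follow from the formula above (a sketch using the rates already computed):

(pc1 = 0.5 + 0.5*(0.8 - 0.4))     # Group 1: 0.7
(pc2 = 0.5 + 0.5*(0.32 - 0.04))   # Group 2: 0.64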

Accuracies are very similar in the two groups. Could this measure be confounded by the fact that the groups differ in their tendency to say “yes” (Group 1) or to say “no” (Group 2)?


The Basic Detection Model

  1. Evidence (internal representation) about the signal can be represented by a number along a single dimension
    • could be some vague feeling of ‘familiarity’
    • or response of neurons tuned to our stimulus parameter
    • could be more abstract idea of ‘evidence’
    • All the information extracted can be represented by 1 value along this dimension
  2. Each presentation of a stimulus leads to an internal response with some variability
    • external noise sources
    • internal noise sources
    • variation in internal response affects responses both on signal and noise trials
  3. The choice of response is made by comparing the magnitude of the evidence (internal response) to a simple criterion
    • If evidence \(>\) criterion, then respond yes
    • If evidence \(<\) criterion, then respond no
  4. By convention, the signal distribution is plotted to the right of the noise distribution

\[ \begin{align} H = p_{h} &= P(yes | \text{old face})\\ &=P(X_{s} > \lambda) = \text{area under the red curve, right of the criterion line}\\ &=1 - P(X_{s} < \lambda)\\ &=1 - F_{s}(\lambda), \text{where } F_{s}(\lambda) \text{ is the cumulative distribution function for the signal} \end{align} \]

If \(\lambda = 1\) as shown, then the hit rate is obtained from the cumulative normal distribution for signal trials (here the signal distribution is assumed to be \(N(2,1)\)):

criterion = 1
# hit rate: area of the signal distribution N(2,1) above the criterion
(H = 1 - pnorm(criterion, mean = 2, sd = 1))
## [1] 0.8413447

False alarms occur when we say “yes”, but a noise stimulus was presented.

\[ \begin{align} F = p_{fa} &= P(yes | \text{new face})\\ &=P(X_{n} > \lambda) = \text{area under the black curve, right of the criterion line}\\ &=1 - P(X_{n} < \lambda)\\ &=1 - F_{n}(\lambda), \text{where } F_{n}(\lambda) \text{ is the cumulative distribution for the noise} \end{align} \] For example, if \(\lambda = 1\) , then:

criterion = 1
# false alarm rate: area of the noise distribution N(0,1) above the criterion
(fa = 1 - pnorm(criterion))
## [1] 0.1586553

H and FA depend on:

  1. The overlap of the signal and noise distributions
  • If there is little overlap, H can be high while FA is low.
  • The more overlap, the more similar H and FA will be.
  2. The placement of the criterion
  • Shifting it to the right decreases both H and FA.
  • Shifting it to the left increases both H and FA.


Gaussian detection model

The most general form of the model assumes that the internal responses to the signal and noise stimuli are normally distributed random variables \[ Xn \sim N(\mu_{n},\sigma^2_{n}), \text{and } Xs \sim N(\mu_{s},\sigma^2_{s})\] and \(\lambda\), the criterion, is another free parameter.

However, since we only have access to H and F, only the relative positions and scale of the distributions matter, not their absolute positions and scale. So, without loss of generality, we can get rid of one mean and variance parameter by giving them arbitrary values.

Assume noise distribution is the standard normal Gaussian \(Xn \sim N(0,1)\)

This is one way of fixing two parameters. There are other (equally valid) ways.

Example 1: Fixed sensitivity, changing criterion

\[ \text{Suppose } Xn \sim N(0,1) \text{ and } Xs \sim N(1.5,4). \text{What are the hit and false alarm rates for } \lambda = -2, 0, \text{and } 2? \]

\[ \begin{align} \text{Normal density function: } P(x) &= \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{1}{2}(\frac{x-\mu}{\sigma})^2}\\ \text{let } z = \frac{x-\mu}{\sigma}; \text{ then } z \text{ has the density } \phi(z) &= \frac{1}{\sqrt{2\pi}} e^{\frac{-z^2}{2}}, \text{ the standard normal distribution}\\ \end{align} \]

\[ \begin{align} \text{Normal distribution function: } \Phi(z) &= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^z e^{-x^2/2} dx\\ &= \text{related to the error function; computed numerically, available in tables (e.g. A5.2 M&C)}\\ &= \text{pnorm(x) in R, NORMDIST(x,0,1,TRUE) in Excel} \end{align} \]

Remember that \[ \begin{align} F = P(yes | \text{noise}) &=P(X_{n} > \lambda) = \text{area under the black curve, right of the criterion line}\\ &=1 - F_{n}(\lambda), \text{where } F_{n}(\lambda) \text{ is the cumulative distribution for the noise}\\ &= 1 - \Phi(\lambda)\\ &\text{where } \Phi(x) \text{ is the standard normal distribution function}\\ &\text{and because } \Phi(x) \text{ is symmetric}\\ &=1-\Phi(\lambda) = \Phi(-\lambda) \end{align} \] and

\[\begin{align} H = P(yes | \text{signal}) &=P(X_{s} > \lambda) = \text{area under the red curve, right of the criterion line}\\ &=1 - F_{s}(\lambda), \text{where } F_{s}(\lambda) \text{ is the cumulative distribution for the signal}\\ &=1- \Phi(\frac{\lambda-\mu_{s}}{\sigma_s}), \text{need to scale to use the standard normal} \end{align}\] \[\begin{align} \text{If } Xs \sim N(1.5,4) &\text{ and } Xn \sim N(0,1) \text{, then } \\ H &= 1-\Phi(\frac{\lambda-1.5}{2}) \\ FA &= 1-\Phi(\lambda) = \Phi(-\lambda) \end{align}\]
lambdas = c(-2, 0, 2)
FA = round(1 - pnorm(lambdas), 2)                  # noise: N(0,1)
H = round(1 - pnorm((lambdas - 1.5)/sqrt(4)), 2)   # signal: N(1.5, 4)

(changingCriterion = data.frame(lambdas, H, FA))
##   lambdas    H   FA
## 1      -2 0.96 0.98
## 2       0 0.77 0.50
## 3       2 0.40 0.02

Plotting false alarms vs hits: ROC curve

This plot allows us to see how hits and false alarms change when we shift the criterion.

  1. As \(\lambda\) decreases, the tendency to say “yes” increases, and so do hits and false alarms.
  2. The slope of the ROC curve decreases as the criterion becomes more liberal: as response bias moves towards ‘yes’, the false alarm rate increases faster than the hit rate.
  3. The dotted line plots the hits and false alarms for a criterion changing from -4 (top right) to +4 (bottom left). Given that we did not change the positions of the signal and noise distributions, this curve depicts the range of (H, FA) pairs that can be generated by observers with the same ‘sensitivity’.
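A minimal sketch that reproduces this ROC curve under the same assumptions (\(Xn \sim N(0,1)\), \(Xs \sim N(1.5,4)\); the plotting details are my own):

crit = seq(-4, 4, by = 0.1)          # sweep the criterion
fa_roc = 1 - pnorm(crit)             # noise: N(0,1)
h_roc = 1 - pnorm((crit - 1.5)/2)    # signal: N(1.5, 4)
plot(fa_roc, h_roc, type = "l", lty = 3, xlim = c(0,1), ylim = c(0,1),
     xlab = "False alarm rate", ylab = "Hit rate")
abline(0, 1, col = "grey")           # chance line, H = FA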

Example 2: Varying sensitivity

Let’s suppose we have another observer, whose internal response to the signal is greater (further to the right). \[ Xs_1 \sim N(\mu_s = 1.5,\sigma^2=4), \text{and } Xs_2 \sim N(\mu_s = 2.5,\sigma^2=4)\\ \text{the noise distribution is the same: } Xn \sim N(\mu = 0,\sigma^2=1)\\ \text{What are the hit and false alarm rates for both observers, if their criteria are } \lambda = -2, 0, \text{or } 2? \]

##   lambdas    H   FA   H2  FA2
## 1      -2 0.96 0.98 0.99 0.98
## 2       0 0.77 0.50 0.89 0.50
## 3       2 0.40 0.02 0.60 0.02
  • Observer #2 has higher hits
  • False alarms are the same. Why?
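The table above can be reproduced as follows (a sketch reusing lambdas, H, and FA from Example 1):

H2 = round(1 - pnorm((lambdas - 2.5)/2), 2)   # observer 2: signal N(2.5, 4)
FA2 = FA                                      # same noise distribution and criteria, so same FAs
(twoObservers = data.frame(lambdas, H, FA, H2, FA2))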

Here I am plotting observer 2 with open symbols, and observer 1 with filled symbols.

  1. All points on Observer 2’s curve are further away from the diagonal line (where FA = H) than those of observer #1.
  2. Points of the same colour show observers with the same response bias, but different sensitivity.

Can we get a single measure of sensitivity? Yes, but we will first see it in a simpler situation.

Equal variance Gaussian model

The previous (general) model had 3 parameters: \(\mu_s\), \(\sigma_s\), and \(\lambda\).
In a single Yes/No experiment, we only get 2 (unique) values: H, FA. Need to fix another parameter!

The equal-variance model makes the assumption that: \(\sigma_n^2 = \sigma_s^2 = 1\)

\[Xn \sim N(0,1) \text{ and } Xs \sim N(\mu_s,1)\]

As we saw before, the distance between the noise and signal distributions affects sensitivity. In this model, the mean of the signal distribution \(\mu_s\) is also the distance between the means of the signal and noise distributions. This distance is a measure of sensitivity or detectability, called \(d'\) (dee-prime).

How do d’ and \(\lambda\) relate to hits and false alarms?

\[\begin{align} CR &= \Phi(\lambda) \\ FA = 1-CR &= 1-\Phi(\lambda) = \Phi(-\lambda)\\ \text{we can also apply the inverse cumulative normal function } z(\cdot) \text{ on both sides: } &\\ z(CR) &= z(\Phi(\lambda))\\ z(CR) &= \lambda\\ z(1-FA) &= \lambda\\ \text{because of the symmetry of } z\text{: } z(1-FA)= -z(FA) &= \lambda\\ z(FA) &= -\lambda \end{align}\]

So the false alarms give us the location of the criterion \(\lambda\) with respect to the mean of the noise distribution.

What about hits?

\[\begin{align} M &= F_s(\lambda) \\ & = \Phi(\lambda - d') , \text{ after rescaling to standard normal}\\ H = 1-M &= 1-\Phi(\lambda - d') = \Phi(-(\lambda - d')) = \Phi(d'-\lambda) \\ \text{we can also apply the inverse cumulative normal function on both sides: } &\\ z(M) &= z(\Phi(\lambda-d'))\\ z(M) &= \lambda-d'\\ z(1-H) &= \lambda-d'\\ \text{because of the symmetry of } z\text{: } -z(H) &= \lambda - d'\\ \text{from above, we know that } \lambda &= -z(FA). \text{ Replacing } \lambda \text{:}\\ -z(H) &= -z(FA)-d'\\ \text{with some rearranging, } -z(H)+d' &= -z(FA)\\ d' &= z(H)-z(FA) \end{align}\]

Performance in a Yes/No experiment, under the assumption of equal variance, can be represented by H, FA, or d’ & \(\lambda\). These are interchangeable, but represent different types of information.

From hits and false alarms to d’ and \(\lambda\)

\[d' = z(H)-z(FA) \text{ is a measure of sensitivity, under the equal variance Gaussian model}. \] \[\lambda = -z(FA) \text{ is one measure of bias (not the best measure, see next lecture)}. \]
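A quick numerical sanity check of these relationships (the d’ and \(\lambda\) values are arbitrary, chosen for illustration only):

dprime = 1.5; lambda = 0.5            # assumed 'true' parameters
H = pnorm(dprime - lambda)            # H = Phi(d' - lambda)
FA = pnorm(-lambda)                   # FA = Phi(-lambda)
c(qnorm(H) - qnorm(FA), -qnorm(FA))   # recovers d' = 1.5 and lambda = 0.5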

Face memory example revisited:
## [1] "Group 1"
##     Yes No Total
## Old  20  5    25
## New  10 15    25
## [1] "Group 2"
##     Yes No Total
## Old   8 17    25
## New   1 24    25
##              Group1 Group2
## hits            0.8   0.32
## false alarms    0.4   0.04
## pc              0.7   0.64

What are their d’?

(dprime_1 = round(qnorm(0.8) - qnorm(0.4),3))
## [1] 1.095
(dprime_2 = round(qnorm(0.32) - qnorm(0.04),3))
## [1] 1.283

Group 2 has higher sensitivity, even though they show a lower proportion correct! Here is what our model looks like (Group 1: red, Group 2: blue):

Let’s plot these results on the ROC curve

We estimated the d’ values to be 1.095 and 1.283. If we manipulate the response bias by changing instructions, how will performance change? Here, I simulate FA changing from 0 to 1, and calculate H assuming the equal-variance Gaussian model.

\(Xs_1 \sim N(\mu_s = 1.095,\sigma^2=1)\) and \(Xs_2 \sim N(\mu_s = 1.283,\sigma^2=1)\).
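A minimal sketch of that simulation (the d’ values are the estimates above; the plotting details are my own):

fa = seq(0, 1, by = 0.01)
h1 = pnorm(qnorm(fa) + 1.095)    # Group 1: z(H) = z(FA) + d'
h2 = pnorm(qnorm(fa) + 1.283)    # Group 2
plot(fa, h1, type = "l", col = "red", xlab = "False alarm rate", ylab = "Hit rate")
lines(fa, h2, col = "blue")
points(c(0.4, 0.04), c(0.8, 0.32), col = c("red", "blue"), pch = 16)   # observed (FA, H) pairs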

The assumptions behind the equal-variance Gaussian signal detection model generate curved iso-sensitivity lines, symmetrical around the negative diagonal.

Some characteristics of this model:

  • when performance is at chance (d’ = 0), ROC is the major diagonal (H = FA).
  • Increasing sensitivity shifts curve to upper left corner.
  • Perfect performance with signal trials leads to complete failure with noise trials and vice versa.
  • Does not predict situations where hits < FA (bottom right corner), because d’ is assumed positive (points below the diagonal can arise only from estimation error).

ROCs in z-coordinates

If we rearrange \(z(H)-z(FA)=d'\), we also get

\[z(H) = z(FA) + d'\]

  1. Transformed ROCs have slope = 1 in the equal variance model
  2. d’ is the y-intercept when z(FA) = 0.
  3. We can use this relationship to predict the change in H if FA changes, and vice versa, as in the sketch below.
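For example, a small sketch predicting Group 1’s hit rate at a hypothetical new false alarm rate (the value 0.2 is assumed for illustration):

dprime = 1.095                          # Group 1 estimate from above
newFA = 0.2                             # hypothetical new false alarm rate
(newH = pnorm(qnorm(newFA) + dprime))   # predicted hit rate, about 0.60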

ROC implied by the proportion correct measure

Any measure of sensitivity (like proportion correct) assumes a certain ROC curve. From above,

\[ \begin{align} p_{correct} &=\frac{1}{2} (H + (1-F)) \\ 2p_{correct} &= H + (1-F) \\ 2p_{correct}-1+F &= H \\ H &= F+2p_{correct}-1 \end{align} \]

Experimentally estimating the ROC curve can help distinguish whether d’ or pc is a better model of performance.
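A fixed \(p_{correct}\) therefore implies a straight-line ROC with slope 1, whereas a fixed d’ implies a curve. A sketch comparing the two, using Group 1’s values from above:

fa = seq(0, 1, by = 0.01)
h_dprime = pnorm(qnorm(fa) + 1.095)        # curved ROC implied by fixed d' = 1.095
h_pc = pmin(pmax(fa + 2*0.7 - 1, 0), 1)    # straight ROC implied by fixed pc = 0.7, clipped to [0,1]
plot(fa, h_dprime, type = "l", xlab = "False alarm rate", ylab = "Hit rate")
lines(fa, h_pc, lty = 2)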

Some practicalities of calculating d’

What happens when H = 1 or FA = 0?

qnorm(0)
## [1] -Inf
qnorm(1)
## [1] Inf
Adjustments for H = 1 or FA = 0:

  1. Adjust only the extreme proportion, assuming that the observer made an error on 1/2 of a trial (criticised for introducing bias):
    • a rate of \(0 \text{ becomes } \frac{1}{2N} \text{ and } 1 \text{ becomes } 1-\frac{1}{2N}\)
  2. A better method, called log-linear: add 0.5 to the counts in all cells (Hautus, 1995):
    • \(H_t = \frac{n_{yes}+0.5}{\text{n signal trials}+1}\)
  3. Pool data across subjects (Macmillan & Kaplan, 1985). Should be applied carefully (only when sensitivity and bias are similar).
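For instance, a sketch of the log-linear adjustment for an observer with perfect hits and no false alarms (the counts are made up for illustration):

nSignal = 25; nNoise = 25
nYesSignal = 25                          # H would be 1 without adjustment
nYesNoise = 0                            # FA would be 0 without adjustment
H = (nYesSignal + 0.5)/(nSignal + 1)     # adjusted hit rate
FA = (nYesNoise + 0.5)/(nNoise + 1)      # adjusted false alarm rate
(dprime = qnorm(H) - qnorm(FA))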