Preference Assessment

Preamble

We chose the park area in the image below as the location for the conditioning assessment because it is a quiet, secluded area with enough space and is unlikely to have other animals nearby. This location is displayed below:

Conditioning location

It was noted to us by the animal's owner that Murphy had a tendency to attack other dogs on occasion. To avoid this risk, Murphy was kept on a leash at all times. The conditioning process took around 35-40 minutes, and there was quite a lot of subjective decision-making involved in recording each trial as a YES or NO. The process was as follows:

  • Wait until the animal is wandering at ease, not directly facing the trainer and not directly next to the trainer
  • Use the clicker \(S^r\) and immediately offer \(S^R\) by extending an arm out with food
  • IF the pet moves towards the trainer within 3 seconds (either turns its head towards the trainer or physically moves closer), record YES for that trial
  • ELSE record NO for that trial

The conditioning process was video recorded and this video file is available to project supervisors on the shared Google Drive directory. Unfortunately, one of the files does not have audio available. We completed 42 trials using this format. The criterion for establishing a conditioned reinforcer was ten YES trials within a single block of ten.
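
As a rough illustration, this block criterion can also be checked programmatically. The sketch below is only illustrative and assumes the conditioning.df data frame imported in the Data Analysis section, with Result still coded as "YES"/"NO":

#sketch: is any complete block of ten trials all YES?
block_all_yes <- tapply(conditioning.df$Result == "YES", conditioning.df$Block.No., FUN = all)
block_all_yes      #one TRUE/FALSE entry per block
any(block_all_yes) #TRUE once the criterion has been met in some block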

After feedback from the project supervisor about the validity of the system we were using, it was decided that we needed to change the conditioning criteria, timing and method. Previously:

  • We were conducting trials with variable timing (sometimes over 30 s between blocks)
  • We were moving around while delivering the reinforcer and not controlling other stimuli (our movement could act as a discriminative stimulus)
  • We were using large biscuits, so Murphy may have become satiated, reducing the effectiveness of the primary reinforcer

Hence we were instructed to adopt a new system:

  • Quarter the pellets used as reinforcers
  • Stand still to avoid creating a discriminative stimulus
  • Use the clicker, then quickly (within 2 seconds) feed Murphy a quarter pellet; repeat 15 times
  • Stand 1 m away from Murphy, then activate the clicker
  • If we observed any indication (movement of the head, being approached) showing he was expecting to receive food, we were to provide the primary reinforcer (food)

We completed 58 trials under the new system. It was not necessary to video record this second stage of data collection. I have clearly indicated in the graphs where this change in method occurred.

Raw data

There were three main video files documenting the conditioning process. These are available to view for members of PSYC314: Behaviour Analysis on the shared Google Drive directory. The raw data sheets are displayed below:

Data sheet for conditioning stage

Final data sheet after conditioning session 2

Data Analysis

The data was entered into Excel in long format and imported into R. "YES" was then recoded to 1 and "NO" to 0.

As we were under time constraints and the weather was turning, we slightly loosened the initial establishment criterion of 10 YES trials in a single block (i.e. \(y_i = 1\) for \(i = 1,\dots,10\) or \(i = 11,\dots,20\), and so on) to any 10 YES trials in a row (i.e. \(y_i = 1\) for \(i = k, k+1,\dots,k+9\) for some starting trial \(k\)). However, after discussion with the project supervisor, this approach was rejected in favour of performing another conditioning session. This dataset was combined with the first; hence trials \(n = 43\) to \(n = 100\) are data from the second session.
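
For completeness, the loosened run-based criterion can be checked with a short snippet. This is only an illustrative sketch, assuming the conditioning.df data frame imported below after Result has been recoded to 1/0:

#sketch: detect any run of at least ten consecutive YES (1) trials
runs <- rle(conditioning.df$Result) #run-length encoding of the result sequence
any(runs$values == 1 & runs$lengths >= 10) #TRUE if the loosened criterion is met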

This data frame, along with some basic frequency tables, is displayed below:

library(knitr)       #kable()
library(kableExtra)  #kable_styling(), scroll_box()
library(magrittr)    #%>% pipe
library(ggplot2)     #plots below
library(stargazer)   #regression table below
library(gridExtra)   #grid.arrange() below

conditioning.df <- read.csv("Conditioned_Reinforcer_session1.csv") #import

kable(conditioning.df, caption = "Conditioned Reinforcer Data") %>%
  kable_styling("striped", full_width = T) %>%
  scroll_box(height = "200px")

x <- as.data.frame(table(conditioning.df$Result)) #frequency of trial results
colnames(x) <- c("Trial Result","Frequency")
kable(x, caption = "Trial result frequency") %>%
  kable_styling("striped", full_width = T); rm(x)

x <- as.data.frame(table(conditioning.df$Session.No.)) #trials per session
colnames(x) <- c("Session","Number of Trials")
kable(x, caption = "Session trials frequency") %>%
  kable_styling("striped", full_width = T)

conditioning.df$Result <- as.character(conditioning.df$Result) #recoding YES/NO to 1/0
conditioning.df$Result[conditioning.df$Result == "YES"] <- 1
conditioning.df$Result[conditioning.df$Result == "NO"] <- 0
conditioning.df$Result <- as.numeric(conditioning.df$Result)
Conditioned Reinforcer Data
Run.No. Block.No. Result Session.No.
1 1 NO 1
2 1 NO 1
3 1 NO 1
4 1 NO 1
5 1 NO 1
6 1 NO 1
7 1 YES 1
8 1 NO 1
9 1 NO 1
10 1 NO 1
11 2 NO 1
12 2 NO 1
13 2 NO 1
14 2 NO 1
15 2 NO 1
16 2 NO 1
17 2 YES 1
18 2 YES 1
19 2 YES 1
20 2 YES 1
21 3 YES 1
22 3 NO 1
23 3 NO 1
24 3 NO 1
25 3 YES 1
26 3 YES 1
27 3 YES 1
28 3 YES 1
29 3 YES 1
30 3 YES 1
31 4 YES 1
32 4 NO 1
33 4 YES 1
34 4 YES 1
35 4 YES 1
36 4 YES 1
37 4 YES 1
38 4 YES 1
39 4 YES 1
40 4 YES 1
41 5 YES 1
42 5 YES 1
43 5 NO 2
44 5 NO 2
45 5 NO 2
46 5 NO 2
47 5 NO 2
48 5 NO 2
49 5 YES 2
50 5 YES 2
51 6 NO 2
52 6 YES 2
53 6 NO 2
54 6 YES 2
55 6 NO 2
56 6 YES 2
57 6 NO 2
58 6 YES 2
59 6 YES 2
60 6 YES 2
61 7 YES 2
62 7 YES 2
63 7 YES 2
64 7 YES 2
65 7 YES 2
66 7 NO 2
67 7 YES 2
68 7 YES 2
69 7 YES 2
70 7 YES 2
71 8 YES 2
72 8 YES 2
73 8 YES 2
74 8 NO 2
75 8 YES 2
76 8 YES 2
77 8 YES 2
78 8 YES 2
79 8 YES 2
80 8 YES 2
81 9 YES 2
82 9 YES 2
83 9 YES 2
84 9 YES 2
85 9 YES 2
86 9 YES 2
87 9 NO 2
88 9 YES 2
89 9 YES 2
90 9 YES 2
91 10 YES 2
92 10 YES 2
93 10 YES 2
94 10 YES 2
95 10 YES 2
96 10 YES 2
97 10 YES 2
98 10 YES 2
99 10 YES 2
100 10 YES 2
Trial result frequency
Trial Result Frequency
NO 32
YES 68
Session trials frequency
Session Number of Trials
1 42
2 58

The raw data is plotted below. Obviously, fitting a LOESS curve with a confidence interval (grey shaded region) is an inappropriate way of handling binary (YES/NO) data, but it gives an idea of the trend between YES and NO trial outcomes.

ggplot(data = conditioning.df) +
  geom_smooth(mapping = aes(x = Run.No., y = Result), method = 'loess') +
  geom_vline(xintercept = 42, color = "red", lwd = 2) + #lwd, not width, sets the line thickness
  theme_light() +
  labs(title = "Plot of Trial result versus Run No using LOESS",
       subtitle = "Vertical red line indicates change in method (2nd session)",
       caption = "original data points overlaid") +
  ylab("Result, 1 = Y, 0 = N") + xlab("Run") + xlim(0, 100) +
  geom_point(mapping = aes(x = Run.No., y = Result))

We were required to plot the sequential probabilities of trial success for each block. This was calculated and plotted.

conditioning.df$Block_probability <- NA #initialise column, then assign p(YES) within each block
for(i in unique(conditioning.df$Block.No.)) {
    conditioning.df$Block_probability[conditioning.df$Block.No. == i] <- mean(conditioning.df$Result[conditioning.df$Block.No. == i])
}
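
An equivalent, loop-free way to fill the same column is sketched below; this is just an alternative using base R's ave(), not a change to the analysis:

#alternative sketch: block-wise mean of Result without an explicit loop
conditioning.df$Block_probability <- ave(conditioning.df$Result, conditioning.df$Block.No., FUN = mean)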

ggplot(data = conditioning.df) +
  geom_line(mapping = aes(x = Block.No., y = Block_probability)) +
  theme_light() + theme(panel.grid.minor.x = element_blank()) +
  geom_point(mapping = aes(x = Block.No., y = Block_probability), size = 3, shape = "triangle") +
  labs(title = "Plot of Probability of Trial Success versus Block Number",
       subtitle = "Vertical red line indicates change in method (2nd session)",
       caption = "original data points overlaid") +
  ylab("Block Probability") + xlab("Block Number") + ylim(0, 1) +
  geom_vline(xintercept = 4.2, color = "red", lwd = 1) +
  scale_x_continuous(breaks = 1:10, limits = c(0, 10)) #limits folded into the scale so it is not overridden by a separate xlim()

For the sheer sake of curiosity, we can fit a logistic generalised linear model (GLM) and plot the probability of trial success by trial number. We can note a few obvious problems with this model, the most obvious of which is that a very low probability of trial success (below about 0.21, the fitted value at trial zero) would require the clearly impossible scenario of negative trial numbers!
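
To make this concrete, the fitted model (coefficients reported in the regression table below) is \(\log\frac{p}{1-p} = -1.30 + 0.046 \times \text{trial}\), equivalently \(p(\text{trial}) = \frac{1}{1 + e^{-(-1.30 + 0.046\,\text{trial})}}\). The curve crosses \(p = 0.5\) at roughly trial \(1.30/0.046 \approx 28\), and any probability below \(p(0) = \frac{1}{1 + e^{1.30}} \approx 0.21\) corresponds to a negative trial number.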

We could extrapolate the model to cover trial numbers not in the original data, e.g. trials 100-200 (and, purely for illustration, negative trial numbers). Extrapolating outside the original data range should be done with extreme caution. This extrapolation is shown in the second graph below.

conditioning.df$Result <- as.factor(conditioning.df$Result)
conditioning.glm <- glm(data = conditioning.df, Result ~ Run.No.,family = "binomial") #link default is logit
stargazer(conditioning.glm, align = T, type = "html", omit.stat = c("aic","bic","ll"))
Dependent variable: Result
Run.No.        0.046*** (0.010)
Constant      -1.300*** (0.480)
Observations   100
Note: *p<0.1; **p<0.05; ***p<0.01
conditioning.glm$coefficients -> logistic.coefficients.vec
trial_no <- seq(from = 0, to = 100, by = 0.01)
trial_logits <- logistic.coefficients.vec[1] + logistic.coefficients.vec[2]*trial_no #intercept + coefficient*trial no.
#this gives log(p/(1-p))
trial_probs <- exp(trial_logits)/(1 + exp(trial_logits)) #converting odds to probability: odds/(1 + odds), i.e. plogis(trial_logits)

logit_conditioning.df <- as.data.frame(cbind(trial_no, trial_probs)) #binding x and y variables

condition_logistic.g <- ggplot(logit_conditioning.df, aes(x = trial_no, y = trial_probs)) +
  geom_line() + theme_light() + theme(panel.grid.minor.x = element_blank()) +
  labs(title = "Logistic Regression of Trial Success versus Run No.") +
  ylab("P(Trial Success)") + xlab("Trial") + ylim(0, 1) +
  scale_x_continuous(breaks = seq(from = 0, to = 100, by = 10), limits = c(0, 100))

rm(trial_no,trial_probs,logit_conditioning.df,trial_logits)

#extrapolating the model beyond the observed trial range
trial_no <- seq(from = -100, to = 200, by = 0.01)
trial_logits <- logistic.coefficients.vec[1] + logistic.coefficients.vec[2]*trial_no #intercept + coefficient*trial no.
#this gives log(p/(1-p))
trial_probs <- exp(trial_logits)/(1 + exp(trial_logits)) #converting odds to probability: odds/(1 + odds)

logit_conditioning.df <- as.data.frame(cbind(trial_no, trial_probs)) #binding x and y variables
extrap_conditioned_logistic.g <- ggplot(logit_conditioning.df, aes(x = trial_no, y = trial_probs)) +
  geom_line() + theme_light() + theme(panel.grid.minor.x = element_blank()) +
  labs(title = "Extrapolated Logistic Regression of Trial Success versus Run No.") +
  ylab("P(Trial Success)") + xlab("Trial") + ylim(0, 1) +
  scale_x_continuous(breaks = seq(from = -100, to = 200, by = 10), limits = c(-100, 200)) +
  theme(axis.text.x = element_text(angle = 45, hjust = 1))
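
For reference, the same fitted curves could also be obtained with predict() rather than computing the logits by hand; a minimal sketch, using a hypothetical new_trials data frame:

#alternative sketch: predicted probabilities straight from the fitted GLM
new_trials <- data.frame(Run.No. = seq(from = -100, to = 200, by = 0.01))
new_trials$trial_probs <- predict(conditioning.glm, newdata = new_trials, type = "response")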

grid.arrange(condition_logistic.g,extrap_conditioned_logistic.g)

Conclusion

We reached our conditioning criterion in the final block, where we achieved ten YES trials in a row, such that the probability of success = 1.0. Hence, we can work on the assumption that we have established a conditioned reinforcer for Murphy, which leads into Stage 3: Shaping.