This document provides supporting information to the Rethink Priorities report, “What are the most promising interventions for reducing rodenticide use in the U.S.?” In particular, it shows how we processed and analyzed a poll of U.S. adults on their attitudes towards various types of legislation that would restrict access to rodenticides. The survey was administered on Prolific from August 31, 2022 to September 10, 2022, and the goal was to recruit 3,000 respondents. Due to late and incomplete responses (the majority of respondents started and completed the survey on either August 31 or September 1), there were a total of 3,071 respondents. Respondents were paid $1.38 USD for completing the survey.
Detailed examinations of the legislation and the headline results of the poll are in the report, so this document is mostly for those interested in scrutinizing how we cleaned and weighted our data. Nevertheless, we do examine the results in greater detail here than in the report. If you are only interested in the results, skip to the Results section.
The choice to obtain 3,000 responses was based on the following considerations. The survey had two conditions:
The “direct” condition, in which respondents would report their own support for, or opposition to, the legislation. This condition contains the estimates that we focus on here.
The “indirect” condition, in which respondents would report what percentage of their community they believed would support the legislation. The deviation in estimates between the direct and indirect conditions functions as a measure of social desirability bias in the direct condition (see Lehrer et al., 2019 and Lusk & Norwood, 2010), which can be an issue when measuring attitudes toward animal welfare initiatives (Lai et al., 2022).
Accordingly, a power analysis revealed that we needed 404 responses per condition to estimate the noisiest possible proportion (viz., 50% support) with a margin of error of 5% (assuming a 95% confidence interval):
library(MKpower)
ssize.propCI(prop = 0.5, width = 0.10, conf.level = 0.95)
##
## Sample size calculation by method of wald-cc
##
## n = 403.8983
## prop = 0.5
## width = 0.1
## conf.level = 0.95
##
## NOTE: Two-sided confidence interval
Rounding down, this means we would need 400 valid responses in each of the two conditions.
We needed to account for a design effect larger than 1 due to the fact that we were using sampling weights to make our sample representative of U.S. adults. We guessed that the design effect due to unequal weighting would be 3. That means we would need to collect at least three times as much data as we would have needed in each condition had we collected a simple random sample. The upshot is that while our power analysis suggested we only needed ~800 responses in total, we actually needed 800*3 = 2,400 responses for our weighted analyses.
We assumed that 20% of responses would be excluded due to inattention. This was a pessimistic estimate based on our previous experiences with Prolific.
We concluded that if we collected 3,000 responses, we would have 2,400 valid responses after excluding invalid responses (the arithmetic is sketched below).
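For concreteness, the sample-size arithmetic above can be written out in a few lines (a sketch; every number is taken from the text rather than re-estimated):
# sample-size arithmetic from the considerations above
n_valid_per_condition <- 400   # valid responses needed per condition (rounded from 404)
n_conditions          <- 2     # direct and indirect conditions
design_effect         <- 3     # assumed design effect due to unequal weighting
attrition             <- 0.20  # assumed share of responses excluded for inattention
n_weighted   <- n_valid_per_condition * n_conditions * design_effect  # 2,400
n_to_collect <- ceiling(n_weighted / (1 - attrition))                 # 3,000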
Before we clean the data for analyses, let’s start by loading the required packages.
# load packages
library(tidyverse)
library(readxl)
library(weights)
library(anesrake)
library(tidystats)
library(survey)
## Warning: package 'Matrix' was built under R version 4.2.2
library(psych)
library(gt) # https://gt.rstudio.com/articles/intro-creating-gt-tables.html
library(data.table)
library(formattable)
library(broom)
library(forcats)
Now let’s read in the data. This part of the code, as with some of the other sections, was written by Jamie Elsey.
# Setting the working directory. Obviously, this is machine-specific.
setwd("~/Rethink Priorities/animal welfare/Rodenticides/Rodenticides Polling/Rodenticides polling quantitative")
#A function to process raw qualtrics files collected on prolific.
read_qualtrics <- function(file,
provider = "Prolific",
row_removal = TRUE,
preview_removal = TRUE,
convert_time = TRUE,
extras_removal = TRUE) {
qual <- readxl::read_xlsx(file)
if(preview_removal == TRUE) {
qual <- qual %>%
filter(grepl('review', Status) == FALSE)
}
qual_names <- names(qual)
qual_names <- str_replace_all(tolower(qual_names), " ", "_")
names(qual) <- qual_names
if(row_removal == TRUE) {
qual <- qual[-1, ]
}
if(extras_removal == TRUE) {
qual <- qual %>%
select(-c("recipientlastname",
"recipientfirstname",
"recipientemail",
"externalreference"))
}
if(convert_time == TRUE) {
qual <- qual %>%
mutate(across(.cols = contains("_time_"),
~ as.numeric(.)))
}
qual <- qual %>%
mutate(provider = provider)
return(qual)
}
#use function to read in raw data file
survey <- read_qualtrics(file = "Rodenticides_Deidentified.xlsx")
#survey <- read_qualtrics(file = "Rodenticides August - Short_September 12, 2022_12.01.xlsx") #raw data file
Now let’s drop responses that did not pass at least one of our various attention and honesty checks. We flag that dropping respondents based on measured inattentiveness will induce selection bias if inattentiveness is correlated with our estimands of interest (Alvarez et al., 2019). Our sampling weights can in theory correct for this selection bias, though it is possible that there are weighting variables we did not measure that explain inattentiveness but do not explain selection onto Prolific. Fortunately, we find that relatively few respondents were inattentive or dishonest.
#respondents were asked at the end how honestly they responded to survey items.
survey<-filter(survey, honesty == "Very honestly"| honesty == "Completely honestly")# deleted "not honestly at all" and "somewhat honestly"
#respondents were asked which option was not a mode of transportation.
survey<-filter(survey, atten_check2 == "Tree")
#Upload the manual evaluation of open-ended recall of the arguments presented in favor and against
# Hannah McKay conducted the evaluations.
oe_check <- fread("Rodenticides polling inattention check_deidentified - Data.csv ")
oe_check = slice(oe_check, -(1:2)) #delete first two rows without data
names(oe_check) <- tolower(names(oe_check))
oe_check<- dplyr::select(oe_check, c("responseid", "favor_passfail", "oppose_passfail"))
#join attention check evaluations to original dataset.
survey<-left_join(survey, oe_check, by = "responseid")
#drop respondents who failed one or both of the open-ended attention checks
survey<- subset(survey, favor_passfail != "F" & oppose_passfail != "F")
Next, we categorize variables according to our standard weighting scheme.
#Define regions to put states in
northeast <- c('Connecticut', 'Maine','Massachusetts', 'New Hampshire', 'New Jersey', 'New York',"Pennsylvania",'Rhode Island', 'Vermont')
midwest <- c("Illinois","Indiana","Iowa", "Kansas", "Michigan", "Minnesota", "Missouri","Nebraska", "North Dakota", "Ohio","South Dakota","Wisconsin")
south <- c("Alabama","Arkansas",'Delaware',"Florida","Georgia","Kentucky","Louisiana","Maryland", "Mississippi","North Carolina","Oklahoma", "South Carolina","Tennessee","Texas","Virginia","West Virginia", 'Washington (District of Columbia)')
west <- c("Alaska","Arizona","California","Colorado","Hawaii","Idaho","Montana","Nevada", "New Mexico", "Oregon","Utah","Washington", "Wyoming")
# Define divisions to put states in
east_north_central<- c("Illinois","Indiana","Michigan", "Ohio","Wisconsin")
east_south_central<- c("Alabama","Kentucky","Mississippi","Tennessee")
middle_atlantic<- c('New Jersey', 'New York',"Pennsylvania")
mountain<- c("Arizona","Colorado","Idaho","Montana","Nevada", "New Mexico", "Utah","Wyoming")
new_england<- c('Connecticut', 'Maine','Massachusetts', 'New Hampshire','Rhode Island', 'Vermont')
pacific<- c("Alaska","California","Hawaii","Oregon","Washington")
south_atlantic<- c('Delaware',"Florida","Georgia","Maryland", "North Carolina","South Carolina","Virginia","West Virginia", 'Washington (District of Columbia)')
west_north_central<- c("Iowa", "Kansas", "Minnesota", "Missouri","Nebraska", "North Dakota", "South Dakota")
west_south_central<- c("Arkansas","Louisiana","Oklahoma", "Texas")
# create demographic groupings
survey_wt <- survey %>%
mutate(age_wt = case_when(age < 25 ~ "18-24",
age< 35 ~ "25-34",
age < 45 ~ "35-44",
age < 55 ~ "45-54",
age < 65 ~ "55-64",
age > 64 ~ "65+"),
age_wt = factor(age_wt,
levels = c("18-24","25-34","35-44", "45-54","55-64", "65+")),
sex_wt = case_when(sex == "Male" ~ "Male",
sex == "Female" ~ "Female"),
sex_wt = factor(sex_wt,
levels = c("Female", "Male")),
male = case_when(sex == "Male" ~ .5,
sex == "Female" ~ -.5),
education_wt = case_when(education == "Some high school" ~ "Less than high school",
education == "Graduated from high school (Diploma/GED or equivalent)" ~ "Graduated from high school",
education == "Some college, no degree" ~ "Some college, no degree",
education == "Completed associate’s degree" ~ "Some college, no degree",
education == "Completed bachelor’s degree" ~ "Graduated from college",
education == "Completed master’s degree" ~ "Completed graduate school",
education == "Completed professional degree beyond a bachelor’s degree (e.g., M.D., J.D.)" ~ "Completed graduate school",
education == "Completed doctorate degree" ~ "Completed graduate school"),
education_wt = factor(education_wt,
levels = c('Completed graduate school',
'Graduated from college',
'Some college, no degree',
'Graduated from high school',
'Less than high school')),
income_wt = case_when(income == "Under $15,000" ~ "Under $20,000",
income == "Between $15,000 and $19,999" ~ "Under $20,000",
income == "Between $20,000 and $49,999" ~ "Between $20,000 and $49,999",
income == "Between $50,000 and $74,999" ~ "Between $50,000 and $79,999",
income == "Between $75,000 and $79,999" ~ "Between $50,000 and $79,999",
income == "Between $80,000 and $99,999" ~ "Between $80,000 and $99,999",
income == "Between $100,000 and $150,000" ~ "Between $100,000 and $150,000",
income == "Over $150,000" ~ "Over $150,000"),
income_wt = factor(income_wt,
levels = c("Under $20,000",
"Between $20,000 and $49,999",
"Between $50,000 and $79,999",
"Between $80,000 and $99,999",
"Between $100,000 and $150,000",
"Over $150,000")),
race_wt = case_when(hispanic == "Yes" ~ 'Hispanic or Latino',
grepl('hite', race) == TRUE ~ 'White or Caucasian',
grepl('frican', race) == TRUE ~ 'Black or African American',
grepl('sian', race) == TRUE ~ 'Asian or Asian American',
grepl('two or more', race) == TRUE ~ 'Other',
grepl('ative', race) == TRUE ~ 'Other',
grepl('Other', race) == TRUE ~ 'Other'),
race_wt = factor(race_wt,
levels = c("White or Caucasian",
"Hispanic or Latino",
"Other",
"Black or African American",
"Asian or Asian American")),
region_wt = case_when(region_us %in% midwest == TRUE ~ 'Midwest',
region_us %in% northeast == TRUE ~ 'Northeast',
region_us %in% west == TRUE ~ 'West',
region_us %in% south == TRUE ~ 'South'),
region_wt = factor(region_wt,
levels = c("Northeast",
"Midwest",
"South",
"West")),
division_wt = case_when(region_us %in% east_north_central == TRUE ~ 'East North Central',
region_us %in% east_south_central == TRUE ~ 'East South Central',
region_us %in% middle_atlantic == TRUE ~ 'Middle Atlantic',
region_us %in% mountain == TRUE ~ 'Mountain',
region_us %in% new_england == TRUE ~ 'New England',
region_us %in% pacific == TRUE ~ 'Pacific',
region_us %in% south_atlantic == TRUE ~ 'South Atlantic',
region_us %in% west_north_central == TRUE ~ 'West North Central',
region_us %in% west_south_central == TRUE ~ 'West South Central'),
division_wt = factor(division_wt,
levels = c('East North Central',
'East South Central',
'Middle Atlantic',
'Mountain',
'New England',
'Pacific',
'South Atlantic',
'West North Central',
'West South Central')),
party_wt = case_when(party_raw == "Independent" ~ "Independent",
party_raw == "Democrat" ~ "Democrat",
party_raw == "Republican" ~ "Republican",
party_raw == "Other" ~ "Independent",
party_raw == "Not sure" ~ "Independent"),
party_wt = factor(party_wt,
levels = c("Republican", "Independent", "Democrat")),
left_right_wt = case_when(grepl('ibera', libcon_raw) == TRUE ~ "Liberal",
grepl('odera', libcon_raw) == TRUE ~ "Moderate",
grepl('ervat', libcon_raw) == TRUE ~ "Conservative"),
left_right_wt = factor(left_right_wt,
levels = c("Liberal", "Moderate", "Conservative")),
bible_wt = case_when(grepl('ancient', bible) ~ "fables",
grepl('inspired', bible) ~ "inspired",
grepl('literal', bible) ~ "literal"),
bible_wt = factor(bible_wt,
levels = c("literal", "inspired", "fables")),
spanking_wt = factor(spanking,
levels = c("Strongly disagree",
"Disagree",
"Agree",
"Strongly agree")),
trust_wt = factor(trust,
levels = c("Can't be too careful", "Can trust people"))
)
# We don't need to get rid of anyone who had NA on any one weighting variable, but we can exclude people who have NAs on all weighting variables.
#It turns out that none of the attentive respondents are missing on all of the weighting variables in this study.
survey_wt<-subset(survey_wt, is.na(age_wt) == FALSE|is.na(sex_wt) == FALSE|is.na(education_wt) == FALSE|is.na(income_wt) == FALSE|is.na(race_wt) == FALSE|is.na(region_wt) == FALSE|is.na(division_wt) == FALSE|is.na(party_wt) == FALSE|is.na(bible_wt) == FALSE|is.na(spanking_wt) == FALSE|is.na(trust_wt) == FALSE)
How large are each of the categories on which we are weighting? Small cells will result in large weights. It is possible to reduce the mean squared error by collapsing small categories together (Battaglia et al., 2009). Although treating distinct categories as the same risks biasing estimates, categories with few responses do not carry enough signal to yield a stable estimate anyway. Heuristic minimums like 50 responses per cell are common. Rather than use a numerical cut-off, which we suspected would be insufficient for keeping the design effect manageable, we compared the proportions in the sample and in the U.S. population. We collapsed categories in cases where (a) there were large discrepancies due to the observed category being a much smaller proportion than in the population, (b) the observed category was small, and (c) we felt that collapsing categories would not induce considerable bias, given our priors about how the weighting variables relate to rodenticide attitudes.
Below we compare our sample proportions to our weighting targets, which are based on 2020 American Community Survey 5-year estimates, 2018 and 2021 General Social Survey Data, and Gallup data. For information on how we created our weighting targets, see WeightingTargets.R.
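Each of the comparisons below follows the same recipe: tabulate the sample proportions of a weighting variable and stack them on top of the national proportions. A hypothetical helper that captures this pattern (not used in the code below, which writes each comparison out explicitly) would look like this:
# hypothetical helper illustrating the repeated sample-vs-national comparison
# pattern; `var` is a factor column of survey_wt and `national` is a named
# vector of population proportions in the same category order
compare_to_national <- function(var, national) {
  sample_prop <- table(var)[names(national)] / length(var)
  rbind(Sample = sample_prop, National = national)
}
# e.g., compare_to_national(survey_wt$trust_wt,
#                           c("Can't be too careful" = 0.669, "Can trust people" = 0.331))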
###compare sample proportions of race to national proportions
race_nr <- matrix(c(0.630, 0.161, 0.0313, 0.120, 0.0573), ncol = 5)
colnames(race_nr)<-c("White or Caucasian", "Hispanic or Latino", "Other",
"Black or African American", "Asian or Asian American")
race_nr<-rbind(table(survey_wt$race_wt)/length(survey_wt$race_wt), race_nr)
row.names(race_nr) <- c("Sample", "National")
race_nr #no need to collapse categories
## White or Caucasian Hispanic or Latino Other
## Sample 0.7032847 0.1025547 0.0379562
## National 0.6300000 0.1610000 0.0313000
## Black or African American Asian or Asian American
## Sample 0.08065693 0.07554745
## National 0.12000000 0.05730000
###compare sample proportions of sex to national proportions
sex_nr <- matrix(c(0.513, 0.487, 0), ncol = 3)
colnames(sex_nr)<-c('Female', 'Male', 'Other identification')
sex_nr<-rbind(table(survey_wt$sex)/length(survey_wt$sex_wt), sex_nr)
row.names(sex_nr) <- c("Sample", "National")
sex_nr #no need to collapse categories, though other identification will be ignored in the weighting because the American Community Survey data only has options for Female and Male
## Female Male Other identification
## Sample 0.4992701 0.4810219 0.01970803
## National 0.5130000 0.4870000 0.00000000
###compare sample proportions of age groups to national proportions
age_nr <- matrix(c(0.120, 0.179, 0.164, 0.164, 0.166, 0.207), ncol = 6)
colnames(age_nr)<-c("18-24","25-34","35-44", "45-54","55-64", "65+")
age_nr<-rbind(table(survey_wt$age_wt)/length(survey_wt$age_wt), age_nr)
row.names(age_nr) <- c("Sample", "National")
age_nr #18-24 and 25-34 are considerably over-represented, but this will just result in downweighting. Potentially more problematic is the small percentage of 65+ respondents relative to their population share.
## 18-24 25-34 35-44 45-54 55-64 65+
## Sample 0.1605839 0.3251825 0.2335766 0.1218978 0.09890511 0.05985401
## National 0.1200000 0.1790000 0.1640000 0.1640000 0.16600000 0.20700000
#Since 55-64 is also underrepresented, we collapse the two categories together.
survey_wt$age_co<-as.character(survey_wt$age_wt)
survey_wt$age_co[survey_wt$age_co == "55-64"] <- "55+"
survey_wt$age_co[survey_wt$age_co == "65+"] <- "55+"
survey_wt$age_co = factor(survey_wt$age_co,
levels = c("18-24","25-34","35-44", "45-54","55+"))
###compare sample proportions of educational attainment to national proportions
education_nr <- matrix(c(0.112, 0.190, 0.306, 0.275, 0.116), ncol = 5)
colnames(education_nr)<-c('Completed graduate school', 'Graduated from college',
'Some college, no degree', 'Graduated from high school',
'Less than high school')
education_nr<-rbind(table(survey_wt$education_wt)/length(survey_wt$education_wt), education_nr)
row.names(education_nr) <- c("Sample", "National")
education_nr # Combine lower two categories
## Completed graduate school Graduated from college
## Sample 0.1510949 0.380292
## National 0.1120000 0.190000
## Some college, no degree Graduated from high school
## Sample 0.3135036 0.1364964
## National 0.3060000 0.2750000
## Less than high school
## Sample 0.01861314
## National 0.11600000
#collapsing "Graduated from high school" and "Less than high school" into "Less than college"
survey_wt$education_co<-as.character(survey_wt$education_wt)
survey_wt$education_co[survey_wt$education_co == "Graduated from high school"] <- "Less than college"
survey_wt$education_co[survey_wt$education_co == "Less than high school"] <- "Less than college"
survey_wt$education_co = factor(survey_wt$education_co,
levels = c('Completed graduate school',
'Graduated from college',
'Some college, no degree',
'Less than college'))
###compare sample proportions of income bracket to national proportions
income_nr <- matrix(c(0.0965,0.214,0.203,0.110,0.183,0.194), ncol = 6)
colnames(income_nr)<-c('Under $20,000','Between $20,000 and $49,999','Between $50,000 and $79,999','Between $80,000 and $99,999', 'Between $100,000 and $150,000','Over $150,000')
income_nr<-rbind(table(survey_wt$income_wt)/length(survey_wt$income_wt), income_nr)
row.names(income_nr) <- c("Sample", "National")
income_nr #150k+ will likely increase design effect, but the issue is not severe.
## Under $20,000 Between $20,000 and $49,999 Between $50,000 and $79,999
## Sample 0.1394161 0.2806569 0.2565693
## National 0.0965000 0.2140000 0.2030000
## Between $80,000 and $99,999 Between $100,000 and $150,000
## Sample 0.1025547 0.1427007
## National 0.1100000 0.1830000
## Over $150,000
## Sample 0.07810219
## National 0.19400000
###compare sample divisions to national proportions
division_nr <- matrix(c(0.144,0.0584,0.128,0.0740,0.0470,0.163,0.202,0.0646,0.119), ncol = 9)
colnames(division_nr)<-c('East North Central',
'East South Central',
'Middle Atlantic',
'Mountain',
'New England',
'Pacific',
'South Atlantic',
'West North Central',
'West South Central')
division_nr<-rbind(table(survey_wt$division_wt)/length(survey_wt$division_wt), division_nr)
row.names(division_nr) <- c("Sample", "National")
division_nr #looks good
## East North Central East South Central Middle Atlantic Mountain
## Sample 0.1489051 0.05583942 0.1270073 0.05729927
## National 0.1440000 0.05840000 0.1280000 0.07400000
## New England Pacific South Atlantic West North Central
## Sample 0.04781022 0.1551095 0.2324818 0.05583942
## National 0.04700000 0.1630000 0.2020000 0.06460000
## West South Central
## Sample 0.119708
## National 0.119000
##We would not collapse party affiliation categories in any case, so we don't examine the sample-national comparison here.
###compare sample proportions of bible interpretation to national proportions
bible_nr <- matrix(c(0.310, 0.473, 0.217), ncol = 3)
colnames(bible_nr)<-c("literal", "inspired", "fables")
bible_nr<-rbind(table(survey_wt$bible_wt)/length(survey_wt$bible_wt), bible_nr)
row.names(bible_nr) <- c("Sample", "National")
bible_nr # need to combine literal and inspired. belief that bible stories are fables will be heavily downweighted.
## literal inspired fables
## Sample 0.08175182 0.3711679 0.5470803
## National 0.31000000 0.4730000 0.2170000
#collapsing "bible" into "believe" (literal, inspired) and "fables"
survey_wt$bible_co<-as.character(survey_wt$bible_wt)
survey_wt$bible_co[survey_wt$bible_co == "literal"] <- "believe"
survey_wt$bible_co[survey_wt$bible_co == "inspired"] <- "believe"
survey_wt$bible_co = factor(survey_wt$bible_co,
levels = c("believe",
"fables"))
###compare sample proportions of spanking attitudes to national proportions
spanking_nr <- matrix(c(0.170, 0.308, 0.364, 0.158), ncol = 4)
colnames(spanking_nr)<-c("Strongly disagree","Disagree","Agree","Strongly agree")
spanking_nr<-rbind(table(survey_wt$spanking_wt)/length(survey_wt$spanking_wt), spanking_nr)
row.names(spanking_nr) <- c("Sample", "National")
spanking_nr #combine into disagree and agree: "strongly agree" category is poorly represented
## Strongly disagree Disagree Agree Strongly agree
## Sample 0.3762774 0.3357664 0.2430657 0.04489051
## National 0.1700000 0.3080000 0.3640000 0.15800000
# collapsing "spanking" into agree and disagree
survey_wt$spanking_co<-as.character(survey_wt$spanking_wt)
survey_wt$spanking_co[survey_wt$spanking_co == "Strongly disagree"] <- "Disagree"
survey_wt$spanking_co[survey_wt$spanking_co == "Strongly agree"] <- "Agree"
survey_wt$spanking_co = factor(survey_wt$spanking_co,
levels = c("Disagree",
"Agree"))
###compare sample proportions of generalized trust to national proportions
trust_nr <- matrix(c(0.669, 0.331), ncol = 2)
colnames(trust_nr)<-c("Can't be too careful", "Can trust people")
trust_nr<-rbind(table(survey_wt$trust_wt)/length(survey_wt$trust_wt), trust_nr)
row.names(trust_nr) <- c("Sample", "National")
trust_nr #no need to collapse categories
## Can't be too careful Can trust people
## Sample 0.6105839 0.3894161
## National 0.6690000 0.3310000
Now we set up the weighting targets. In cases where we collapsed categories, we comment out the original categories and replace them with the new, collapsed categories.
target <- list(
race_wt = wpct(c("White or Caucasian", "Hispanic or Latino", "Other",
"Black or African American", "Asian or Asian American"),
c(0.630, 0.161, 0.0313, 0.120, 0.0573)),
sex_wt = wpct(c('Female', 'Male'),
c(0.513, 0.487)),
# age_wt = wpct(c("18-24","25-34","35-44", "45-54","55-64", "65+"),
# c(0.120, 0.179, 0.164, 0.164, 0.166, 0.207)),
age_co = wpct(c("18-24","25-34","35-44", "45-54","55+"),
c(0.120, 0.179, 0.164, 0.164, 0.373)),
# education_wt = wpct(c('Completed graduate school', 'Graduated from college',
# 'Some college, no degree', 'Graduated from high school',
# 'Less than high school'),
# c(0.112, 0.190, 0.306, 0.275, 0.116)),
education_co = wpct(c('Completed graduate school', 'Graduated from college',
'Some college, no degree', 'Less than college'),
c(0.112, 0.190, 0.306, 0.391)),#combining Graduated from high school and Less than high school
income_wt = wpct(c('Under $20,000',
'Between $20,000 and $49,999',
'Between $50,000 and $79,999',
'Between $80,000 and $99,999',
'Between $100,000 and $150,000',
'Over $150,000'),
c(0.0965,
0.214,
0.203,
0.110,
0.183,
0.194)),
division_wt = wpct(c('East North Central',
'East South Central',
'Middle Atlantic',
'Mountain',
'New England',
'Pacific',
'South Atlantic',
'West North Central',
'West South Central'),
c(0.144,
0.0584,
0.128,
0.0740,
0.0470,
0.163,
0.202,
0.0646,
0.119)),
party_wt = wpct(c("Republican", "Independent", "Democrat"),
c(0.309, 0.443, 0.247)),
#spanking_wt = wpct(c("Strongly disagree",
# "Disagree",
# "Agree",
# "Strongly agree"),
# c(0.170,
# 0.308,
# 0.364,
# 0.158)),
spanking_co = wpct(c("Disagree",
"Agree"),
c(0.478, 0.522)), # combining "Strongly disagree" with "Disagree", and "Strongly agree" with "Agree"
#bible_wt = wpct(c("literal", "inspired", "fables"),
# c(0.310, 0.473, 0.217)),
bible_co = wpct(c("believe", "fables"), # combining "literal" and "inspired" into "believe".
c(0.783, 0.217)),
trust_wt = wpct(c("Can't be too careful", "Can trust people"),
c(0.669, 0.331)))
Finally, it is time to create the sampling weights. We capped the weights at 10 because that was the lowest cap we could use while still having the algorithm converge without issues. The design effect is 4.25, higher than the design effect of 3 that formed the basis of our power analysis.
survey_wt$id <- 1:nrow(survey_wt)
survey_wt_raking <- anesrake(inputter = target,
data = as.data.frame(survey_wt),
caseid = survey_wt$id,
cap = 10, # Maximum allowed weight per iteration. Trying to find the smallest number we can while achieving convergence
choosemethod = "total", # How are parameters compared for selection?
type = "pctlim", # What selection criterion is used?
pctlim = 0.05, # Threshold for selection, currently at 5% diff between sample and target
nlim = 15,
maxit = 10000,
force1 = TRUE)
## [1] "Raking converged in 58 iterations"
summary(survey_wt_raking) # design effect due to unequal weighting (Kish, 1992) is 4.25
## $convergence
## [1] "Complete convergence was achieved after 58 iterations"
##
## $base.weights
## [1] "No Base Weights Were Used"
##
## $raking.variables
## [1] "race_wt" "age_co" "education_co" "income_wt" "division_wt"
## [6] "party_wt" "spanking_co" "bible_co" "trust_wt"
##
## $weight.summary
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.005264 0.102999 0.310781 1.000000 0.997982 10.000078
##
## $selection.method
## [1] "variable selection conducted using _pctlim_ - discrepancies selected using _total_."
##
## $general.design.effect
## [1] 4.250018
##
## $race_wt
## Target Unweighted N Unweighted % Wtd N
## White or Caucasian 0.63025210 1927 0.70328467 1726.89076
## Hispanic or Latino 0.16106443 281 0.10255474 441.31653
## Other 0.03131253 104 0.03795620 85.79632
## Black or African American 0.12004802 221 0.08065693 328.93157
## Asian or Asian American 0.05732293 207 0.07554745 157.06483
## Total 1.00000000 2740 1.00000000 2740.00000
## Wtd % Change in % Resid. Disc. Orig. Disc.
## White or Caucasian 0.63025210 -0.073032571 0.000000e+00 -0.073032571
## Hispanic or Latino 0.16106443 0.058509681 2.775558e-17 0.058509681
## Other 0.03131253 -0.006643679 0.000000e+00 -0.006643679
## Black or African American 0.12004802 0.039391085 -1.387779e-17 0.039391085
## Asian or Asian American 0.05732293 -0.018224516 0.000000e+00 -0.018224516
## Total 1.00000000 0.195801532 4.163336e-17 0.195801532
##
## $sex_wt
## Target Unweighted N Unweighted % Wtd N Wtd % Change in %
## Female 0.513 1368 0.5093075 1395.575 0.5129685 0.003660982
## Male 0.487 1318 0.4906925 1325.011 0.4870315 -0.003660982
## Total 1.000 2686 1.0000000 2720.585 1.0000000 0.007321963
## Resid. Disc. Orig. Disc.
## Female 3.149793e-05 0.003692480
## Male -3.149793e-05 -0.003692480
## Total 6.299585e-05 0.007384959
##
## $age_co
## Target Unweighted N Unweighted % Wtd N Wtd % Change in % Resid. Disc.
## 18-24 0.120 440 0.1605839 328.80 0.120 -0.04058394 0.000000e+00
## 25-34 0.179 891 0.3251825 490.46 0.179 -0.14618248 0.000000e+00
## 35-44 0.164 640 0.2335766 449.36 0.164 -0.06957664 -2.775558e-17
## 45-54 0.164 334 0.1218978 449.36 0.164 0.04210219 -2.775558e-17
## 55+ 0.373 435 0.1587591 1022.02 0.373 0.21424088 0.000000e+00
## Total 1.000 2740 1.0000000 2740.00 1.000 0.51268613 5.551115e-17
## Orig. Disc.
## 18-24 -0.04058394
## 25-34 -0.14618248
## 35-44 -0.06957664
## 45-54 0.04210219
## 55+ 0.21424088
## Total 0.51268613
##
## $education_co
## Target Unweighted N Unweighted % Wtd N
## Completed graduate school 0.1121121 414 0.1510949 307.1872
## Graduated from college 0.1901902 1042 0.3802920 521.1211
## Some college, no degree 0.3063063 859 0.3135036 839.2793
## Less than college 0.3913914 425 0.1551095 1072.4124
## Total 1.0000000 2740 1.0000000 2740.0000
## Wtd % Change in % Resid. Disc. Orig. Disc.
## Completed graduate school 0.1121121 -0.038982778 -1.387779e-17 -0.038982778
## Graduated from college 0.1901902 -0.190101781 -2.775558e-17 -0.190101781
## Some college, no degree 0.3063063 -0.007197343 0.000000e+00 -0.007197343
## Less than college 0.3913914 0.236281902 5.551115e-17 0.236281902
## Total 1.0000000 0.472563805 9.714451e-17 0.472563805
##
## $income_wt
## Target Unweighted N Unweighted % Wtd N
## Under $20,000 0.09645177 382 0.13941606 264.2779
## Between $20,000 and $49,999 0.21389305 769 0.28065693 586.0670
## Between $50,000 and $79,999 0.20289855 703 0.25656934 555.9420
## Between $80,000 and $99,999 0.10994503 281 0.10255474 301.2494
## Between $100,000 and $150,000 0.18290855 391 0.14270073 501.1694
## Over $150,000 0.19390305 214 0.07810219 531.2944
## Total 1.00000000 2740 1.00000000 2740.0000
## Wtd % Change in % Resid. Disc. Orig. Disc.
## Under $20,000 0.09645177 -0.042964284 1.387779e-17 -0.042964284
## Between $20,000 and $49,999 0.21389305 -0.066763881 2.775558e-17 -0.066763881
## Between $50,000 and $79,999 0.20289855 -0.053670792 2.775558e-17 -0.053670792
## Between $80,000 and $99,999 0.10994503 0.007390283 0.000000e+00 0.007390283
## Between $100,000 and $150,000 0.18290855 0.040207816 0.000000e+00 0.040207816
## Over $150,000 0.19390305 0.115800859 0.000000e+00 0.115800859
## Total 1.00000000 0.326797915 6.938894e-17 0.326797915
##
## $division_wt
## Target Unweighted N Unweighted % Wtd N Wtd %
## East North Central 0.1440 408 0.14890511 394.560 0.1440
## East South Central 0.0584 153 0.05583942 160.016 0.0584
## Middle Atlantic 0.1280 348 0.12700730 350.720 0.1280
## Mountain 0.0740 157 0.05729927 202.760 0.0740
## New England 0.0470 131 0.04781022 128.780 0.0470
## Pacific 0.1630 425 0.15510949 446.620 0.1630
## South Atlantic 0.2020 637 0.23248175 553.480 0.2020
## West North Central 0.0646 153 0.05583942 177.004 0.0646
## West South Central 0.1190 328 0.11970803 326.060 0.1190
## Total 1.0000 2740 1.00000000 2740.000 1.0000
## Change in % Resid. Disc. Orig. Disc.
## East North Central -0.0049051095 0.000000e+00 -0.0049051095
## East South Central 0.0025605839 0.000000e+00 0.0025605839
## Middle Atlantic 0.0009927007 2.775558e-17 0.0009927007
## Mountain 0.0167007299 0.000000e+00 0.0167007299
## New England -0.0008102190 -6.938894e-18 -0.0008102190
## Pacific 0.0078905109 -2.775558e-17 0.0078905109
## South Atlantic -0.0304817518 0.000000e+00 -0.0304817518
## West North Central 0.0087605839 0.000000e+00 0.0087605839
## West South Central -0.0007080292 0.000000e+00 -0.0007080292
## Total 0.0738102190 6.245005e-17 0.0738102190
##
## $party_wt
## Target Unweighted N Unweighted % Wtd N Wtd % Change in %
## Republican 0.3093093 456 0.1664234 847.5075 0.3093093 0.1428860
## Independent 0.4434434 935 0.3412409 1215.0350 0.4434434 0.1022026
## Democrat 0.2472472 1349 0.4923358 677.4575 0.2472472 -0.2450885
## Total 1.0000000 2740 1.0000000 2740.0000 1.0000000 0.4901770
## Resid. Disc. Orig. Disc.
## Republican -5.551115e-17 0.1428860
## Independent 0.000000e+00 0.1022026
## Democrat 0.000000e+00 -0.2450885
## Total 5.551115e-17 0.4901770
##
## $spanking_co
## Target Unweighted N Unweighted % Wtd N Wtd % Change in %
## Disagree 0.478 1951 0.7120438 1309.72 0.478 -0.2340438
## Agree 0.522 789 0.2879562 1430.28 0.522 0.2340438
## Total 1.000 2740 1.0000000 2740.00 1.000 0.4680876
## Resid. Disc. Orig. Disc.
## Disagree 1.110223e-16 -0.2340438
## Agree -1.110223e-16 0.2340438
## Total 2.220446e-16 0.4680876
##
## $bible_co
## Target Unweighted N Unweighted % Wtd N Wtd % Change in % Resid. Disc.
## believe 0.783 1241 0.4529197 2145.42 0.783 0.3300803 0.000000e+00
## fables 0.217 1499 0.5470803 594.58 0.217 -0.3300803 2.775558e-17
## Total 1.000 2740 1.0000000 2740.00 1.000 0.6601606 2.775558e-17
## Orig. Disc.
## believe 0.3300803
## fables -0.3300803
## Total 0.6601606
##
## $trust_wt
## Target Unweighted N Unweighted % Wtd N Wtd % Change in %
## Can't be too careful 0.669 1673 0.6105839 1833.06 0.669 0.05841606
## Can trust people 0.331 1067 0.3894161 906.94 0.331 -0.05841606
## Total 1.000 2740 1.0000000 2740.00 1.000 0.11683212
## Resid. Disc. Orig. Disc.
## Can't be too careful 0 0.05841606
## Can trust people 0 -0.05841606
## Total 0 0.11683212
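As a sanity check on the reported design effect, the Kish (1992) approximation can be recomputed directly from the weight vector (a sketch; anesrake already reports this value as $general.design.effect in the summary above):
# sanity check (sketch): Kish (1992) design effect due to unequal weighting,
# n * sum(w^2) / sum(w)^2, which should reproduce the ~4.25 reported above
w <- survey_wt_raking$weightvec
length(w) * sum(w^2) / sum(w)^2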
#attach weights to dataset
survey_wt$weight <- survey_wt_raking$weightvec
#fwrite(survey_wt, "rodenticide_closeended_11_17_2022.csv") # save dataset with weights
Before running analyses, we recode some variables so that they can be entered into ordinal regressions. This recoding needs to be done before the sampling design object is created. After that, we store the sampling design in an object that will feed into all of the weighted analyses.
#collapse attitude towards state ban into support/oppose
survey_wt$ban_ao[survey_wt$state_ban == "Strongly oppose"] <- "Oppose"
## Warning: Unknown or uninitialised column: `ban_ao`.
survey_wt$ban_ao[survey_wt$state_ban == "Oppose"] <- "Oppose"
survey_wt$ban_ao[survey_wt$state_ban == "Neither support nor oppose"] <- "Neither"
survey_wt$ban_ao[survey_wt$state_ban == "Support"] <- "Approve"
survey_wt$ban_ao[survey_wt$state_ban == "Strongly support"] <- "Approve"
#create binary have/don't have animals
survey_wt$animals_1<-as.numeric(survey_wt$animals_1)
survey_wt$animals_2<-as.numeric(survey_wt$animals_2)
survey_wt$animals_3<-as.numeric(survey_wt$animals_3)
survey_wt$animals_4<-as.numeric(survey_wt$animals_4)
survey_wt$animals<-ifelse((survey_wt$animals_1+survey_wt$animals_2+survey_wt$animals_3+survey_wt$animals_4) >0, 1, 0)
#dummy code political party
survey_wt$democrat[survey_wt$party_wt == "Democrat"]<-1
## Warning: Unknown or uninitialised column: `democrat`.
survey_wt$democrat[survey_wt$party_wt == "Republican"]<-0
survey_wt$democrat[survey_wt$party_wt == "Independent"]<-0
survey_wt$independent[survey_wt$party_wt == "Democrat"]<-0
## Warning: Unknown or uninitialised column: `independent`.
survey_wt$independent[survey_wt$party_wt == "Republican"]<-0
survey_wt$independent[survey_wt$party_wt == "Independent"]<-1
#make education numeric
survey_wt$education_num[survey_wt$education_co == "Less than college"] <- 0
## Warning: Unknown or uninitialised column: `education_num`.
survey_wt$education_num[survey_wt$education_co == "Some college, no degree"] <- 1
survey_wt$education_num[survey_wt$education_co == "Graduated from college"] <- 2
survey_wt$education_num[survey_wt$education_co == "Completed graduate school"] <- 3
#make income numeric
survey_wt$income_num[survey_wt$income_wt == "Under $20,000"]<- 0
## Warning: Unknown or uninitialised column: `income_num`.
survey_wt$income_num[survey_wt$income_wt == "Between $20,000 and $49,999"]<- 1
survey_wt$income_num[survey_wt$income_wt == "Between $50,000 and $79,999"]<- 2
survey_wt$income_num[survey_wt$income_wt == "Between $80,000 and $99,999"]<- 3
survey_wt$income_num[survey_wt$income_wt == "Between $100,000 and $150,000"]<- 4
survey_wt$income_num[survey_wt$income_wt == "Over $150,000"]<- 4 # both top brackets coded as $100,000 and above
#make age numeric
survey_wt$age_num[survey_wt$age_wt == "18-24"]<- 0
## Warning: Unknown or uninitialised column: `age_num`.
survey_wt$age_num[survey_wt$age_wt == "25-34" | survey_wt$age_wt == "35-44"]<- 1
survey_wt$age_num[survey_wt$age_wt == "45-54" | survey_wt$age_wt == "55-64"]<- 2
survey_wt$age_num[survey_wt$age_wt == "65+"]<- 3
#make gender dummy variables
survey_wt$female[survey_wt$sex == "Female"]<- 1
## Warning: Unknown or uninitialised column: `female`.
survey_wt$female[survey_wt$sex == "Male"]<- 0
survey_wt$female[survey_wt$sex == "Other identification"]<- 0
survey_wt$otheri[survey_wt$sex == "Female"]<- 0
## Warning: Unknown or uninitialised column: `otheri`.
survey_wt$otheri[survey_wt$sex == "Male"]<- 0
survey_wt$otheri[survey_wt$sex == "Other identification"]<- 1
#create biden approval/disapproval/don't know variable
survey_wt$biden[survey_wt$pres_approve_nonbin == "Somewhat approve"] <- "Approve"
## Warning: Unknown or uninitialised column: `biden`.
survey_wt$biden[survey_wt$pres_approve_nonbin == "Strongly approve"] <- "Approve"
survey_wt$biden[survey_wt$pres_approve_nonbin == "Somewhat disapprove"] <- "Disapprove"
survey_wt$biden[survey_wt$pres_approve_nonbin == "Strongly disapprove"] <- "Disapprove"
survey_wt$biden[survey_wt$pres_approve_nonbin == "Don't know / No opinion"] <- "DK"
#collapse support and strongly support in direct condition so it can be directly compared to indirect condition
survey_wt$state_ban_supportall<-ifelse(survey_wt$state_ban == "Support"|survey_wt$state_ban == "Strongly support", "Support", "Does Not Support")
survey_wt$state_lidlock_supportall<-ifelse(survey_wt$state_lidlock == "Support"|survey_wt$state_lidlock == "Strongly support", "Support", "Does Not Support")
survey_wt$state_preemp_supportall<-ifelse(survey_wt$state_preemp == "Support"|survey_wt$state_preemp == "Strongly support", "Support", "Does Not Support")
survey_wt$town_elim_supportall<-ifelse(survey_wt$town_elim == "Support"|survey_wt$town_elim == "Strongly support", "Support", "Does Not Support")
survey_wt$state_cons_supportall<-ifelse(survey_wt$state_cons == "Support"|survey_wt$state_cons == "Strongly support", "Support", "Does Not Support")
#make state ban a factor variable for olr
survey_wt$state_ban <- as.factor(survey_wt$state_ban)
survey_wt$state_ban <-factor(survey_wt$state_ban,
levels = c("Strongly oppose", "Oppose", "Neither support nor oppose", "Support","Strongly support"))
#make state lidlock a factor variable for olr
survey_wt$state_lidlock <- as.factor(survey_wt$state_lidlock)
survey_wt$state_lidlock <-factor(survey_wt$state_lidlock,
levels = c("Strongly oppose", "Oppose", "Neither support nor oppose", "Support","Strongly support"))
#make town elim a factor variable for olr
survey_wt$town_elim <- as.factor(survey_wt$town_elim)
survey_wt$town_elim <-factor(survey_wt$town_elim,
levels = c("Strongly oppose", "Oppose", "Neither support nor oppose", "Support","Strongly support"))
#make consent form a factor variable for olr
survey_wt$state_cons <- as.factor(survey_wt$state_cons)
survey_wt$state_cons <-factor(survey_wt$state_cons,
levels = c("Strongly oppose", "Oppose", "Neither support nor oppose", "Support","Strongly support"))
#make state preemption a factor variable for olr
survey_wt$state_preemp <- as.factor(survey_wt$state_preemp)
survey_wt$state_preemp <-factor(survey_wt$state_preemp,
levels = c("Strongly oppose", "Oppose", "Neither support nor oppose", "Support","Strongly support"))
#sampling design
sample_design = svydesign(ids=~1,#no clustering
weights=~weight,
data=survey_wt)
When there are discrepancies between weighted and unweighted estimates, how much more should we trust the weighted estimates? On the face of it, the weighted analyses should be much more trustworthy, as we know that the weighted distributions of the demographic variables that we weighted on match the distributions of United States adults. On the other hand, there may be other variables we did not weight on that better explain why our unweighted sample would return different estimates than a random sample. Furthermore, the raking procedure we used to create the weights does not account for any potential interactions among the weighting variables.
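One way to see what the raking does and does not guarantee is to inspect a weighted joint margin that was never targeted; the particular pair of variables below is arbitrary and purely illustrative:
# illustrative only: raking matches each targeted margin separately, but a
# two-way margin like this one was never targeted, so it is not guaranteed to
# match its population counterpart
round(prop.table(svytable(~party_wt + education_co, sample_design)), 3)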
One imperfect way of testing how much weighting helped us achieve a representative sample is to benchmark a sample estimate against another, arguably more trustworthy estimate. We asked respondents whether they approved of Joe Biden’s performance as President, and compared our unweighted and weighted estimates to FiveThirtyEight’s estimate as of September 1, 2022. This is an imperfect test: FiveThirtyEight’s estimate may itself be biased, and the biases that affect estimation of presidential job approval may differ from the biases that affect estimation of rodenticide attitudes. Nevertheless, benchmarking against presidential job approval does provide one external indicator of how even our weighted sample likely differs in its attitudes from the U.S. as a whole.
The outcome of this exercise is mixed. The weighted analyses overcorrect for the sample’s underestimation of disapproval of Biden, although the absolute discrepancy is smaller. The absolute discrepancy actually gets larger for the approval estimate, as the weighted estimate overcorrects for the sample’s overestimate of approval. Finally, the weights exacerbate an already inflated estimate of Don’t Know responses. So, at least when it comes to presidential approval rating, it is not clear that our weighted analyses are less biased than our unweighted analyses. The main difference is that the weighted analyses markedly upweight the responses of those who do not approve of Biden. Insofar as we suspect that opposition to rodenticide use is more common among the political left (which would be consistent with other areas of animal welfare; e.g., see McKendree et al., 2014), we can at least be confident that our weighted estimates are not overestimating support for anti-rodenticide legislation.
president_nr <- matrix(c(.427, .530, .043), ncol = 3)
president_nrb<-rbind(president_nr,table(survey_wt$biden)/length(survey_wt$biden), svytable(~biden, sample_design)/length(survey_wt$biden))
president_nrb<- rbind(president_nrb, president_nrb[2, ] - president_nrb[1, ])
president_nrb<- rbind(president_nrb, president_nrb[3, ] - president_nrb[1, ])
row.names(president_nrb) <- c("FiveThirtyEight", "Unweighted", "Weighted","Unweighted Discrepancy" , "Weighted Discrepancy")
president_nrb #point estimates from unweighted, weighted, and FiveThirtyEight Estimates
## Approve Disapprove DK
## FiveThirtyEight 0.42700000 0.53000000 0.04300000
## Unweighted 0.50291971 0.42810219 0.06897810
## Weighted 0.30837657 0.59426831 0.09735512
## Unweighted Discrepancy 0.07591971 -0.10189781 0.02597810
## Weighted Discrepancy -0.11862343 0.06426831 0.05435512
Finally, we are ready to examine support for various types of legislation that would result in less rodenticide use.
First, we examine the effect of weighting and social desirability on the percentage of respondents who support a rodenticide ban in their own state. We find very little effect of weighting in the indirect condition, but a ~5 percentage point difference in the direct condition. Asking about state-level bans does not appear to elicit social desirability concerns, as the indirect estimate was actually slightly higher than the direct estimate in the weighted data.
Overall, we estimate that just under a third of U.S. adults would support a rodenticide ban in their own state. Note that we asked respondents about an outright ban, whereas the California ban and the ban in British Columbia both have exemptions, including for the agriculture and food processing industries. Perhaps we would have observed more support for a ban had we spelled out that, whether or not anti-rodenticide advocates like it, state-level bans on rodenticides do not apply to the cases in which one might worry it is most important to deal with a rodent problem very quickly.
#fetching unweighted, indirect estimate
survey_wt$prop_state_ban_1<-as.numeric(survey_wt$prop_state_ban_1)
ind_ban_un<-psych::describe(survey_wt$prop_state_ban_1)
ind_ban_un<-c(ind_ban_un$mean,ind_ban_un$mean -(ind_ban_un$se*1.96), ind_ban_un$mean +(ind_ban_un$se*1.96)) # mean, LCL, UCL
#weighted, indirect estimate
ind_ban_wei<-svymean(~survey_wt$prop_state_ban_1, sample_design, na.rm = TRUE)
ind_ban_wei<-c(ind_ban_wei,confint(ind_ban_wei, level = 0.95))# mean, LCL, UCL
#unweighted, direct estimate
direct_ban_un<-prop.test(table(survey_wt$state_ban_supportall)[2], table(survey_wt$state_ban_supportall)[2]+ table(survey_wt$state_ban_supportall)[1])
direct_ban_un<-c(direct_ban_un$estimate, direct_ban_un$conf.int) #mean, LCL, UCL
direct_ban_un<-direct_ban_un*100 # convert into percentages
#weighted, direct
direct_ban_wei<-svyciprop(~I(state_ban_supportall == "Support"), sample_design)
direct_ban_wei<-cbind(direct_ban_wei[1], confint(direct_ban_wei)[1],confint(direct_ban_wei)[2])
direct_ban_wei<-direct_ban_wei*100 # convert into percentages
percent_support_ban<-rbind(ind_ban_un, direct_ban_un, ind_ban_wei, direct_ban_wei)
colnames(percent_support_ban)<-c("Est.", "LCL", "UCL")
#percentage support, depending on weighting and directness
rownames(percent_support_ban)<- c("Indirect, Unweighted", "Direct, Unweighted", "Indirect, Weighted", "Direct, Weighted")
percent_support_ban
## Est. LCL UCL
## Indirect, Unweighted 37.14106 36.01938 38.26275
## Direct, Unweighted 37.37374 34.82944 39.98882
## Indirect, Weighted 36.94768 34.41822 39.47713
## Direct, Weighted 32.77079 28.04165 37.87766
Now we visualize the entirety of our weighted results. Note that although there is more opposition than support for a state-level ban, almost 18% of respondents are either ambivalent or simply do not yet have an opinion. (See Sturgis et al., 2014 for evidence that neutral responses more often represent a lack of opinion rather than an opinion in the middle.) If all of these respondents could be persuaded to support the ban, then there would be more support than opposition.
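The block below repeats the same svyciprop call for each of the five response categories. A hypothetical wrapper (not part of the original code) makes the pattern explicit:
# hypothetical wrapper for the repeated pattern below: weighted proportion and
# 95% CI for one response category of an outcome held in sample_design
wtd_prop_ci <- function(outcome, category, design = sample_design) {
  f <- as.formula(paste0("~I(", outcome, " == '", category, "')"))
  p <- svyciprop(f, design)
  c(Est. = as.numeric(p), confint(p))
}
# e.g., wtd_prop_ci("state_ban", "Strongly oppose")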
#Weighted
so_ban<-svyciprop(~I(state_ban == "Strongly oppose"), sample_design)
so_ban<-cbind(so_ban[1], confint(so_ban)[1],confint(so_ban)[2])
o_ban<-svyciprop(~I(state_ban == "Oppose"), sample_design)
o_ban<-cbind(o_ban[1], confint(o_ban)[1],confint(o_ban)[2])
nons_ban<-svyciprop(~I(state_ban == "Neither support nor oppose"), sample_design)
nons_ban<-cbind(nons_ban[1], confint(nons_ban)[1],confint(nons_ban)[2])
s_ban<-svyciprop(~I(state_ban == "Support"), sample_design)
s_ban<-cbind(s_ban[1], confint(s_ban)[1],confint(s_ban)[2])
ss_ban<-svyciprop(~I(state_ban == "Strongly support"), sample_design)
ss_ban<-cbind(ss_ban[1], confint(ss_ban)[1],confint(ss_ban)[2])
stateban_table<-rbind(setNames(as.data.frame(so_ban), c("Est.", "LCL", "UCL")),
setNames(as.data.frame(o_ban), c("Est.", "LCL", "UCL")),
setNames(as.data.frame(nons_ban), c("Est.", "LCL", "UCL")),
setNames(as.data.frame(s_ban), c("Est.", "LCL", "UCL")),
setNames(as.data.frame(ss_ban), c("Est.", "LCL", "UCL")))
stateban_table$Est.<-formattable::percent(stateban_table$Est., digits = 1)
stateban_table$LCL<-formattable::percent(stateban_table$LCL, digits = 1)
stateban_table$UCL<-formattable::percent(stateban_table$UCL, digits = 1)
stateban_table$Response<- c("Strongly Oppose", "Oppose", "Neither Support nor Oppose", "Support", "Strongly Support")
stateban_table<-stateban_table %>%relocate(Response)
table_stateban<-tibble(stateban_table)
gt_stateban<-gt(table_stateban)
gt_stateban %>%
tab_header(title = "More Opposition Than Support for State-Level Bans on Rodenticides",
subtitle = "Imagine the state you live in was considering banning the use of rodenticides for the entire state, even by licensed pest managers. All other forms of rodent control that are currently legal in your state would remain legal.
Would you strongly support, support, neither support nor oppose, oppose, or strongly oppose the state you live in banning the use of rodenticide?")%>%
tab_source_note(source_note = "Unweighted N = 1386. Weighted to be nationally representative of U.S. adults. LCL and UCL are the lower and upper limit of the 95% confidence interval, respectively.")%>%
tab_options(column_labels.font.weight = "bold", source_notes.font.size = px(10))
More Opposition Than Support for State-Level Bans on Rodenticides

Imagine the state you live in was considering banning the use of rodenticides for the entire state, even by licensed pest managers. All other forms of rodent control that are currently legal in your state would remain legal. Would you strongly support, support, neither support nor oppose, oppose, or strongly oppose the state you live in banning the use of rodenticide?

| Response | Est. | LCL | UCL |
|---|---|---|---|
| Strongly Oppose | 17.2% | 13.5% | 21.7% |
| Oppose | 32.4% | 27.6% | 37.6% |
| Neither Support nor Oppose | 17.6% | 13.8% | 22.1% |
| Support | 22.4% | 18.3% | 27.1% |
| Strongly Support | 10.4% | 7.8% | 13.8% |

Unweighted N = 1386. Weighted to be nationally representative of U.S. adults. LCL and UCL are the lower and upper limit of the 95% confidence interval, respectively.
What predicts responses to the legislation we asked about? We enter as predictors age, income, education, region of the country (the northeast is the reference category), party affiliation (Republican is the reference category), whether the respondent has an animal in their home, whether the respondent has hired a pest manager in the past 10 years to deal with a rodent problem, and whether the respondent has tried to deal with a rodent problem themselves in the past 10 years. We exponentiate the results so that they are on the odds ratio metric. All ordered items (including the outcome variable) were coded low to high, such that an odds ratio greater than 1 means that a one-unit increase in the predictor is associated with a greater likelihood of supporting the legislation, whereas an odds ratio lower than 1 means that a one-unit increase in the predictor is associated with a lower likelihood of supporting the legislation.
We counsel caution in interpreting the results of any of our regression models as unbiased causal estimates for the following reasons:
We did not test for any interactions or non-linear functional forms.
We may not have controlled for all of the variables that may confound the relationship between a given predictor and support for a state-level ban.
Odds ratios are non-collapsible (Cummings, 2009).
For simplicity, we make the proportional odds assumption that the effect of a given predictor is constant across different pairs of the outcome’s response categories. This assumption probably does not strictly hold, but even when it does not, one can interpret the results as providing the average odds ratio (Harrell, 2020). A rough way to probe the assumption is sketched below.
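An informal probe of the proportional odds assumption (a sketch under the assumption that broadly similar coefficients across cut points are reassuring; this was not part of the original analysis) is to compare weighted binary logistic fits at different dichotomizations of the outcome:
# sketch: dichotomize state_ban at two cut points (levels are ordered
# Strongly oppose = 1 ... Strongly support = 5) and compare exponentiated
# coefficients; large divergences would cast doubt on proportional odds
cut_neither <- svyglm(I(as.numeric(state_ban) >= 3) ~ democrat + independent,
                      design = sample_design, family = quasibinomial())
cut_support <- svyglm(I(as.numeric(state_ban) >= 4) ~ democrat + independent,
                      design = sample_design, family = quasibinomial())
round(cbind(neither_or_higher = exp(coef(cut_neither)),
            support_or_higher = exp(coef(cut_support))), 2)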
When using this model to predict attitudes toward a state ban, a few results stand out:
The midwest is the region that is least supportive of a ban, with a statistically significant difference from the northeast, the most supportive region.
The odds of support may be much higher for Democrats and Independents than for Republicans, significant effects that align with the general trend that Democrats are more supportive of regulations that promote animal welfare. That said, the confidence intervals are very wide, including values that would not lead one to conclude that the issue is hopelessly partisan.
Having dealt with a rodent problem on one’s own is statistically significant, but not in the expected direction: it is associated with greater support for a ban. Possibly, including this variable in the same model as having hired a pest manager to deal with a rodent problem partials out the negative association between having dealt with a rodent problem on one’s own and supporting a ban.
#ordinal logistic regression
olr_stateban<-svyolr(state_ban~ female+ otheri +age_num + income_num + education_num + region_wt + democrat + independent +animals + hired_pest + self_remove, sample_design)
tidy(olr_stateban, exponentiate = TRUE, conf.int = TRUE)
State preemption of pesticide use forbids local jurisdictions from passing stricter laws against pesticides than exist at the state level. Thus, a town could not ban rodenticide use on private property if rodenticide was legal at the state level. We again find that the direct estimates are actually lower than the indirect estimates. The weighted direct estimate is 7 percentage points lower than the unweighted direct estimate. Overall support is perhaps slightly higher for repealing state preemption than for a state ban, though this may just be sampling error. Either way, repealing state preemption is still not a majority position.
#fetching unweighted, indirect estimate
survey_wt$prop_state_preemp_1<-as.numeric(survey_wt$prop_state_preemp_1)
ind_preemp_un<-psych::describe(survey_wt$prop_state_preemp_1)
ind_preemp_un<-c(ind_preemp_un$mean,ind_preemp_un$mean -(ind_preemp_un$se*1.96), ind_preemp_un$mean +(ind_preemp_un$se*1.96)) # mean, LCL, UCL
#weighted, indirect estimate
ind_preemp_wei<-svymean(~survey_wt$prop_state_preemp_1, sample_design, na.rm = TRUE)
ind_preemp_wei<-c(ind_preemp_wei,confint(ind_preemp_wei, level = 0.95))# mean, LCL, UCL
#unweighted, direct estimate
direct_preemp_un<-prop.test(table(survey_wt$state_preemp_supportall)[2], table(survey_wt$state_preemp_supportall)[2]+ table(survey_wt$state_preemp_supportall)[1])
direct_preemp_un<-c(direct_preemp_un$estimate, direct_preemp_un$conf.int) #mean, LCL, UCL
direct_preemp_un<-direct_preemp_un*100 # convert into percentages
#weighted, direct
direct_preemp_wei<-svyciprop(~I(state_preemp_supportall == "Support"), sample_design)
direct_preemp_wei<-cbind(direct_preemp_wei[1], confint(direct_preemp_wei)[1],confint(direct_preemp_wei)[2])
direct_preemp_wei<-direct_preemp_wei*100 # convert into percentages
percent_support_preemp<-rbind(ind_preemp_un, direct_preemp_un, ind_preemp_wei, direct_preemp_wei)
colnames(percent_support_preemp)<-c("Est.", "LCL", "UCL")
#percentage support, depending on weighting and directness
rownames(percent_support_preemp)<- c("Indirect, Unweighted", "Direct, Unweighted", "Indirect, Weighted", "Direct, Weighted")
percent_support_preemp
## Est. LCL UCL
## Indirect, Unweighted 45.07755 43.93734 46.21776
## Direct, Unweighted 43.36219 40.73935 46.02223
## Indirect, Weighted 44.15912 41.67943 46.63880
## Direct, Weighted 36.62500 31.66805 41.88229
From looking at all five response categories, we see that there is roughly equal support for and opposition to repealing the state preemption law for pesticides in one’s own state. But the headline result is that the most common response is to neither support nor oppose repeal. There are a few reasons why we might have observed so much neutrality:
so_preemp<-svyciprop(~I(state_preemp == "Strongly oppose"), sample_design)
so_preemp<-cbind(so_preemp[1], confint(so_preemp)[1],confint(so_preemp)[2])
o_preemp<-svyciprop(~I(state_preemp == "Oppose"), sample_design)
o_preemp<-cbind(o_preemp[1], confint(o_preemp)[1],confint(o_preemp)[2])
nons_preemp<-svyciprop(~I(state_preemp == "Neither support nor oppose"), sample_design)
nons_preemp<-cbind(nons_preemp[1], confint(nons_preemp)[1],confint(nons_preemp)[2])
s_preemp<-svyciprop(~I(state_preemp == "Support"), sample_design)
s_preemp<-cbind(s_preemp[1], confint(s_preemp)[1],confint(s_preemp)[2])
ss_preemp<-svyciprop(~I(state_preemp == "Strongly support"), sample_design)
ss_preemp<-cbind(ss_preemp[1], confint(ss_preemp)[1],confint(ss_preemp)[2])
statepreemp_table<-rbind(setNames(as.data.frame(so_preemp), c("Est.", "LCL", "UCL")),
setNames(as.data.frame(o_preemp), c("Est.", "LCL", "UCL")),
setNames(as.data.frame(nons_preemp), c("Est.", "LCL", "UCL")),
setNames(as.data.frame(s_preemp), c("Est.", "LCL", "UCL")),
setNames(as.data.frame(ss_preemp), c("Est.", "LCL", "UCL")))
statepreemp_table$Est.<-formattable::percent(statepreemp_table$Est., digits = 1)
statepreemp_table$LCL<-formattable::percent(statepreemp_table$LCL, digits = 1)
statepreemp_table$UCL<-formattable::percent(statepreemp_table$UCL, digits = 1)
statepreemp_table$Response<- c("Strongly Oppose", "Oppose", "Neither Support nor Oppose", "Support", "Strongly Support")
statepreemp_table<-statepreemp_table %>%relocate(Response)
table_statepreemp<-tibble(statepreemp_table)
gt_statepreemp<-gt(table_statepreemp)
gt_statepreemp %>%
tab_header(title = "State Preemption Pesticide Laws Mostly Do Not Elicit Strong Opinions",
subtitle = "Most states in the United States have a 'state preemption' pesticide law. This means that if the state government says that a certain pesticide (e.g., rodenticides) can be used, then local governments within the state cannot ban the use of that pesticide. If a state got rid of its preemption law, then local governments within the state could ban businesses and residents in their jurisdiction from using a given pesticide, even if that pesticide was not banned at the state level.
Imagine that the state you live in has a state preemption pesticide law, and that rodenticide use is allowed at the state level. Would you strongly support, support, neither support nor oppose, oppose, or strongly oppose your state getting rid of its state preemption pesticide law, allowing local governments to ban rodenticide use in homes and businesses?")%>%
tab_source_note(source_note = "Unweighted N = 1386. Weighted to be nationally representative of U.S. adults. LCL and UCL are the lower and upper limit of the 95% confidence interval, respectively.")%>%
tab_options(column_labels.font.weight = "bold", source_notes.font.size = px(10))
| State Preemption Pesticide Laws Mostly Do Not Elicit Strong Opinions | |||
| Most states in the United States have a 'state preemption' pesticide law. This means that if the state government says that a certain pesticide (e.g., rodenticides) can be used, then local governments within the state cannot ban the use of that pesticide. If a state got rid of its preemption law, then local governments within the state could ban businesses and residents in their jurisdiction from using a given pesticide, even if that pesticide was not banned at the state level. Imagine that the state you live in has a state preemption pesticide law, and that rodenticide use is allowed at the state level. Would you strongly support, support, neither support nor oppose, oppose, or strongly oppose your state getting rid of its state preemption pesticide law, allowing local governments to ban rodenticide use in homes and businesses? | |||
| Response | Est. | LCL | UCL |
|---|---|---|---|
| Strongly Oppose | 13.5% | 10.1% | 17.8% |
| Oppose | 22.8% | 18.6% | 27.6% |
| Neither Support nor Oppose | 27.1% | 22.7% | 32.0% |
| Support | 25.0% | 20.6% | 30.0% |
| Strongly Support | 11.6% | 8.8% | 15.2% |
| Unweighted N = 1386. Weighted to be nationally representative of U.S. adults. LCL and UCL are the lower and upper limit of the 95% confidence interval, respectively. | |||
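The block of svyciprop calls above is repeated nearly verbatim for each piece of legislation, so a small helper function could cut down on the duplication. The following is only an illustrative sketch, not part of the original analysis: level_ci and levels_order are hypothetical names, and it assumes the same sample_design object used throughout.
# hypothetical helper: weighted proportion and 95% CI for one response level
level_ci <- function(var, level, design = sample_design) {
  est <- svyciprop(reformulate(sprintf('I(%s == "%s")', var, level)), design)
  c(Est. = as.numeric(est), LCL = unname(confint(est)[1]), UCL = unname(confint(est)[2]))
}
# rebuild the state preemption estimates in one call
levels_order <- c("Strongly oppose", "Oppose", "Neither support nor oppose",
                  "Support", "Strongly support")
t(sapply(levels_order, level_ci, var = "state_preemp"))
This should reproduce the five estimates in the table above, up to formatting.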
The results for the state preemption ordinal regression are overall similar to the results found for the state ban, albeit with smaller coefficients in some cases.
olr_state_preemp<-svyolr(state_preemp~ female+ otheri +age_num + income_num + education_num + region_wt + democrat + independent +animals + hired_pest + self_remove, sample_design)
tidy(olr_state_preemp, exponentiate = TRUE, conf.int = TRUE)
Although a state preemption law would prevent a local municipality from banning businesses and residents from using rodenticide, it would not prevent that municipality from ending its own use of rodenticide on public property such as parks. Overall, there is greater support for one’s own town ending its public use of rodenticide than there was for a state-level ban or for repealing state preemption. Perhaps some respondents want to keep the option of using rodenticides themselves, but like the idea of reducing the amount of rodenticide used in public places. The indirect estimates are slightly lower than the direct estimates, but the difference is only pronounced in the unweighted data (a quick check of the gap follows the output below).
#fetching unweighted, indirect estimate
survey_wt$prop_town_1<-as.numeric(survey_wt$prop_town_1)
ind_elim_un<-psych::describe(survey_wt$prop_town_1)
ind_elim_un<-c(ind_elim_un$mean,ind_elim_un$mean -(ind_elim_un$se*1.96), ind_elim_un$mean +(ind_elim_un$se*1.96)) # mean, LCL, UCL
#weighted, indirect estimate
ind_elim_wei<-svymean(~survey_wt$prop_town_1, sample_design, na.rm = TRUE)
ind_elim_wei<-c(ind_elim_wei,confint(ind_elim_wei, level = 0.95))# mean, LCL, UCL
#unweighted, direct estimate
direct_elim_un<-prop.test(table(survey_wt$town_elim_supportall)[2], table(survey_wt$town_elim_supportall)[2]+ table(survey_wt$town_elim_supportall)[1])
direct_elim_un<-c(direct_elim_un$estimate, direct_elim_un$conf.int) #mean, LCL, UCL
direct_elim_un<-direct_elim_un*100 # convert into percentages
#weighted, direct
direct_elim_wei<-svyciprop(~I(town_elim_supportall == "Support"), sample_design)
direct_elim_wei<-cbind(direct_elim_wei[1], confint(direct_elim_wei)[1],confint(direct_elim_wei)[2])
direct_elim_wei<-direct_elim_wei*100 # convert into percentages
percent_support_elim<-rbind(ind_elim_un, direct_elim_un, ind_elim_wei, direct_elim_wei)
colnames(percent_support_elim)<-c("Est.", "LCL", "UCL")
#percentage support, depending on weighting and directness
rownames(percent_support_elim)<- c("Indirect, Unweighted", "Direct, Unweighted", "Indirect, Weighted", "Direct, Weighted")
percent_support_elim
## Est. LCL UCL
## Indirect, Unweighted 48.83678 47.61992 50.05364
## Direct, Unweighted 52.02020 49.35204 54.67704
## Indirect, Weighted 46.44839 43.89079 49.00600
## Direct, Weighted 46.98827 41.71683 52.32778
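As a quick check on the claim that the direct-indirect gap is only pronounced in the unweighted data, we can difference the rows of the matrix just printed (an illustrative snippet, not part of the original analysis):
# direct minus indirect support, in percentage points
percent_support_elim["Direct, Unweighted", "Est."] - percent_support_elim["Indirect, Unweighted", "Est."]
percent_support_elim["Direct, Weighted", "Est."] - percent_support_elim["Indirect, Weighted", "Est."]
These work out to roughly 3.2 percentage points unweighted versus 0.5 percentage points weighted.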
Breaking the results down into five response categories, we see that although just under half of respondents support their town ending its own rodenticide use, the remainder are split roughly evenly between those who oppose the measure and those who neither support nor oppose it. So in practice there would likely be more vocal individuals in favor of such a measure than against it in an average U.S. town.
so_elim<-svyciprop(~I(town_elim == "Strongly oppose"), sample_design)
so_elim<-cbind(so_elim[1], confint(so_elim)[1],confint(so_elim)[2])
o_elim<-svyciprop(~I(town_elim == "Oppose"), sample_design)
o_elim<-cbind(o_elim[1], confint(o_elim)[1],confint(o_elim)[2])
nons_elim<-svyciprop(~I(town_elim == "Neither support nor oppose"), sample_design)
nons_elim<-cbind(nons_elim[1], confint(nons_elim)[1],confint(nons_elim)[2])
s_elim<-svyciprop(~I(town_elim == "Support"), sample_design)
s_elim<-cbind(s_elim[1], confint(s_elim)[1],confint(s_elim)[2])
ss_elim<-svyciprop(~I(town_elim == "Strongly support"), sample_design)
ss_elim<-cbind(ss_elim[1], confint(ss_elim)[1],confint(ss_elim)[2])
townelim_table<-rbind(setNames(as.data.frame(so_elim), c("Est.", "LCL", "UCL")),
setNames(as.data.frame(o_elim), c("Est.", "LCL", "UCL")),
setNames(as.data.frame(nons_elim), c("Est.", "LCL", "UCL")),
setNames(as.data.frame(s_elim), c("Est.", "LCL", "UCL")),
setNames(as.data.frame(ss_elim), c("Est.", "LCL", "UCL")))
townelim_table$Est.<-formattable::percent(townelim_table$Est., digits = 1)
townelim_table$LCL<-formattable::percent(townelim_table$LCL, digits = 1)
townelim_table$UCL<-formattable::percent(townelim_table$UCL, digits = 1)
townelim_table$Response<- c("Strongly Oppose", "Oppose", "Neither Support nor Oppose", "Support", "Strongly Support")
townelim_table<-townelim_table %>%relocate(Response)
table_townelim<-tibble(townelim_table)
gt_townelim<-gt(table_townelim)
gt_townelim %>%
tab_header(title = "Only About A Quarter of U.S. Adults Oppose Local Government Discontinuing Rodenticide Use",
subtitle = "Imagine the town you live in planned to eliminate the use of rodenticides on municipal property, such as government buildings and parks. Residents and businesses would still be allowed to use rodenticides on their own property.
Would you strongly support, support, neither support nor oppose, oppose, or strongly oppose your town eliminating its own use of rodenticides on government-owned properties?")%>%
tab_source_note(source_note = "Unweighted N = 1386. Weighted to be nationally representative of U.S. adults. LCL and UCL are the lower and upper limit of the 95% confidence interval, respectively.")%>%
tab_options(column_labels.font.weight = "bold", source_notes.font.size = px(10))
| Only About A Quarter of U.S. Adults Oppose Local Government Discontinuing Rodenticide Use | |||
| Imagine the town you live in planned to eliminate the use of rodenticides on municipal property, such as government buildings and parks. Residents and businesses would still be allowed to use rodenticides on their own property. Would you strongly support, support, neither support nor oppose, oppose, or strongly oppose your town eliminating its own use of rodenticides on government-owned properties? | |||
| Response | Est. | LCL | UCL |
|---|---|---|---|
| Strongly Oppose | 8.7% | 6.0% | 12.4% |
| Oppose | 18.5% | 14.6% | 23.0% |
| Neither Support nor Oppose | 25.9% | 21.5% | 30.8% |
| Support | 32.1% | 27.3% | 37.2% |
| Strongly Support | 14.9% | 11.6% | 19.0% |
| Unweighted N = 1386. Weighted to be nationally representative of U.S. adults. LCL and UCL are the lower and upper limit of the 95% confidence interval, respectively. | |||
There are no statistically significant predictors in these models, though the direction of the effects is consistent with attitudes towards the other legislation we asked about.
olr_town_elim<-svyolr(town_elim~ female+ otheri +age_num + income_num + education_num + region_wt + democrat + independent +animals + hired_pest + self_remove, sample_design)
tidy(olr_town_elim, exponentiate = TRUE, conf.int = TRUE)
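A convenient way to scan the tidied output for predictors whose 95% confidence interval excludes an odds ratio of 1 is to filter it (an illustrative sketch, assuming the tidied output has the standard term, conf.low, and conf.high columns):
# flag predictors whose 95% CI excludes an odds ratio of 1
tidy(olr_town_elim, exponentiate = TRUE, conf.int = TRUE) %>%
  filter(!grepl("\\|", term)) %>% # drop the threshold (cutpoint) terms, whose names contain "|"
  filter(conf.low > 1 | conf.high < 1)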
The Massachusetts state legislature proposed a bill, a key provision of which would require pest managers to provide a written disclosure that second-generation anticoagulant rodenticides may harm predators of rodents. Consumers would need to sign a form confirming that they have received the disclosure and are choosing to proceed with rodenticide use. This requirement may reduce rodenticide use by ensuring that people are aware of the risks they incur by using rodenticide. The bill appears likely to pass, but the written consent provision was dropped after industry opposition.
Weighting seems to have no statistically significant effect on the results. But for the first time, the indirect condition elicits substantially less support than the direct condition, suggesting that almost a fourth of the support in the direct condition could be due to socially desirable responding (a back-of-the-envelope check follows the output below). Possibly, there are respondents who oppose the provision because they anticipate that consumers will overreact to learning about the risk of secondary exposure to rodenticide, but who also worry that wanting to withhold relevant information from consumers might appear sinister, even if well-intentioned.
#fetching unweighted, indirect estimate
survey_wt$prop_state_consent_1<-as.numeric(survey_wt$prop_state_consent_1)
ind_cons_un<-psych::describe(survey_wt$prop_state_consent_1)
ind_cons_un<-c(ind_cons_un$mean,ind_cons_un$mean -(ind_cons_un$se*1.96), ind_cons_un$mean +(ind_cons_un$se*1.96)) # mean, LCL, UCL
#weighted, indirect estimate
ind_cons_wei<-svymean(~survey_wt$prop_state_consent_1, sample_design, na.rm = TRUE)
ind_cons_wei<-c(ind_cons_wei,confint(ind_cons_wei, level = 0.95))# mean, LCL, UCL
#unweighted, direct estimate
direct_cons_un<-prop.test(table(survey_wt$state_cons_supportall)[2], table(survey_wt$state_cons_supportall)[2]+ table(survey_wt$state_cons_supportall)[1])
direct_cons_un<-c(direct_cons_un$estimate, direct_cons_un$conf.int) #mean, LCL, UCL
direct_cons_un<-direct_cons_un*100 # convert into percentages
#weighted, direct
direct_cons_wei<-svyciprop(~I(state_cons_supportall == "Support"), sample_design)
direct_cons_wei<-cbind(direct_cons_wei[1], confint(direct_cons_wei)[1],confint(direct_cons_wei)[2])
direct_cons_wei<-direct_cons_wei*100 # convert into percentages
percent_support_cons<-rbind(ind_cons_un, direct_cons_un, ind_cons_wei, direct_cons_wei)
colnames(percent_support_cons)<-c("Est.", "LCL", "UCL")
#percentage support, depending on weighting and directness
rownames(percent_support_cons)<- c("Indirect, Unweighted", "Direct, Unweighted", "Indirect, Weighted", "Direct, Weighted")
percent_support_cons
## Est. LCL UCL
## Indirect, Unweighted 61.03397 59.82078 62.24717
## Direct, Unweighted 80.23088 78.01447 82.27730
## Indirect, Weighted 61.77361 59.43396 64.11327
## Direct, Weighted 78.38421 73.93782 82.25409
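As a back-of-the-envelope check on the “almost a fourth” figure, we can compute the proportional drop from the direct to the indirect estimate (illustrative only):
# share of direct support not reproduced in the indirect condition
(percent_support_cons["Direct, Unweighted", "Est."] - percent_support_cons["Indirect, Unweighted", "Est."]) /
  percent_support_cons["Direct, Unweighted", "Est."]
(percent_support_cons["Direct, Weighted", "Est."] - percent_support_cons["Indirect, Weighted", "Est."]) /
  percent_support_cons["Direct, Weighted", "Est."]
These come out to roughly 0.24 unweighted and 0.21 weighted.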
Putting aside the concerns about socially desirable responding, there appears to be very little opposition to requiring written disclosure of secondary exposure risk prior to rodenticide use.
so_cons<-svyciprop(~I(state_cons == "Strongly oppose"), sample_design)
so_cons<-cbind(so_cons[1], confint(so_cons)[1],confint(so_cons)[2])
o_cons<-svyciprop(~I(state_cons == "Oppose"), sample_design)
o_cons<-cbind(o_cons[1], confint(o_cons)[1],confint(o_cons)[2])
nons_cons<-svyciprop(~I(state_cons == "Neither support nor oppose"), sample_design)
nons_cons<-cbind(nons_cons[1], confint(nons_cons)[1],confint(nons_cons)[2])
s_cons<-svyciprop(~I(state_cons == "Support"), sample_design)
s_cons<-cbind(s_cons[1], confint(s_cons)[1],confint(s_cons)[2])
ss_cons<-svyciprop(~I(state_cons == "Strongly support"), sample_design)
ss_cons<-cbind(ss_cons[1], confint(ss_cons)[1],confint(ss_cons)[2])
statecons_table<-rbind(setNames(as.data.frame(so_cons), c("Est.", "LCL", "UCL")),
setNames(as.data.frame(o_cons), c("Est.", "LCL", "UCL")),
setNames(as.data.frame(nons_cons), c("Est.", "LCL", "UCL")),
setNames(as.data.frame(s_cons), c("Est.", "LCL", "UCL")),
setNames(as.data.frame(ss_cons), c("Est.", "LCL", "UCL")))
statecons_table$Est.<-formattable::percent(statecons_table$Est., digits = 1)
statecons_table$LCL<-formattable::percent(statecons_table$LCL, digits = 1)
statecons_table$UCL<-formattable::percent(statecons_table$UCL, digits = 1)
statecons_table$Response<- c("Strongly Oppose", "Oppose", "Neither Support nor Oppose", "Support", "Strongly Support")
statecons_table<-statecons_table %>%relocate(Response)
table_statecons<-tibble(statecons_table)
gt_statecons<-gt(table_statecons)
gt_statecons %>%
tab_header(title = "High Support for Informed Consent About Off-Target Rodenticide Risks",
subtitle = "Imagine the state you live in required pest managers to have their customers sign a consent form before using rodenticides on their property. The consent form would explain that pets and wildlife can be poisoned if they eat rodenticide baits or feed on poisoned rodents.
Would you strongly support, support, neither support nor oppose, oppose, or strongly oppose the state you live in requiring a signed consent form explaining the potential impacts of rodenticides on pets and wildlife before using rodenticide?")%>%
tab_source_note(source_note = "Unweighted N = 1386. Weighted to be nationally representative of U.S. adults. LCL and UCL are the lower and upper limit of the 95% confidence interval, respectively.")%>%
tab_options(column_labels.font.weight = "bold", source_notes.font.size = px(10))
| High Support for Informed Consent About Off-Target Rodenticide Risks | |||
| Imagine the state you live in required pest managers to have their customers sign a consent form before using rodenticides on their property. The consent form would explain that pets and wildlife can be poisoned if they eat rodenticide baits or feed on poisoned rodents. Would you strongly support, support, neither support nor oppose, oppose, or strongly oppose the state you live in requiring a signed consent form explaining the potential impacts of rodenticides on pets and wildlife before using rodenticide? | |||
| Response | Est. | LCL | UCL |
|---|---|---|---|
| Strongly Oppose | 2.5% | 1.3% | 4.7% |
| Oppose | 7.4% | 5.1% | 10.6% |
| Neither Support nor Oppose | 11.8% | 9.0% | 15.2% |
| Support | 39.8% | 34.7% | 45.3% |
| Strongly Support | 38.5% | 33.5% | 43.8% |
| Unweighted N = 1386. Weighted to be nationally representative of U.S. adults. LCL and UCL are the lower and upper limit of the 95% confidence interval, respectively. | |||
There is a large, significant effect of whether respondents own at least one pet, likely because they have a personal stake in knowing about potential dangers to their pets.
olr_state_cons<-svyolr(state_cons~ female+ otheri +age_num + income_num + education_num + region_wt + democrat + independent +animals + hired_pest + self_remove, sample_design)
tidy(olr_state_cons, exponentiate = TRUE, conf.int = TRUE)
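To see the size of the pet-ownership effect, we can pull the corresponding row from the tidied output. This is only an illustrative sketch: we assume animals is the pet-ownership indicator referred to above, and match the term by prefix in case it is coded as a factor.
# odds ratio and 95% CI for the pet-ownership indicator (assumed to be 'animals')
tidy(olr_state_cons, exponentiate = TRUE, conf.int = TRUE) %>%
  filter(grepl("^animals", term))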
A lid-lock ordinance requires that dumpsters have tight-fitting lids and that they be locked when not in use. Malibu, California has a lid-lock ordinance, though to our knowledge it applies only to businesses, not residential buildings.
Weighting has a marked effect only on the confidence intervals, not the point estimates. But the indirect estimates are about 10 percentage points lower than the direct estimates. These statistically significant differences are consistent with socially desirable responding inflating support in the direct condition (an approximate test of the weighted difference follows the output below).
#fetching unweighted, indirect estimate
survey_wt$prop_state_lidlock_1<-as.numeric(survey_wt$prop_state_lidlock_1)
ind_lidlock_un<-psych::describe(survey_wt$prop_state_lidlock_1)
ind_lidlock_un<-c(ind_lidlock_un$mean,ind_lidlock_un$mean -(ind_lidlock_un$se*1.96), ind_lidlock_un$mean +(ind_lidlock_un$se*1.96)) # mean, LCL, UCL
#weighted, indirect estimate
ind_lidlock_wei<-svymean(~survey_wt$prop_state_lidlock_1, sample_design, na.rm = TRUE)
ind_lidlock_wei<-c(ind_lidlock_wei,confint(ind_lidlock_wei, level = 0.95))# mean, LCL, UCL
#unweighted, direct estimate
direct_lidlock_un<-prop.test(table(survey_wt$state_lidlock_supportall)[2], table(survey_wt$state_lidlock_supportall)[2]+ table(survey_wt$state_lidlock_supportall)[1])
direct_lidlock_un<-c(direct_lidlock_un$estimate, direct_lidlock_un$conf.int) #mean, LCL, UCL
direct_lidlock_un<-direct_lidlock_un*100 # convert into percentages
#weighted, direct
direct_lidlock_wei<-svyciprop(~I(state_lidlock_supportall == "Support"), sample_design)
direct_lidlock_wei<-cbind(direct_lidlock_wei[1], confint(direct_lidlock_wei)[1],confint(direct_lidlock_wei)[2])
direct_lidlock_wei<-direct_lidlock_wei*100 # convert into percentages
percent_support_lidlock<-rbind(ind_lidlock_un, direct_lidlock_un, ind_lidlock_wei, direct_lidlock_wei)
colnames(percent_support_lidlock)<-c("Est.", "LCL", "UCL")
#percentage support, depending on weighting and directness
rownames(percent_support_lidlock)<- c("Indirect, Unweighted", "Direct, Unweighted", "Indirect, Weighted", "Direct, Weighted")
percent_support_lidlock
## Est. LCL UCL
## Indirect, Unweighted 55.77917 54.52149 57.03685
## Direct, Unweighted 64.43001 61.83645 66.94267
## Indirect, Weighted 55.60316 52.80838 58.39793
## Direct, Weighted 65.16443 60.03816 69.96211
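To gauge whether the roughly 10-point gap between the weighted direct and indirect estimates exceeds sampling error, a rough two-sample z-test can be pieced together from the estimates and their confidence intervals. This is an illustrative approximation, not part of the original analysis: it treats the svyciprop interval as symmetric, which is only approximately true, and it relies on the two conditions containing different respondents.
# approximate SEs from the 95% CI widths, then an independent-samples z-test
se_direct   <- (percent_support_lidlock["Direct, Weighted", "UCL"] -
                percent_support_lidlock["Direct, Weighted", "LCL"]) / (2 * 1.96)
se_indirect <- (percent_support_lidlock["Indirect, Weighted", "UCL"] -
                percent_support_lidlock["Indirect, Weighted", "LCL"]) / (2 * 1.96)
z <- (percent_support_lidlock["Direct, Weighted", "Est."] -
      percent_support_lidlock["Indirect, Weighted", "Est."]) / sqrt(se_direct^2 + se_indirect^2)
2 * pnorm(-abs(z)) # two-sided p-value
With the estimates above, z is roughly 3.3 (p ≈ 0.001), consistent with the difference being statistically significant.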
Looking at the full results in the direct condition, there is very little strong opposition to the lid-lock ordinance. So even if we trust the indirect estimate of support over the direct estimate, we might still expect the opposing faction to feel less strongly than the supportive faction.
Note that the pro and con arguments we used to describe lid-lock ordinances are from the Malibu Times and personal communication with the advocacy group Poison Free Malibu.
so_lid<-svyciprop(~I(state_lidlock == "Strongly oppose"), sample_design)
so_lid<-cbind(so_lid[1], confint(so_lid)[1],confint(so_lid)[2])
o_lid<-svyciprop(~I(state_lidlock == "Oppose"), sample_design)
o_lid<-cbind(o_lid[1], confint(o_lid)[1],confint(o_lid)[2])
nons_lid<-svyciprop(~I(state_lidlock == "Neither support nor oppose"), sample_design)
nons_lid<-cbind(nons_lid[1], confint(nons_lid)[1],confint(nons_lid)[2])
s_lid<-svyciprop(~I(state_lidlock == "Support"), sample_design)
s_lid<-cbind(s_lid[1], confint(s_lid)[1],confint(s_lid)[2])
ss_lid<-svyciprop(~I(state_lidlock == "Strongly support"), sample_design)
ss_lid<-cbind(ss_lid[1], confint(ss_lid)[1],confint(ss_lid)[2])
statelidlock_table<-rbind(setNames(as.data.frame(so_lid), c("Est.", "LCL", "UCL")),
setNames(as.data.frame(o_lid), c("Est.", "LCL", "UCL")),
setNames(as.data.frame(nons_lid), c("Est.", "LCL", "UCL")),
setNames(as.data.frame(s_lid), c("Est.", "LCL", "UCL")),
setNames(as.data.frame(ss_lid), c("Est.", "LCL", "UCL")))
statelidlock_table$Est.<-formattable::percent(statelidlock_table$Est., digits = 1)
statelidlock_table$LCL<-formattable::percent(statelidlock_table$LCL, digits = 1)
statelidlock_table$UCL<-formattable::percent(statelidlock_table$UCL, digits = 1)
statelidlock_table$Response<- c("Strongly Oppose", "Oppose", "Neither Support nor Oppose", "Support", "Strongly Support")
statelidlock_table<-statelidlock_table %>%relocate(Response)
table_statelidlock<-tibble(statelidlock_table)
gt_statelidlock<-gt(table_statelidlock)
gt_statelidlock %>%
tab_header(title = "Minimal Opposition to Lid-Lock Ordinances",
subtitle = "A 'lid-lock' ordinance is a law that requires businesses to use dumpster lids that close tightly and to lock them after use. Proponents of lid-lock ordinances argue that they reduce the number of rodents who are attracted to public spaces. Opponents of lid-lock ordinances argue that it is burdensome to repeatedly unlock and lock dumpsters, and that people just throw trash on top of locked dumpsters.
Would you strongly support, support, neither support nor oppose, oppose, or strongly oppose the town you live in implementing a lid-lock ordinance?")%>%
tab_source_note(source_note = "Unweighted N = 1386. Weighted to be nationally representative of U.S. adults. LCL and UCL are the lower and upper limit of the 95% confidence interval, respectively.")%>%
tab_options(column_labels.font.weight = "bold", source_notes.font.size = px(10))
| Minimal Opposition to Lid-Lock Ordinances | |||
| A 'lid-lock' ordinance is a law that requires businesses to use dumpster lids that close tightly and to lock them after use. Proponents of lid-lock ordinances argue that they reduce the number of rodents who are attracted to public spaces. Opponents of lid-lock ordinances argue that it is burdensome to repeatedly unlock and lock dumpsters, and that people just throw trash on top of locked dumpsters. Would you strongly support, support, neither support nor oppose, oppose, or strongly oppose the town you live in implementing a lid-lock ordinance? | |||
| Response | Est. | LCL | UCL |
|---|---|---|---|
| Strongly Oppose | 3.6% | 2.2% | 5.9% |
| Oppose | 15.7% | 12.1% | 20.1% |
| Neither Support nor Oppose | 15.5% | 12.3% | 19.3% |
| Support | 38.9% | 33.8% | 44.2% |
| Strongly Support | 26.3% | 21.8% | 31.4% |
| Unweighted N = 1386. Weighted to be nationally representative of U.S. adults. LCL and UCL are the lower and upper limit of the 95% confidence interval, respectively. | |||
Higher income significantly increases support for a lid-lock ordinance. Malibu is an affluent city, so this result is not too surprising. It could be that lower-income individuals live in areas with worse waste management and consequently view the ordinance as unrealistic.
olr_lidlock<-svyolr(state_lidlock~ female+ otheri +age_num + income_num + education_num + region_wt + democrat + independent +animals + hired_pest + self_remove, sample_design)
tidy(olr_lidlock, exponentiate = TRUE, conf.int = TRUE)