1 Week 1

1.1 How do we approach epidemiology investigation?

1.2 In general, how do we approach epidemiology investigation?

We perform ‘studies’ because we are interested in causation!

1.3 What is a Study?

  • A STUDY IS A MEASUREMENT DEVICE ‐ analogous to a weighing scale or a measuring tape
  • If the study involves comparing the occurrence of events among human populations, it is an epidemiologic study
  • Epidemiology, however, does not determine the cause of a disease in a given individual!

1.3.1 If a study is a measurement device, what are the measurement units?

  • Since we are concerned with relating exposure to the occurrence of disease, we assess outcome in a population by using a measure of occurrence; basic measures of occurrence are incidence rates, cumulative incidence, and prevalence
  • The object of measurement may be a rate or a risk, but typically it is a measure of effect (or association) to describe the difference in outcome occurrence among exposure groups; attributable risk (AR or RD, the difference between measures of occurrence) or relative risk (RR, the ratio of the measures of occurrence) are basic measures of association
  • The final result of a study can be expressed as a single number (in measure‐of‐association units), or ‘POINT ESTIMATE’; this is the best estimate the study provides of the size of the thing being measured (a minimal numeric sketch follows after this list)
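
To make these units concrete, here is a minimal Python sketch of the basic calculations; the cohort sizes and case counts below are invented for illustration and are not from any real study.

```python
# Invented counts for a hypothetical cohort followed for one year (illustration only)
exposed_n, exposed_cases = 1000, 30      # exposed group: people at risk, new cases
unexposed_n, unexposed_cases = 2000, 20  # unexposed group: people at risk, new cases

# Measures of occurrence: cumulative incidence (risk) over the follow-up period
risk_exposed = exposed_cases / exposed_n        # 0.030
risk_unexposed = unexposed_cases / unexposed_n  # 0.010

# Measures of association: the study's point estimates
risk_difference = risk_exposed - risk_unexposed  # attributable risk (AR / RD) = 0.020
risk_ratio = risk_exposed / risk_unexposed       # relative risk (RR) = 3.0

print(f"Cumulative incidence, exposed:   {risk_exposed:.3f}")
print(f"Cumulative incidence, unexposed: {risk_unexposed:.3f}")
print(f"Risk difference (AR/RD): {risk_difference:.3f}")
print(f"Relative risk (RR):      {risk_ratio:.1f}")
```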

1.3.2 What is the goal of the study?

  • In common with other measuring devices, the study has a simple goal ‐ AN ACCURATE MEASUREMENT
  • ACCURACY, which means the degree to which the measurement reflects the true state of the universe, comprises VALIDITY and PRECISION
  • Hence, a study can be valid, precise, both, or neither.

1.3.3 What do we mean by validity?

  • A VALID STUDY HAS LITTLE ‘SYSTEMATIC ERROR’ OR ‘BIAS’
  • Bias refers to the tendency of a measurement to deviate from the truth in the same direction, systematically

1.3.4 What are the two types of validity?

  • Internal validity ‐ concerns systematic error and refers to the measurements made within the study population
  • External validity ‐ refers to the applicability of the measurements made from the study population to larger, potentially more diverse target populations
  • Internal validity is a prerequisite for external validity
  • If the measurement you’ve made on the actual people in the study is too biased, there’s not much point in worrying about unstudied groups to whom the measurement might generalize.

1.3.5 What do we mean by precision?

  • A PRECISE STUDY HAS LITTLE ‘RANDOM ERROR’; PRECISION, RELIABILITY, AND REPRODUCIBILITY REFER TO THE SAME THING
  • Chance refers to measurements that deviate from the truth in any direction, randomly

1.3.6 How to conceptualize random vs. systematic error?

  • Reliable but not valid
  • Valid but not reliable
  • Both reliable and valid

‘In a sufficiently large study, virtually all errors of concern are systematic errors’ ‐ Rothman
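
A rough way to see Rothman's point is a simulation: as the sample grows, the scatter of estimates caused by chance shrinks, but a built-in systematic error does not. The sketch below uses only the Python standard library; the true risk of 0.10 and the systematic error of 0.05 are invented for illustration.

```python
import random

TRUE_RISK = 0.10        # invented "true" risk in the population
SYSTEMATIC_BIAS = 0.05  # invented systematic measurement error added to every estimate
random.seed(1)

def observed_risk(n, bias=0.0):
    """Observe the risk in one random sample of size n, plus any systematic error."""
    cases = sum(random.random() < TRUE_RISK for _ in range(n))
    return cases / n + bias

for n in (100, 1_000, 10_000):
    unbiased = [observed_risk(n) for _ in range(100)]
    biased = [observed_risk(n, SYSTEMATIC_BIAS) for _ in range(100)]
    spread = max(unbiased) - min(unbiased)   # random error: shrinks as n grows
    mean_biased = sum(biased) / len(biased)  # systematic error: does not shrink
    print(f"n={n:>6}  spread of estimates={spread:.3f}  "
          f"mean biased estimate={mean_biased:.3f}  (truth={TRUE_RISK})")
```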

1.3.7 What are the possible explanations for the measurement result a study gives?

THE MEASUREMENT IS EXPLAINED BY ONLY THREE FACTORS:

  1. Bias/error and/or confounding

  2. Chance

  3. Cause (provided that the study results are valid and precise)

    • The simplest meaning of cause is that we infer (derive, surmise or suggest) that an exposure ‘E’ precedes a disease outcome ‘D’

E ———————> D

  • In designing a study we strive to improve validity by minimizing the potential role of bias and confounding, and to improve precision by minimizing the potential role of chance

In modern causal inference, we often try to gauge how well the study has succeeded in doing these two things

IMPORTANTLY, SOME STUDY DESIGNS ARE BETTER THAN OTHERS (RCTs > Cohort > Case‐Control > Ecological) – WHY?

1.3.8 Etiology – why some study designs are better than others!

This usually requires that we go beyond group association and establish three definitive requirements (lung cancer example):

  1. The “cause” is associated with the “effect” at the individual level
  • The potential “cause” and the potential “effect” occur more frequently at the INDIVIDUAL level than would be expected by chance
  • E.g., we establish that individuals with lung cancer are more frequently smokers than individuals without lung cancer (a numeric sketch of this comparison follows after this list)

  2. The “cause” precedes the “effect” in time (temporality)
  • The potential “cause” is PRESENT AT AN EARLIER TIME than the potential “effect”
  • E.g., we establish that cigarette smoking comes before the development of lung cancer

  3. Altering the “cause” alters the “effect”
  • When the potential “cause” is reduced or eliminated, the potential “effect” is also reduced or eliminated – OFTEN INVOLVES EXPERIMENTAL DESIGN
  • E.g., we establish that reducing cigarette smoking reduces the lung cancer rate
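
A minimal numeric sketch of requirement 1: the 2×2 counts below are invented for illustration (not real smoking/lung-cancer data), and the odds ratio is used as the measure of individual-level association, as in a case‐control design.

```python
# Hypothetical 2x2 table (invented counts):
#                 lung cancer   no lung cancer
# smokers              80              120
# non-smokers          20              280
a, b = 80, 120   # smokers: cases, non-cases
c, d = 20, 280   # non-smokers: cases, non-cases

smokers_among_cases = a / (a + c)      # 0.80 of cases are smokers
smokers_among_noncases = b / (b + d)   # 0.30 of non-cases are smokers

odds_ratio = (a * d) / (b * c)         # 9.33: smoking and lung cancer are associated
print(f"Smokers among cases:     {smokers_among_cases:.2f}")
print(f"Smokers among non-cases: {smokers_among_noncases:.2f}")
print(f"Odds ratio: {odds_ratio:.2f}")
```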

1.3.9 How do we use studies to establish etiology?

1.3.10 What are the major types of bias?

  1. Selection bias

Usually occurs when the association between exposure and disease differs for those who participate in the study vs. non‐participants:

Examples:

  • Volunteer bias
  • Healthy worker effect
  • Lost to follow‐up bias
  • Overmatching (controls not selected independent of exposure)
  • Surveillance or detection bias
  • Collider stratification bias

Can be DIFFERENTIAL or NON‐DIFFERENTIAL:

  1. Differential: the exposure measurement error usually differs according to outcome status, e.g., recall bias – can bias in either direction and is more unpredictable

  2. Non‐differential: the measurement error is unrelated to other study variables, e.g., a recall limitation affecting all groups equally ‐ tends to produce estimates closer to the null value and is more predictable (see the sketch below)
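
The sketch below illustrates the "toward the null" tendency of non-differential error with invented counts: applying the same imperfect exposure classification to cases and controls pulls a true odds ratio of 4.0 closer to 1.

```python
# Invented counts; a true odds ratio of 4.0 is attenuated by non-differential
# exposure misclassification (same sensitivity/specificity for cases and controls).
def odds_ratio(a, b, c, d):
    return (a * d) / (b * c)

a, b = 200, 100   # truly exposed: cases, controls
c, d = 100, 200   # truly unexposed: cases, controls
print("True OR:", odds_ratio(a, b, c, d))  # 4.0

sens, spec = 0.80, 0.90  # probability of correctly classifying exposed / unexposed
a_obs = a * sens + c * (1 - spec)  # recorded as exposed among cases
c_obs = a * (1 - sens) + c * spec  # recorded as unexposed among cases
b_obs = b * sens + d * (1 - spec)  # recorded as exposed among controls
d_obs = b * (1 - sens) + d * spec  # recorded as unexposed among controls
print("Observed OR:", round(odds_ratio(a_obs, b_obs, c_obs, d_obs), 2))  # ~2.62, closer to the null
```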

Potential solutions:

  • Make accurate measurements of exposure and outcome variables ‐ an example is the use of blinding methods
  • Randomization
  • Select subjects carefully and keep them in the study

Selection bias: one relevant group in the population (for example, the exposed cases) has a higher probability of being included in the study sample than the other groups.
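
To make that mechanism concrete, here is a hypothetical sketch (invented counts and sampling probabilities): in the source population there is no exposure–disease association, but because exposed cases are more likely to be selected into the sample, the study observes a spurious association.

```python
def odds_ratio(a, b, c, d):
    return (a * d) / (b * c)

# Source population (invented counts): no association, OR = 1.0
a, b = 100, 900   # exposed: cases, non-cases
c, d = 100, 900   # unexposed: cases, non-cases
print("OR in source population:", odds_ratio(a, b, c, d))  # 1.0

# Selection: exposed cases enter the study with probability 0.9, everyone else with 0.5
p_exposed_case, p_other = 0.9, 0.5
a_s, b_s, c_s, d_s = a * p_exposed_case, b * p_other, c * p_other, d * p_other
print("OR in study sample:", round(odds_ratio(a_s, b_s, c_s, d_s), 2))  # 1.8, spurious
```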