class: center, middle, inverse, title-slide

# Understanding Decision-Making in Visual Analytics with Digital Experiments

## PhD Dissertation Proposal Defense - July 30, 2018

### Ryan Wesslen

---

# Agenda

## Ch 1: Motivation & Problem (5 min)

## Ch 2: Related Work (10 min)

## Ch 3-5: Preliminary research (15 min)

- Anchoring Effect in Visual Analytics (VAST 2017)
- Anchoring Effect 2.0 (Under Revision; CHI 2019 submission)
- Visual Analytics to Identify Misinformation (ICWSM 2018)

## Ch 6-8: Proposed research (30 min)

- Identifying Confirmation Bias in Visual Analytics
- Financial Decision-Making for Retirement Planning in Visual Analytics
- Decision-Making in Explainable AI Systems with Topological Data Analysis

## Ongoing Plan, Discussion & Questions (30 min)

---

# Motivation

<img src="./img/google-trends.png" width="600px" style="display: block; margin: auto;" />

Machine Learning, Deep Learning, and Big Data computing have provided tremendous gains in predictive tasks such as image recognition, natural language processing, and social network analysis.

--

However, black-box systems present problems...

---

# Problems with Black Boxes

.pull-left[.full-width[

## Algorithmic Bias

- Hajian, Bonchi, and Castillo (2016); Buolamwini and Gebru (2018)

## Regulatory Concerns & Right to Explanation (e.g., GDPR)

- Goodman and Flaxman (2016)

## Need for Explainability

- Gunning (2017)

## Lack of Causality

- Marcus (2018); Pearl and Mackenzie (2018)

## Unintended Social Consequences

- Tufekci (2015)

]]

.pull-right[.full-width[

<img src="./img/ladder-of-causation.png" width="350px" style="display: block; margin: auto;" />

Pearl and Mackenzie (2018)

]]

---

# Decision-Making

<img src="./img/decision-making.png" width="700px" style="display: block; margin: auto;" />

---

# My research

<img src="./img/diagram.png" width="600px" style="display: block; margin: auto;" />

---

# Related Work Sections

## 1. Decision-Making in Visual Analytics

## 2. Heuristics & Cognitive Biases in Cognitive Science

## 3. Heuristics & Cognitive Biases in Visual Analytics

## 4. Digital Experimentation

## 5. Evaluating Digital Experiments

---

# Decision-Making in Visual Analytics

## Mental Models of Human Cognition (Sensemaking)

- Pirolli and Card (2005); Klein, Moon, and Hoffman (2006); Green, Ribarsky, and Fisher (2009); Liu and Stasko (2010); Kang and Stasko (2012)

## Interactions and Analytic Provenance

- Pike et al. (2009); Dou et al. (2009); Gotz and Zhou (2009); Endert et al. (2011); North et al. (2011); Xu et al. (2015)

## Uncertainty/Trust & Human-Centered Machine Learning

- Ellis and Dix (2015); Sacha et al. (2016); Gillies et al. (2016); Sacha et al. (2017); Liu et al. (2017)

<img src="./img/path.png" width="550px" style="display: block; margin: auto;" />

???

## Mental Models

- Pirolli & Card: intelligence analysis experiment; propose the foraging and sensemaking loops
- Klein: analysts start from some frame of reference, then compare, refine, and create new frames
- Green: framework for human "higher cognition" that extends more familiar perceptual models
- Liu: interpret mental models for interaction as serving three primary purposes: external anchoring, information foraging, and cognitive offloading
- Kang: case studies with expert systems

## Interactions facilitate understanding of sensemaking (analytic provenance)

- Pike: argue that interaction and inquiry are inextricable and lay out seven areas of focus for the next five years
- Gotz: develop a new approach that combines the benefits of manual annotations and event-driven interactions via a taxonomy that categorizes actions into four layered tiers based on their semantic intent
- Dou: used human coding to infer users' reasoning processes from interaction logs of WireVis; coders had difficulty using the logs alone without being able to see the same visual representations as the analysts
- Endert: the process of sensemaking is **not focused on a series of parameter adjustments**, but instead on a series of perceived connections and patterns within the data; propose two observation-level interaction types: exploratory and expressive
- North: **analytic provenance** can be examined in five interrelated stages: perceive, capture, encode, recover, and reuse

## ML and Uncertainty

- Ellis: related work in this area has tended to focus on the human's analytic and sensemaking processes; propose that some cognitive biases can also occur in the process of viewing visualizations
- Sacha: trust and uncertainty

---

class: center, middle

<img src="./img/decisions.png" width="650px" style="display: block; margin: auto;" />

---

# Heuristics & Cognitive Biases

.pull-left[.full-width[

## Cognitive Biases (Tversky & Kahneman)

- Probabilistic inference is hard.
- To cope, individuals use heuristics that reduce cognitive effort.
- Heuristics lead to **systematic** errors, i.e., biases (e.g., anchoring, availability, representativeness)
- Skeptical of expert "intuition"

<img src="./img/maxresdefault.jpg" width="350px" style="display: block; margin: auto;" />

[Farrar, Straus & Giroux](https://www.youtube.com/watch?v=HefjkqKCVpo)

]]

--

.pull-right[.full-width[

## Heuristics (Gigerenzer)

- "Power of simple rules in an uncertain world" (Adaptive Toolbox)
- Optimistic about the role of heuristics
- Uncertainty != Risk; don't apply models of risk to situations of uncertainty
- Heuristics are "human regularization"
- Less information, time, and computation can be more ("better")

<img src="./img/ink.jpg" width="200px" style="display: block; margin: auto;" />

]]

---

# Cognitive Biases in Visual Analytics

.pull-left[.full-width[

## Frameworks

- Dimara, Dragicevic, and Bezerianos (2014)
- Wall et al. (2017); Wall et al. (2018)
- Calero Valdez et al. (2017)
- Wu et al. (2017)
- Cottam and Blaha (2017)
- Streeb et al. (2017)

]]

.pull-right[.full-width[

## Empirical Studies

- **Anchoring**: Calero Valdez et al. (2018); Cho et al. (2017)
- **Familiarity**: Dasgupta (2017)
- **Attraction Effect**: Dimara et al. (2017); Dimara et al. (2019)
- **Confirmation**: Karduni et al. (2018); Delpish et al. (2018)

]]

???

## Frameworks

- Dimara considers availability and design guidelines (a heuristic-based approach for decision tools may need to allow some imperfection for the sake of understandability)
- Wall proposed a behavioral mapping to cognitive biases (coverage and distribution metrics) using a Markov model
- CV propose a framework for linking social, action, and perceptual biases (slide X)
- Wu propose a Bayesian cognition model
- Cottam and Blaha propose a Markov model for "a priori system biases"
- Streeb: balanced treatment of cognitive biases and heuristics

## Empirical

- Availability: estimating frequency or probability by the ease with which instances or associations can be brought to mind
- Dimara studied the attraction effect (one's choice between two alternatives is influenced by the presence of an irrelevant, dominated third alternative) with crowdsourced participants and scatterplots
- Dasgupta studied familiarity

---

# Digital Experiments

<img src="./img/salganik.png" width="600px" style="display: block; margin: auto;" />

Salganik (2017)

---

class: center, middle

# Methods for Digital Experiments

<img src="./img/methods.png" width="700px" style="display: block; margin: auto;" />

---

class: center, middle

# Chapters 3-5: Completed Studies

---

# Anchoring Effect

<img src="./img/washington.png" width="431" height="400px" style="display: block; margin: auto;" />

What year was George Washington elected president?

--

Can anchoring apply to visual analytics systems?

---

# CrystalBall: Event Detection in Social Media

<img src="./img/crystalball.gif" width="700px" style="display: block; margin: auto;" />

## Experiment: Randomly assigned 81 participants to a 2 x 2 between-subjects design crossing Numerical Anchor (High or Low) with Visual Anchor (Temporal or Spatial View).

---

# Anchoring Effect (VAST 2017)

## Is numerical anchoring transferable to visual anchors (i.e., pre-analysis exposure)?

.pull-left[.full-width[

## Numerical anchoring associated with decision accuracy

<img src="./img/anchor-effect2.png" width="300px" style="display: block; margin: auto;" />

]]

.pull-right[

## Visual anchors associated with more interactions (and time spent) for the related view

<img src="./img/anchor-effect1.png" width="550px" style="display: block; margin: auto;" />

]

---

# Problems

## Numerical anchoring affects event detection, but visual anchors affect interactions.

## Need to capture decisions digitally, along with the information that aided the decision-making process (e.g., view importance).

---

# Anchoring Effect 2.0

## Capture the decision-making process within the system.

- Use Verifi, a VA system for identifying misinformation on social media.

## Consider strategy cues (heuristics) as an additional treatment.

- Example: "On the language measures, real news accounts tend to show a lower ranking in anger, fear, and negativity."
- Two language feature cues, two social network cues.

## Do visual anchors and/or strategy cues help or hinder performance? Confidence? Strategy?

???

- Reconsider anchoring, but now with a system that can capture decisions
- Streeb et al. (2017) raised the role of heuristics, yet heuristics haven't been studied in VA cognitive bias work
- Explore whether they affect performance, confidence, and time spent

---

# Verifi

<img src="./img/verifi.gif" width="800px" />

Using third-party account-level labels, we conducted an in-laboratory experiment with 94 participants to measure their ability to identify Twitter news accounts that spread misinformation. The experiment was conducted in two rounds, each with three conditions in which the training process (e.g., visual anchors and strategy cues) was modified.
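As a rough sketch only (not the study's actual assignment code), balanced random assignment to three training conditions could be generated in R as follows; the participant IDs and condition labels are placeholders.

```r
# Illustrative sketch: balanced random assignment of 94 hypothetical
# participants to three training conditions (labels are placeholders).
set.seed(2018)
participants <- sprintf("P%02d", 1:94)
conditions   <- c("control", "visual_anchor", "visual_anchor_plus_cues")

assignment <- data.frame(
  id        = participants,
  condition = sample(rep(conditions, length.out = 94))  # shuffle balanced labels
)
table(assignment$condition)  # group sizes differ by at most one
```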
---

class: center, middle

# Outcomes

<img src="./img/anchor20-results.png" width="800px" />

???

- A: Visual Anchor groups tend to perform slightly worse than the Control group, indicating visual anchors have a slightly negative effect on performance.
- B: However, the Visual Anchors tend to boost confidence.
- C: A unique visual anchor doesn't increase value importance for its respective view.
- D: However, visual anchor groups seem to value secondary views (tweet panel and ) less.
- E: Cue ratings seem to be fairly consistent; one exception is that all three groups in Round 2 tend to rate the Fake Mention Cue as negligible.

---

class: center, middle

# Accuracy Factors

.pull-left[

<img src="./img/user-accuracy-account.png" width="500px" />

]

--

.pull-right[

<img src="./img/regressions.png" width="350px" />

(1) and (4): Round 1; (2) and (5): Round 2; (3) and (6): Both Rounds

]

???

- We find that the account is an important factor. Some accounts (like GothamPost) are very easy, with 93 out of 94 participants getting them right, while others are much harder.

---

# Heterogeneous Treatment Effects of Visual Anchor

<img src="./img/heterogenous-effects.png" width="850px" />

- Used a causal regression tree approach with "honest" splitting (Athey and Imbens (2016)).
- Treatments must be binary; used Control (0) versus any Visual Anchor (1).
- Considered four user-level covariates: major, background in data visualization, Twitter use, and social media use (Likert scales).
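The analysis above used a causal regression tree with honest splitting; as a hedged illustration of the same idea, a causal forest from the `grf` package (which also relies on honest sample splitting) could be fit on simulated stand-ins for the study's covariates, treatment, and outcome:

```r
# Illustration only: simulated stand-ins, not the study data or its exact code.
library(grf)

set.seed(1)
n <- 94                                         # matches the study's sample size
X <- cbind(                                     # four user-level covariates (Likert-style)
  major    = sample(1:5, n, replace = TRUE),
  vis_bkgd = sample(1:7, n, replace = TRUE),
  twitter  = sample(1:7, n, replace = TRUE),
  social   = sample(1:7, n, replace = TRUE)
)
W <- rbinom(n, 1, 0.5)                          # 0 = Control, 1 = any Visual Anchor
Y <- rnorm(n) + 0.2 * W * (X[, "twitter"] > 4)  # toy outcome with heterogeneity

cf  <- causal_forest(X, Y, W)                   # honest sample splitting by default
tau <- predict(cf)$predictions                  # per-participant treatment effect estimates
summary(tau)
```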
???

1) The visual anchor had a slightly negative effect on accuracy overall. The tree shows the effect was even more negative for respondents who report using Twitter often, while the anchor was beneficial for those without Twitter experience.

2) The visual anchor tended to boost confidence; that effect was larger for people who self-report (6 or 7) that they get their news from social media.

3) The visual anchor groups tended to end their sessions 2.7 minutes earlier than the Control groups. For those who report the highest Twitter use (7), the effect was dramatic: they tended to end their sessions about 12 minutes earlier than high-Twitter-use participants in the Control groups.

Caution must be used in interpreting these results for a variety of reasons (e.g., a sample size of 94, self-reported measures). Nevertheless, this provides a very interesting opportunity to better understand treatment effects.

---

class: center, middle

# Exploring Interaction Logs to Identify Strategies

<img src="./img/cluster.png" width="800px" />

---

class: center, middle

# Identifying Strategy Clusters through Interactions

<img src="./img/interactions.png" width="800px" />

---

# Steps for CHI Submission

## Related Work

- Strengthen the background on decision-making in Visual Analytics and past research on cognitive biases.

## Study Design

- Rewrite the study design considerations, including adding the participant procedure and threats to validity.

## Clarity

- Redo charts to emphasize effect sizes and de-emphasize significance; critically discuss possible confounding factors.

## Presentation

- Improve the writing by avoiding codes and nicknames, adopting consistent terminology, and removing tangential discussions.

---

# Confirmation Bias

<img src="./img/confbias1.png" width="700px" style="display: block; margin: auto;" />

--

<img src="./img/confbias2.png" width="700px" style="display: block; margin: auto;" />

--

Confirmation bias is the "selectivity in the acquisition and use of evidence" (Nickerson (1998)).

---

# Exploring Confirmation Bias in Misinformation (ICWSM 2018)

<img src="./img/verifi-design.png" width="400px" style="display: block; margin: auto;" />

Using Verifi, 60 students were randomly assigned to one of three conditions for the task of identifying Twitter misinformation.

**Research Question**: Would individuals make decisions differently about the veracity of news media sources when explicitly asked to confirm or disconfirm a given hypothesis?

**Hypothesis**: We expect that the confirm group will have worse accuracy due to confirmation bias.

---

# Results

.pull-left[.full-width[

## What were users' decisions?

<img src="./img/verifi1.png" width="350px" style="display: block; margin: auto;" />

]]

.pull-right[.full-width[

## What factors influenced users' decisions?

<img src="./img/verifi2.png" width="350px" style="display: block; margin: auto;" />

]]

---

class: center, middle

<img src="./img/dilbert-confirmation-bias.gif" width="700px" style="display: block; margin: auto;" />

# Identifying Confirmation Bias in Visual Analytics

Proposed Study 1: Chapter 6

---

# Experiment

## Problems with the past study on confirmation bias:

- What counts as a confirming action, especially given the complexity of the interface?
- Hypothesis generation: provided or self-generated?
- Small, homogeneous (student) sample

--

## Based on Nickerson (1998), use the idea of **Hypothesis-Determined Information Seeking**:

- Restriction of attention to a favored hypothesis
- Preferential treatment of evidence supporting existing beliefs

--

## Proposed Experiment:

- A between-subjects MTurk experiment to understand what factors influence a dependent variable.
- Consider dependent variables for which participants have intuition but there is no clear measurement.
- Examples: individual happiness, social media virality, social media misinformation.

---

# What features most affect individual happiness?

<img src="./img/conf-bias-study.png" width="800px" style="display: block; margin: auto;" />

**Hypothesis**: Participants in the **unlucky group** will have a **lower Spearman Rank Correlation** (i.e., worse accuracy relative to ground truth) due to confirmation bias. Said differently, if confirmation bias does not exist, we wouldn't expect any difference between the groups (as all individuals would let the data speak for itself).

Correlations are chosen to be identifiable based on past Visual Analytics studies (Harrison, Yang, Franconeri, and Chang (2014); Kay and Heer (2016)).

<!-- - Randomly assign participants to one of three groups: "lucky", "control", or "unlucky" -->
<!-- - Ask participants their prior beliefs of six features' (e.g., family, friends, income, health, ) association with a dependent variable (e.g., happiness) -->
<!-- - Data provided to individuals will be according to their treatment -->
<!-- - Example: If participants chooses A, B, C, D, E, F and lucky -> order of correlation with DV will be B, A, C, D, F, E (i.e., rho = 0.829) -->
<!-- - Hypothesis: Participants in the unlucky group will have a lower Spearman Rank Correlation (i.e., decisions compared to ground truth) -->
<!-- - Said differently, null hypothesis is that Confirmation Bias doesn't exist. -->
<!-- - If Confirmation Bias doesn't exist (i.e., participants base their decisions only on the data and not their prior beliefs), then there should be no difference between the three groups -->
<!-- - Immediate Follow Up: Can providing users correlation matrix mitigate the effect of confirmation bias? -->
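As a small illustration of the accuracy measure (hypothetical rankings, not study data), the Spearman rank correlation between a participant's submitted ordering of the six features and the manipulated ground truth can be computed directly in R:

```r
# Hypothetical example: a participant's ranking of the six happiness features
# versus the (manipulated) ground-truth ordering.
ground_truth <- c(health = 1, family = 2, income = 3, friends = 4, leisure = 5, work = 6)
participant  <- c(health = 2, family = 1, income = 3, friends = 4, leisure = 6, work = 5)

cor(ground_truth, participant, method = "spearman")  # ~0.89; closer to 1 = more accurate
```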
---

# Extensions

## Test all three applications (happiness, virality, and misinformation)

- Run in three rounds on MTurk, counterbalancing for learning effects

## Provide mitigation through a scatterplot matrix

- This forces individuals to view all of the data.
- Participants cannot selectively choose which data to view.

## Rerun the experiment with Datasaurus-like data

- Typically, most users would rely on summary statistics.
- However, summary statistics can be misleading (Matejka and Fitzmaurice (2017)).
- Rerun with manipulated series versus summary statistics.

--

<img src="./img/DinoSequentialSmaller.gif" width="400px" style="display: block; margin: auto;" />

---

# Interface for MTurk

<img src="./img/study.gif" width="800px" style="display: block; margin: auto;" />

---

class: center, middle

<img src="./img/durant.gif" width="400px" style="display: block; margin: auto;" />

# Financial Decision-Making for Retirement Planning in Visual Analytics

Proposed Study 2: Chapter 7

---

# Retirement Planning

## Shift from defined-benefit (pensions) to defined-contribution (401k) plans

- This means individuals must make very important decisions about participation, contribution, and allocation.

--

.pull-left[.full-width[

## Size of the market? Very significant!

- U.S. defined-contribution balances are over **$15 trillion**.

]]

.pull-right[.full-width[

<img src="./img/retirement.png" width="314" height="150px" style="display: block; margin: auto;" />

]]

--

## Financial applications in VA are broad (e.g., for expert users) (Ko et al. (2016))

- But they rarely focus on household decision-making (i.e., personal finances)
- Exceptions: Rudolph, Savikhin, and Ebert (2009); Torsney-Weir et al. (2018)

--

## Behavioral economists have used visualization to understand retirement decisions

- Benartzi and Thaler (1999); Bateman et al. (2016); Shaton (2017)

---

class: center, middle

<img src="./img/ralph.gif" width="400px" style="display: block; margin: auto;" />

# How do individuals make decisions for retirement?

---

# Paul Samuelson: Flipping a Coin

.pull-left[.full-width[

<img src="./img/Samuelson.png" width="300px" style="display: block; margin: auto;" />

]]

.pull-right[.full-width[

Would you take this bet: you get **$200** if you **correctly guess** a coin flip, but you pay me **$100** if **you're wrong**?

<img src="./img/coinflip.jpg" width="200px" style="display: block; margin: auto;" />

- What if we did it 100 times? [Simulation](https://unccviscenter.shinyapps.io/coinFlips/)

]]

---

# Richard Thaler

<img src="./img/richard-h-thaler-misbehaving.jpg" width="700px" />

- Samuelson's coin-flip problem is identical to the decisions individuals make when weighing risk in stock investments (Chapter 20).
- Identified as a possible solution to the "Equity Premium Puzzle" (Mehra and Prescott (1985); Benartzi and Thaler (1995)).

---

# Myopic Risk Aversion (Benartzi and Thaler (1995))

.pull-left[.full-width[

## (Cumulative) Prospect Theory

- Tversky and Kahneman (1992)

<img src="./img/prospect.jpg" width="350px" />

Source: [sketchplanations.com](https://www.sketchplanations.com/post/118976026701/prospect-theory-dan-kahneman-and-amos-tverskys)

]]

.pull-right[.full-width[

## Narrow Framing

- Mental Accounting, Thaler (1985)

<img src="./img/homerbrain.png" width="200px" style="display: block; margin: auto;" />

Source: [Brian M. Lucey](https://brianmlucey.wordpress.com/tag/mental-accounting/)

]]

## Prospect Theory + Narrow Framing => Myopic Risk Aversion
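For reference, the prospect theory value function behind this argument (Tversky and Kahneman (1992)), with their commonly cited parameter estimates:

$$
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \ge 0 \\
-\lambda (-x)^{\beta} & \text{if } x < 0
\end{cases}
\qquad \alpha \approx \beta \approx 0.88, \; \lambda \approx 2.25
$$

With loss aversion of roughly 2.25, a $100 loss looms about as large as a $225 gain, which is why a single Samuelson bet is rejected even though its expected value is +$50; evaluating many bets (or returns) in aggregate reduces the chance of seeing a loss at all.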
---

# Experiment on Allocation

<img src="./img/bernatzi.png" width="550px" style="display: block; margin: auto;" />

Benartzi and Thaler (1999)

**Question**: Can updated visual analytics tools mitigate myopic risk aversion?

---

# Initial Experiment

## Treatment 1: Annual Returns (one time)

<img src="presentation_files/figure-html/unnamed-chunk-40-1.png" width="960" />

## Treatment 2: 30-Year Annualized Returns (thirty times)

<img src="presentation_files/figure-html/unnamed-chunk-41-1.png" width="960" />

---

# Proposed mitigation: a slider interaction

<img src="./img/finSlider.gif" width="700px" style="display: block; margin: auto;" />

---

# Considerations / Extensions

## Stratified Sampling (Athey and Imbens (2017))

- Assign treatment stratified by age, income, and financial knowledge.

## Are there better ways to encode and visualize returns?

- Density plots, histograms, dot plots, etc.

## Consider 3+ investments

- Do participants simply use a 1/n rule?

## What effect can anchoring have on decisions?

- Will it interact with myopic risk aversion?

---

<img src="./img/bernatzi2.png" width="700px" />

---

class: center, middle

# Topological Data Analysis for Explainable AI

Proposed Study 3: Chapter 8

---

# Explainable AI

<img src="./img/xai.png" width="700px" />

Gunning (2017)

---

# Three Types of Explainable AI Models (Gunning (2017))

<img src="./img/xai-models.png" width="700px" />

We'll focus on an example of model induction (instance-based explanations), i.e., explaining model predictions through other observations.

---

# Topological Data Analysis

.pull-left[.full-width[

- Topological data analysis focuses on the shape and structure of an object.
- It maps highly complex, high-dimensional data to a lower-dimensional space.
- A scalable "dimensionality reduction" technique

]]

.pull-right[.full-width[

<img src="./img/topology.png" width="250px" style="display: block; margin: auto;" />

]]

<img src="./img/covers.png" width="750px" style="display: block; margin: auto;" />

Courtesy of Dustin Arendt

---

class: center, middle

# Example: Predicting Leaf Images

.pull-left[

<img src="./img/leafs.png" width="290" height="200px" style="display: block; margin: auto;" />

]

.pull-right[

The approach is model-agnostic, i.e., it can use any supervised ML model (e.g., a 10-class L1-regularized logistic regression).

]
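A minimal sketch of fitting such a classifier with `glmnet` on simulated stand-ins for the leaf features (assumed data, not the actual pipeline):

```r
# Sketch: 10-class L1-regularized (lasso) multinomial logistic regression on
# simulated stand-in features; predicted probabilities could then feed the TDA step.
library(glmnet)

set.seed(7)
n <- 500; p <- 64                         # hypothetical sample and feature counts
X <- matrix(rnorm(n * p), n, p)           # stand-in leaf image features
y <- factor(sample(paste0("leaf_", 1:10), n, replace = TRUE))

fit   <- cv.glmnet(X, y, family = "multinomial", alpha = 1)  # alpha = 1 -> L1 penalty
preds <- predict(fit, newx = X[1:5, ], s = "lambda.min", type = "class")
probs <- predict(fit, newx = X[1:5, ], s = "lambda.min", type = "response")
probs[, , 1]                              # per-class probabilities for five observations
```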
] <img src="./img/covers2.png" width="600px" style="display: block; margin: auto;" /> Courtesy of Dustin Arendt --- class: center, middle # Global View .pull-left[.full-width[ <img src="./img/tp-network.png" width="400px" style="display: block; margin: auto;" /> ]] .pull-right[ <img src="./img/leafs.png" width="450px" /> ] Courtesy of Dustin Arendt --- class: center, middle # Application: Instance Based Analysis .pull-left[ Let's consider an observation and its four highest classes: ] .pull-right[ <img src="./img/instance.png" width="250px" style="display: block; margin: auto;" /> ] <img src="./img/instances2.png" width="550px" style="display: block; margin: auto;" /> Courtesy of Dustin Arendt --- # Decision-Making in XAI <img src="./img/cognitive-xai.png" width="700px" /> Gunning (2017) --- # Proposed Work: ## Determine appropriate application and data source (e.g., social media) - Specific model is less important; e.g., recommended simple model that aligns to data source - Typically multinomial classification works best - Application will yield desire for current interface or new interface (e.g., lab or MTurk) ## Implement experiments to understand user confidence and trust in the system (e.g., accept or reject) - Compare results to out-of-sample observation; does the system improve individual predictions? - Evaluate performance relative to XAI standards ## Consider the role of Representativeness bias Tversky and Kahneman (1974) - What is the probability that event A belongs to class B? - Consider base rate fallacy and (in)consistency with Bayesian inference. --- class: center, middle # Dissertation Plan --- # Proposed Organization **Chapter**|**Title**|**Paper Venue** :-----:|:-----:|:-----: 1|Introduction & Problem Statement| 2|Related Work | 3|The Anchoring Effect in Decision-Making with Visual Analytics|VAST 2017 4|Can You Verifi This? Studying Uncertainty and Decision-Making About Misinformation in Visual Analytics|ICWSM 2018 5|Anchored in a Data Storm: How Anchoring Bias Can Affect User Strategy, Confidence, and Decisions in Visual Analytics|CHI 2018 (tentative submission) 6|Identifying Confirmation Bias in Visual Analytics|Proposal (VAST 2019) 7|Financial Decision-Making (Retirement) in Visual Analytics|Proposal (VAST 2019) 8|Decision-Making in Explainable AI with TDA|Proposal (TBD) 9|Conclusion| --- # Ryan's Action Plan <img src="presentation_files/figure-html/unnamed-chunk-55-1.png" width="672" /> (A) CHI 2019, (B) Apply for May Graduation, (C) Formatting Deadline, (D) VAST 2019, (E) Defense Deadline, (F) Final Submission --- # References [1] S. Benartzi and R. H. Thaler. "Myopic loss aversion and the equity premium puzzle". In: _The quarterly journal of Economics_ 110.1 (1995), pp. 73-92. [2] S. Benartzi and R. H. Thaler. "Risk aversion or myopia? Choices in repeated gambles and retirement investments". In: _Management science_ 45.3 (1999), pp. 364-381. [3] J. Buolamwini and T. Gebru. "Gender shades: Intersectional accuracy disparities in commercial gender classification". In: _Conference on Fairness, Accountability and Transparency_. 2018, pp. 77-91. [4] B. Goodman and S. Flaxman. "European Union regulations on algorithmic decision-making and a `right to explanation'". In: _arXiv preprint arXiv:1606.08813_ (2016). [5] D. Gunning. "Explainable artificial intelligence (xai)". In: _Defense Advanced Research Projects Agency (DARPA), nd Web_ (2017). [6] S. Hajian, F. Bonchi and C. Castillo. "Algorithmic bias: From discrimination discovery to fairness-aware data mining". 
---

class: center, middle

# Dissertation Plan

---

# Proposed Organization

**Chapter**|**Title**|**Paper Venue**
:-----:|:-----:|:-----:
1|Introduction & Problem Statement|
2|Related Work|
3|The Anchoring Effect in Decision-Making with Visual Analytics|VAST 2017
4|Can You Verifi This? Studying Uncertainty and Decision-Making About Misinformation in Visual Analytics|ICWSM 2018
5|Anchored in a Data Storm: How Anchoring Bias Can Affect User Strategy, Confidence, and Decisions in Visual Analytics|CHI 2019 (tentative submission)
6|Identifying Confirmation Bias in Visual Analytics|Proposal (VAST 2019)
7|Financial Decision-Making (Retirement) in Visual Analytics|Proposal (VAST 2019)
8|Decision-Making in Explainable AI with TDA|Proposal (TBD)
9|Conclusion|

---

# Ryan's Action Plan

<img src="presentation_files/figure-html/unnamed-chunk-55-1.png" width="672" />

(A) CHI 2019, (B) Apply for May Graduation, (C) Formatting Deadline, (D) VAST 2019, (E) Defense Deadline, (F) Final Submission

---

# References

[1] S. Benartzi and R. H. Thaler. "Myopic loss aversion and the equity premium puzzle". In: _The Quarterly Journal of Economics_ 110.1 (1995), pp. 73-92.

[2] S. Benartzi and R. H. Thaler. "Risk aversion or myopia? Choices in repeated gambles and retirement investments". In: _Management Science_ 45.3 (1999), pp. 364-381.

[3] J. Buolamwini and T. Gebru. "Gender shades: Intersectional accuracy disparities in commercial gender classification". In: _Conference on Fairness, Accountability and Transparency_. 2018, pp. 77-91.

[4] B. Goodman and S. Flaxman. "European Union regulations on algorithmic decision-making and a 'right to explanation'". In: _arXiv preprint arXiv:1606.08813_ (2016).

[5] D. Gunning. "Explainable artificial intelligence (XAI)". In: _Defense Advanced Research Projects Agency (DARPA), nd Web_ (2017).

[6] S. Hajian, F. Bonchi, and C. Castillo. "Algorithmic bias: From discrimination discovery to fairness-aware data mining". In: _Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_. ACM. 2016, pp. 2125-2126.

[7] L. Harrison, F. Yang, S. Franconeri, et al. "Ranking visualizations of correlation using Weber's law". In: _IEEE Transactions on Visualization and Computer Graphics_ 20.12 (2014), pp. 1943-1952.

---

# Possible Factors for Confirmation Bias Applications

| Happiness | Virality    | Misinformation |
|-----------|-------------|----------------|
| Health    | Followers   | Anger          |
| Family    | Emotional   | Negativity     |
| Income    | Images      | Punctuation    |
| Friends   | Credibility | Mention        |
| Leisure   | Timing      | Retweet        |
| Work      | Hashtag     | Images         |

---

# Permanent Income (PIH) & Life-Cycle (LCH) Hypotheses

<img src="./img/permenant-income.png" width="700px" style="display: block; margin: auto;" />

Friedman (1957) (PIH) and Ando and Modigliani (1963) (LCH)

Adapted from [Isaac Baley](http://www.isaacbaley.com/uploads/6/7/3/5/6735245/lecture_5_baley.pdf)

---

# Equity Premium Puzzle

<img src="./img/equity-premium.png" width="550px" style="display: block; margin: auto;" />

Mehra and Prescott (1985)

---

# Calero Valdez et al. (2017) Bias Framework

<img src="./img/cv-plot.png" width="300px" style="display: block; margin: auto;" />

---

# The Problem of "Unreliability"