Setup

Bespoke functions

Read in data

Quality Check the Data

Are there any physicians included more than twice?

Missing Subjects
| NPI | name | N |
|----:|:-----|--:|
| NA  | NA   | NA |

Variables of those physicians included more than twice?

## CSV file saved successfully!

| NPI | name | Reason for exclusions | Subspecialty | business_days_until_appointment |
|----:|:-----|:----------------------|:-------------|--------------------------------:|
| NA  | NA   | NA                    | NA           | NA                              |

Filter data so there is only one NPI per insurance type

Find NPI numbers called more than twice

NPI numbers called more than twice
| NPI | calls_count |
|----:|------------:|
| NA  | 3           |

Do they have an exclusion and a business_days_until_appointment > 0?

Dendrogram

Trying color

Dendrogram Plot of American Academy of Otolaryngology Board of Governors Region 8 showing General and Pediatric Otolaryngologists

Demographics of the Sample

## Our sample included 682 physicians from 49 states, including the District of Columbia and excluding Maine and Wyoming. There were calls with 156 neurotologists, 197 pediatric otolaryngologists, and 329 generalists.

The median age of the dataset was 52 years (IQR: 25th percentile 44 to 75th percentile 61).

The most common gender in the dataset was male (81%). The most common training was Doctor of Medicine (97%). The most common specialty was General Otolaryngology (48%).

## In our dataset, the most common gender was male, representing 81.3% of the total. The predominant specialty observed was General Otolaryngology, accounting for 48.3% of the entries. Additionally, the most prevalent professional qualification was Doctor of Medicine, which constituted 96.9% of the dataset.
Wait Times for All Subspecialties
| Median_business_days_until_appointment | Q1 | Q3 |
|---------------------------------------:|---:|---:|
| 39                                      | 19 | 65 |

The median wait time across all subspecialties was 39 business days, with an interquartile range (IQR) of 19 to 65.

Business Days Until Next Appointment By Each Subspecialty
| Subspecialty | Median_business_days_until_appointment | Q1 | Q3 |
|:-------------|---------------------------------------:|---:|---:|
| General Otolaryngology   | 27.5 | 14 | 50 |
| Neurotology              | 40.0 | 20 | 66 |
| Pediatric Otolaryngology | 58.5 | 31 | 87 |
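As a reference, here is a minimal sketch of how the medians above could be computed with dplyr, assuming a data frame `df` with `Subspecialty` and `business_days_until_appointment` columns:

```r
# Median and interquartile range of wait times by subspecialty
library(dplyr)

df %>%
  group_by(Subspecialty) %>%
  summarise(
    Median_business_days_until_appointment =
      median(business_days_until_appointment, na.rm = TRUE),
    Q1 = quantile(business_days_until_appointment, 0.25, na.rm = TRUE),
    Q3 = quantile(business_days_until_appointment, 0.75, na.rm = TRUE)
  )
```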

Exclusions

## Of the total 706 phone calls made, 682 (97%) successfully reached a representative, while 24 calls (3%) did not yield a connection even after two attempts. Of the unsuccessful connections, 13 (54%) were redirected to voicemail and 11 (46%) reached a busy signal. For successful connections, the reasons for exclusion were: 45 (7%) required a prior referral, 97 (14%) reported that they were not currently accepting new patients, and 13 physician offices (2%) put the caller on hold for more than five minutes.
## There were 706 calls, with 161 neurotologists, 204 pediatric otolaryngologists, and 341 generalists.

Correlation

Visualizing the Data

Graph each variable

Business days by Subspecialty

log(Business days) by Subspecialty with transform

Day of the week by Subspecialty

Central Appointment Line by Subspecialty

Physician Gender by Subspecialty

Physician MD vs. DO by Subspecialty

Physician Age Category by Subspecialty

Table 1

Demographics of all physicians called

Overall (N=682)
Age (years) Category
- Less than 40 years old 84 (12.3%)
- 40 to 49 years old 185 (27.1%)
- 50 to 59 years old 201 (29.5%)
- 60 years old and greater 212 (31.1%)
Gender
- Male 553 (81.6%)
- Female 125 (18.4%)
Subspecialty
- General Otolaryngology 329 (48.2%)
- Neurotology 156 (22.9%)
- Pediatric Otolaryngology 197 (28.9%)
Medical School Location
- US Senior 405 (84.2%)
- International Medical Graduate 76 (15.8%)
Medical School Training
- Doctor of Medicine 661 (96.9%)
- Doctor of Osteopathy 21 (3.1%)
American Academy of Otolaryngology Regions
- Region 1 (Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, Vermont) 38 (5.6%)
- Region 2 (New Jersey, New York) 65 (9.5%)
- Region 3 (Delaware, District of Columbia, Maryland, Pennsylvania, Virginia, West Virginia) 62 (9.1%)
- Region 4 (Alabama, Florida, Georgia, Kentucky, Mississippi, North Carolina, South Carolina, Tennessee) 130 (19.1%)
- Region 5 (Illinois, Indiana, Michigan, Minnesota, Ohio, Wisconsin) 107 (15.7%)
- Region 6 (Arkansas, Louisiana, New Mexico, Oklahoma, Texas) 81 (11.9%)
- Region 7 (Iowa, Kansas, Missouri, Nebraska) 44 (6.5%)
- Region 8 (Colorado, Montana, North Dakota, South Dakota, Utah, Wyoming) 34 (5.0%)
- Region 9 (Alaska, Oregon, Washington) 30 (4.4%)
- Region 10 (Arizona, California, Hawaii, Nevada) 91 (13.3%)
Rurality
- Metropolitan area 666 (97.7%)
- Rural area 16 (2.3%)
Practice Setting
- Private Practice 416 (61.0%)
- University 266 (39.0%)

Table 1 - Split across Subspecialties

General Otolaryngology (N=329) Neurotology (N=156) Pediatric Otolaryngology (N=197) Total (N=682) p value
Age (years) Category < 0.01
- Less than 40 years old 55 (16.7%) 12 (7.7%) 17 (8.6%) 84 (12.3%)
- 40 to 49 years old 90 (27.4%) 37 (23.7%) 58 (29.4%) 185 (27.1%)
- 50 to 59 years old 87 (26.4%) 45 (28.8%) 69 (35.0%) 201 (29.5%)
- 60 years old and greater 97 (29.5%) 62 (39.7%) 53 (26.9%) 212 (31.1%)
Gender 0.01
- Male 267 (82.2%) 137 (87.8%) 149 (75.6%) 553 (81.6%)
- Female 58 (17.8%) 19 (12.2%) 48 (24.4%) 125 (18.4%)
Medical School Location 0.37
- US Senior 223 (86.1%) 106 (83.5%) 76 (80.0%) 405 (84.2%)
- International Medical Graduate 36 (13.9%) 21 (16.5%) 19 (20.0%) 76 (15.8%)
Medical School Training 0.03
- Doctor of Medicine 313 (95.1%) 153 (98.1%) 195 (99.0%) 661 (96.9%)
- Doctor of Osteopathy 16 (4.9%) 3 (1.9%) 2 (1.0%) 21 (3.1%)
American Academy of Otolaryngology Regions 0.97
- Region 1 (Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, Vermont) 14 (4.3%) 10 (6.4%) 14 (7.1%) 38 (5.6%)
- Region 2 (New Jersey, New York) 31 (9.4%) 11 (7.1%) 23 (11.7%) 65 (9.5%)
- Region 3 (Delaware, District of Columbia, Maryland, Pennsylvania, Virginia, West Virginia) 29 (8.8%) 17 (10.9%) 16 (8.1%) 62 (9.1%)
- Region 4 (Alabama, Florida, Georgia, Kentucky, Mississippi, North Carolina, South Carolina, Tennessee) 65 (19.8%) 32 (20.5%) 33 (16.8%) 130 (19.1%)
- Region 5 (Illinois, Indiana, Michigan, Minnesota, Ohio, Wisconsin) 51 (15.5%) 25 (16.0%) 31 (15.7%) 107 (15.7%)
- Region 6 (Arkansas, Louisiana, New Mexico, Oklahoma, Texas) 42 (12.8%) 15 (9.6%) 24 (12.2%) 81 (11.9%)
- Region 7 (Iowa, Kansas, Missouri, Nebraska) 20 (6.1%) 9 (5.8%) 15 (7.6%) 44 (6.5%)
- Region 8 (Colorado, Montana, North Dakota, South Dakota, Utah, Wyoming) 19 (5.8%) 7 (4.5%) 8 (4.1%) 34 (5.0%)
- Region 9 (Alaska, Oregon, Washington) 15 (4.6%) 8 (5.1%) 7 (3.6%) 30 (4.4%)
- Region 10 (Arizona, California, Hawaii, Nevada) 43 (13.1%) 22 (14.1%) 26 (13.2%) 91 (13.3%)
Rurality 0.94
- Metropolitan area 321 (97.6%) 152 (97.4%) 193 (98.0%) 666 (97.7%)
- Rural area 8 (2.4%) 4 (2.6%) 4 (2.0%) 16 (2.3%)
Practice Setting < 0.01
- Private Practice 243 (73.9%) 89 (57.1%) 84 (42.6%) 416 (61.0%)
- University 86 (26.1%) 67 (42.9%) 113 (57.4%) 266 (39.0%)

City data

Pediatric vs. General Wait Times by City

How many cities had a longer wait time for a pediatric otolaryngologist compared to a general otolaryngologist?

Cities ranked by how many business days longer it takes to see a Pediatric Otolaryngologist versus a General Otolaryngologist
General Otolaryngology Pediatric Otolaryngology city_state diff_ped_vs_gen
23.0 111.0 Gainesville, Florida 88.0
5.0 88.0 New Orleans, Louisiana 83.0
22.0 101.0 Santa Barbara, California 79.0
9.5 81.5 Colorado Springs, Colorado 72.0
1.0 71.0 Lansdowne, Virginia 70.0
3.0 68.5 Los Angeles, California 65.5
9.0 71.0 Nashville, Tennessee 62.0
8.0 67.0 Alpharetta, Georgia 59.0
58.5 117.0 Madison, Wisconsin 58.5
54.0 107.0 Seattle, Washington 53.0
Cities ranked by how many business days longer it takes to see a General Otolaryngologist versus a Pediatric Otolaryngologist
city_state General Otolaryngology Pediatric Otolaryngology diff_ped_vs_gen
Neptune, New Jersey 121.0 6 -115.0
Albuquerque, New Mexico 137.0 61 -76.0
Chapel Hill, North Carolina 99.0 30 -69.0
Roseville, California 81.5 19 -62.5
Salt Lake City, Utah 63.5 21 -42.5
Memphis, Tennessee 50.5 19 -31.5
Culver City, California 27.0 3 -24.0
Springfield, Massachusetts 153.0 132 -21.0
Manhasset, New York 43.0 23 -20.0
Tampa, Florida 21.0 3 -18.0

Neurotology vs. General Otolaryngology Wait Times by City

Cities ranked by how many business days longer it takes to see a Neurotologist versus General Otolaryngologist
city_state General Otolaryngology Neurotology diff_neuro_vs_gen
Dallas, Texas 2 119.0 117.0
Downey, California 33 111.0 78.0
San Antonio, Texas 41 114.0 73.0
Sewickley, Pennsylvania 14 82.0 68.0
Charleston, South Carolina 7 66.5 59.5
White Plains, New York 9 65.0 56.0
Fargo, North Dakota 3 50.0 47.0
Baltimore, Maryland 24 62.5 38.5
Nashville, Tennessee 3 39.0 36.0
New Haven, Connecticut 20 54.0 34.0
Cities ranked by how many business days longer it takes to see a General Otolaryngologist versus Neurotologist:
city_state General Otolaryngology Neurotology diff_neuro_vs_gen
Salt Lake City, Utah 87.0 40.0 -47.0
San Jose, California 59.0 25.0 -34.0
Seattle, Washington 54.0 28.5 -25.5
Fort Myers, Florida 22.0 1.0 -21.0
Huntington, West Virginia 30.0 10.0 -20.0
Maywood, Illinois 45.5 26.0 -19.5
Durham, North Carolina 44.0 27.0 -17.0
Houston, Texas 31.0 14.0 -17.0
Albany, New York 26.0 11.0 -15.0
Los Gatos, California 90.0 75.0 -15.0

City, State wait times

Below is an analysis of city-level differences in wait times between Pediatric Otolaryngology and General Otolaryngology:

Example 1: High Variability in Wait Times Across Cities for Pediatric Otolaryngology

  • Gainesville, FL (Pediatric Otolaryngology):
    • Average Wait Time: 111 business days
    • Subspecialty: Pediatric Otolaryngology
    • Significance: Gainesville shows a significant increase in wait times for Pediatric Otolaryngology compared to General Otolaryngology, likely due to the higher demand for specialized pediatric services and possibly a shortage of pediatric otolaryngologists in the area.
  • New Orleans, LA (Pediatric Otolaryngology):
    • Average Wait Time: 88 business days
    • Subspecialty: Pediatric Otolaryngology
    • Significance: New Orleans has a substantially longer average wait time for Pediatric Otolaryngology, which could be due to a combination of high demand and a limited number of specialists available for pediatric cases.

Example 2: Regional Differences Impacting Pediatric Otolaryngology Wait Times

  • Los Angeles, CA (Pediatric Otolaryngology):
    • Average Wait Time: 68.5 business days
    • Subspecialty: Pediatric Otolaryngology
    • Region: AAO Region 10
    • Significance: In Los Angeles, the wait time for Pediatric Otolaryngology is significantly higher than for General Otolaryngology, indicating a strong demand for pediatric subspecialists in this populous region.
  • Seattle, WA (Pediatric Otolaryngology):
    • Average Wait Time: 107 business days
    • Subspecialty: Pediatric Otolaryngology
    • Region: AAO Region 9
    • Significance: Seattle shows a substantial difference in wait times, with Pediatric Otolaryngology appointments taking considerably longer to secure than General Otolaryngology appointments. This may suggest a higher demand for pediatric services or fewer pediatric specialists in the area.

Example 3: Comparison Between Subspecialties in the Same City

  • Alpharetta, GA (Comparison between Pediatric Otolaryngology and General Otolaryngology):
    • Pediatric Otolaryngology:
      • Average Wait Time: 67 business days
    • General Otolaryngology:
      • Average Wait Time: 8 business days
    • Significance: In Alpharetta, the difference in wait times between Pediatric Otolaryngology and General Otolaryngology is significant, with pediatric appointments taking much longer. This likely reflects a shortage of pediatric specialists relative to general otolaryngologists in the area.

Example 4: Impact of Centralized Appointment Centers

  • Madison, WI (Pediatric Otolaryngology and General Otolaryngology):
    • Central Number for Appointments: Yes
    • Average Wait Time: 117 business days for Pediatric Otolaryngology, 58.5 business days for General Otolaryngology
    • Significance: Madison’s centralized appointment system might contribute to more efficient scheduling for general otolaryngology, but pediatric cases still have significantly longer wait times, suggesting higher demand or fewer available pediatric specialists.

Example 5: Outlier City with Exceptionally Long Wait Times

  • Neptune, NJ (General Otolaryngology vs. Pediatric Otolaryngology):
    • General Otolaryngology: 121 business days
    • Pediatric Otolaryngology: 6 business days
    • Significance: Neptune is an outlier where General Otolaryngology appointments take much longer than Pediatric Otolaryngology appointments, possibly indicating an unusual local dynamic where general services are in higher demand or pediatric services are more readily available.

Summary of Findings:

These examples illustrate the significant variability in appointment wait times across different locations and subspecialties. The data highlight how local factors, such as the number of available specialists, regional demand, and the use of centralized appointment systems, can greatly influence patient access to care. Identifying cities with particularly long or short wait times can help healthcare administrators focus their efforts on improving access where it is most needed.


The analysis also highlights cities with the most significant differences in wait times:

Cities with the Largest Differences (Pediatric Otolaryngology vs. General Otolaryngology):

  1. Gainesville, FL: Pediatric Otolaryngology wait times are 88 days longer than General Otolaryngology.
  2. New Orleans, LA: Pediatric Otolaryngology wait times are 83 days longer than General Otolaryngology.
  3. Santa Barbara, CA: Pediatric Otolaryngology wait times are 79 days longer than General Otolaryngology.
  4. Colorado Springs, CO: Pediatric Otolaryngology wait times are 72 days longer than General Otolaryngology.
  5. Lansdowne, VA: Pediatric Otolaryngology wait times are 70 days longer than General Otolaryngology.

Cities with the Smallest Differences (or where General Otolaryngology takes longer):

  1. Neptune, NJ: General Otolaryngology wait times are 115 days longer than Pediatric Otolaryngology.
  2. Albuquerque, NM: General Otolaryngology wait times are 76 days longer than Pediatric Otolaryngology.
  3. Chapel Hill, NC: General Otolaryngology wait times are 69 days longer than Pediatric Otolaryngology.
  4. Roseville, CA: General Otolaryngology wait times are 62.5 days longer than Pediatric Otolaryngology.
  5. Salt Lake City, UT: General Otolaryngology wait times are 42.5 days longer than Pediatric Otolaryngology.

These findings emphasize the variability in wait times across different cities and subspecialties, with some cities experiencing significant differences in access to care depending on the type of otolaryngology needed.

Below is the corresponding analysis for Neurotology versus General Otolaryngology:

Example 1: High Variability in Wait Times Across Cities for Neurotology

  • Dallas, TX (Neurotology):
    • Average Wait Time: 119 business days
    • Subspecialty: Neurotology
    • Significance: Dallas exhibits a substantial difference in wait times, with Neurotology appointments taking 117 days longer than General Otolaryngology appointments. This significant disparity might reflect a higher demand for neurotology services and potentially fewer specialists in the area.
  • Downey, CA (Neurotology):
    • Average Wait Time: 111 business days
    • Subspecialty: Neurotology
    • Significance: In Downey, the wait time for Neurotology is 78 days longer than for General Otolaryngology. This could indicate a higher patient load or a shortage of neurotologists in this region.

Example 2: Regional Differences Impacting Neurotology Wait Times

  • San Antonio, TX (Neurotology):
    • Average Wait Time: 114 business days
    • Subspecialty: Neurotology
    • Region: AAO Region 6
    • Significance: San Antonio shows a large difference in wait times, with Neurotology appointments taking 73 days longer than General Otolaryngology appointments. This suggests a potential imbalance between supply and demand for neurotology services in this area.
  • Sewickley, PA (Neurotology):
    • Average Wait Time: 82 business days
    • Subspecialty: Neurotology
    • Region: AAO Region 3
    • Significance: Sewickley exhibits a 68-day longer wait for Neurotology compared to General Otolaryngology, indicating high demand or limited availability of neurotologists in this region.

Example 3: Comparison Between Subspecialties in the Same City

  • Charleston, SC (Comparison between Neurotology and General Otolaryngology):
    • Neurotology:
      • Average Wait Time: 66.5 business days
    • General Otolaryngology:
      • Average Wait Time: 7 business days
    • Significance: In Charleston, the wait time for Neurotology is 59.5 days longer than for General Otolaryngology, highlighting a significant disparity that may be due to a shortage of specialists or higher demand for neurotology services.

Example 4: Impact of Centralized Appointment Centers

  • White Plains, NY (Neurotology and General Otolaryngology):
    • Central Number for Appointments: Yes
    • Average Wait Time: 65 business days for Neurotology, 9 business days for General Otolaryngology
    • Significance: White Plains shows a significant 56-day difference in wait times, suggesting that even with a centralized appointment system, neurotology services are in much higher demand or have fewer available specialists.

Example 5: Outlier City with Exceptionally Long Wait Times

  • Salt Lake City, UT (General Otolaryngology vs. Neurotology):
    • General Otolaryngology: 87 business days
    • Neurotology: 40 business days
    • Significance: Salt Lake City is an outlier where General Otolaryngology appointments take 47 days longer than Neurotology appointments. This could indicate a high demand for general otolaryngology services or better availability of neurotologists.

Summary of Findings:

As with the pediatric comparison, these examples illustrate substantial variability in appointment wait times across locations and subspecialties, shaped by local factors such as the number of available specialists, regional demand, and the use of centralized appointment systems. Identifying cities with particularly long or short wait times can help healthcare administrators target improvements in access where they are most needed.


The analysis also highlights cities with the most significant differences in wait times:

Cities with the Largest Differences (Neurotology vs. General Otolaryngology):

  1. Dallas, TX: Neurotology wait times are 117 days longer than General Otolaryngology.
  2. Downey, CA: Neurotology wait times are 78 days longer than General Otolaryngology.
  3. San Antonio, TX: Neurotology wait times are 73 days longer than General Otolaryngology.
  4. Sewickley, PA: Neurotology wait times are 68 days longer than General Otolaryngology.
  5. Charleston, SC: Neurotology wait times are 59.5 days longer than General Otolaryngology.

Cities with the Smallest Differences (or where General Otolaryngology takes longer):

  1. Salt Lake City, UT: General Otolaryngology wait times are 47 days longer than Neurotology.
  2. San Jose, CA: General Otolaryngology wait times are 34 days longer than Neurotology.
  3. Seattle, WA: General Otolaryngology wait times are 25.5 days longer than Neurotology.
  4. Fort Myers, FL: General Otolaryngology wait times are 21 days longer than Neurotology.
  5. Huntington, WV: General Otolaryngology wait times are 20 days longer than Neurotology.

These findings emphasize the variability in wait times across different cities and subspecialties, with some cities experiencing significant differences in access to care depending on the type of otolaryngology needed.

Comparison of City Data for Pediatric Otolaryngology and Neurotology

The analysis of wait times for Pediatric Otolaryngology and Neurotology across various cities reveals interesting contrasts in how these subspecialties differ in terms of patient access and demand.

1. Cities with the Longest Wait Times for Both Subspecialties

Dallas, TX - Neurotology: - Wait Time: 119 business days - Difference with General Otolaryngology: +117 days - Pediatric Otolaryngology: - Wait Time: Not in the top 10 for the longest differences but relevant for analysis. - Significance: Dallas stands out as having the most significant wait time difference for Neurotology. This could be due to a shortage of neurotologists or a particularly high demand for these specialized services.

Los Angeles, CA - Neurotology: - Wait Time: Not in the top 10 for the longest differences. - Pediatric Otolaryngology: - Wait Time: 68.5 business days - Difference with General Otolaryngology: +65.5 days - Significance: Los Angeles shows a significant wait time for Pediatric Otolaryngology, which could reflect high demand for pediatric care in this populous city.

2. Cities with Substantial Differences Between Subspecialties

Charleston, SC - Neurotology: - Wait Time: 66.5 business days - Difference with General Otolaryngology: +59.5 days - Pediatric Otolaryngology: - Wait Time: Not listed in the top differences. - Significance: Charleston shows a substantial difference for Neurotology but not as much for Pediatric Otolaryngology, indicating that Neurotology might be under more pressure in this city.

Nashville, TN - Neurotology: - Wait Time: 39 business days - Difference with General Otolaryngology: +36 days - Pediatric Otolaryngology: - Wait Time: 71 business days - Difference with General Otolaryngology: +62 days - Significance: Nashville exhibits significant differences for both subspecialties, suggesting that the city has high demand or a shortage of specialists in both Neurotology and Pediatric Otolaryngology.

3. Cities with Negative Differences for Neurotology

Salt Lake City, UT - Neurotology: - Wait Time: 40 business days - Difference with General Otolaryngology: -47 days - Pediatric Otolaryngology: - Wait Time: 21 business days - Difference with General Otolaryngology: -42.5 days - Significance: Salt Lake City is an interesting case where General Otolaryngology has longer wait times than both Neurotology and Pediatric Otolaryngology. This might indicate that general services are under more strain than specialized care.

Seattle, WA - Neurotology: - Wait Time: 28.5 business days - Difference with General Otolaryngology: -25.5 days - Pediatric Otolaryngology: - Wait Time: 107 business days - Difference with General Otolaryngology: +53 days - Significance: Seattle shows a stark contrast where Neurotology has shorter wait times compared to Pediatric Otolaryngology. This could suggest better availability or less demand for Neurotology compared to Pediatric Otolaryngology.

4. Outlier Cities with Exceptionally Long or Short Wait Times

Neptune, NJ - Neurotology: - Not in top differences. - Pediatric Otolaryngology: - Wait Time: 6 business days - Difference with General Otolaryngology: -115 days - Significance: Neptune, NJ is an outlier for Pediatric Otolaryngology, where general services have significantly longer wait times. This suggests an efficient Pediatric Otolaryngology service or less demand compared to general care.

Dallas, TX - Neurotology: - Wait Time: 119 business days - Difference with General Otolaryngology: +117 days - Pediatric Otolaryngology: - Wait Time: Relevant but not listed in the top for longest differences. - Significance: Dallas is a city where Neurotology faces severe demand or supply issues, whereas Pediatric Otolaryngology also faces challenges but to a lesser extent.

Summary of Comparative Insights

  • Demand and Supply Imbalances: Certain cities, like Dallas and Nashville, experience significant pressures in both Neurotology and Pediatric Otolaryngology, indicating high demand or limited supply of specialists.
  • City-Specific Variability: Cities like Salt Lake City and Neptune show that General Otolaryngology can sometimes have longer wait times than these subspecialties, highlighting unique local healthcare dynamics.
  • Regional Differences: Cities like Los Angeles and Seattle exhibit substantial differences in how Neurotology and Pediatric Otolaryngology are managed, reflecting regional variations in patient needs and specialist availability.

This comparative analysis emphasizes the importance of local factors in determining wait times for specialized care and suggests areas where healthcare access might need targeted improvements.

Wait Time Figures

Waiting time in days (log scale) for Blue Cross/Blue Shield versus Medicaid. The plot places the insurance variable on the x-axis and the waiting time in days on the y-axis, with a line connecting the points that share the same NPI value.

Line Plot

Here we show a scatterplot that compares the private insurance and Medicaid wait times. Note that the graph is on a logarithmic scale. Points above the diagonal line are providers for whom the Medicaid waiting time was longer than the private insurance waiting time.

We also see a strong linear association, indicating that providers with longer waiting times for private insurance tend to also have longer waiting times for Medicaid.
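The scatterplot described above could be produced with something like the following sketch, assuming a wide data frame `wait_wide` with one row per NPI and hypothetical columns `private_days` and `medicaid_days`:

```r
# Private vs. Medicaid wait times per provider, on log-log axes
library(ggplot2)

ggplot(wait_wide, aes(x = private_days, y = medicaid_days)) +
  geom_point(alpha = 0.6) +
  geom_abline(slope = 1, intercept = 0, linetype = "dashed") +  # equal wait times
  scale_x_log10() +
  scale_y_log10() +
  labs(x = "Private insurance wait (business days, log scale)",
       y = "Medicaid wait (business days, log scale)")
```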

Scatter Plot

Density Plot

Dot Plot

Wait Time Figures by Scenario: Peds

Here we show a scatterplot that compares the private insurance and Medicaid wait times. Note that the graph is on a logarithmic scale. Points above the diagonal line are providers for whom the Medicaid waiting time was longer than the private insurance waiting time.

We also see a strong linear association, indicating that providers with longer waiting times for private insurance tend to also have longer waiting times for Medicaid.

Scatter Plot for Peds

Density Plot

Dot Plot

Wait Time Figures by Scenario: Neurotology

Here we show a scatterplot that compares the private insurance and Medicaid wait times. Note that the graph is on a logarithmic scale. Points above the diagonal line are providers for whom the Medicaid waiting time was longer than the private insurance waiting time.

We also see a strong linear association, indicating that providers with longer waiting times for private insurance tend to also have longer waiting times for Medicaid.

Scatter Plot for Neurotology

Density Plot

Dot Plot

Poisson Model

The models need to handle missing values (NA) in the business_days_until_appointment outcome variable (196 observations) as well as the skewed, non-normal distribution of the outcome.

Poisson Model

Because the “business_days_until_appointment” variable is a count of days until a new patient appointment, a Poisson regression model is appropriate for these data. It models the relationship between the predictor variables and the count of days until a new patient appointment.

In the Poisson regression model, random effects are used to account for variability that is not explained by the fixed effects alone. The random effects for “city” in this model capture the variability in the number of business days until an appointment that is attributed to differences between cities. By including city as a random effect, the model acknowledges that observations within the same city are likely to be more similar to each other than to observations from different cities. This clustering effect is accounted for by allowing the intercept to vary across cities. Random effects help to improve model fit by accounting for unexplained variability that is due to the hierarchical structure of the data (i.e., appointments are nested within cities). This results in more accurate estimates of the fixed effects and a better understanding of the variability in appointment wait times.

Model Formula

$$
\begin{align*}
P(\text{Business Days until New Patient Appointment} = x) &= \frac{e^{-\lambda} \cdot \lambda^x}{x!} \\
\log(\lambda) &= \beta_0 \\
&\quad + \beta_1 \cdot \text{Physician Subspecialty} \\
&\quad + \beta_2 \cdot \text{Physician Age} \\
&\quad + \beta_3 \cdot \text{Physician Academic Affiliation} \\
&\quad + \beta_4 \cdot \text{American Academy of Otolaryngology Region} \\
&\quad + \beta_5 \cdot \text{Physician Medical Training} \\
&\quad + \beta_6 \cdot \text{Physician Gender} \\
&\quad + \beta_7 \cdot \text{Minutes on Hold} \\
&\quad + \beta_8 \cdot \text{Year of Graduation from Medical School} \\
&\quad + \beta_9 \cdot \text{Number of Phone Transfers} \\
&\quad + \beta_{10} \cdot \text{Rurality} \\
&\quad + \beta_{11} \cdot \text{Central Appointment Phone Number} \\
&\quad + (1 \mid \text{Physician Practice City})
\end{align*}
$$

where:

  • Fixed effects include age, subspecialty, gender, AAO region, hold time in minutes, use of a central appointment number (e.g., an appointment center), call time in minutes, graduation year category, medical school location, number of transfers, CBSA type, and academic practice setting.

  • Random effects account for variability between cities, modeled as a random intercept.
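For reference, the following is a minimal sketch of how a model with this structure could be fit with lme4::glmer(); the variable and data frame names mirror those in the model output shown later in this document (df3, business_days_until_appointment, city, etc.).

```r
# Minimal sketch of the Poisson mixed model described above,
# using the variable names that appear in the model output below.
library(lme4)

m_poisson <- glmer(
  business_days_until_appointment ~ age + Subspecialty + gender + AAO_regions +
    hold_time_minutes + central_number_e_g_appointment_center +
    Call_time_minutes + Med_sch + ntransf + cbsatype10 + academic +
    (1 | city),                           # random intercept for physician practice city
  data   = df3,
  family = poisson(link = "log")
)

summary(m_poisson)
```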

Random Intercept:

The random effect for city suggests that there is substantial variability in appointment wait times between cities. Cities with a higher random intercept will tend to have longer wait times compared to cities with a lower random intercept.

  • Variance: The variance of the random intercept for city is 0.469. This indicates the extent of variability in the baseline number of business days until an appointment between different cities. A higher variance suggests that cities differ significantly in their average appointment wait times.

  • Standard Deviation: The standard deviation of the random intercept is 0.685 on the log scale. Because the model uses a log link, this corresponds to a typical multiplicative difference of about exp(0.685) ≈ 2 in expected wait times between cities.

Variance Inflation Factors

| Predictor | GVIF | Df | GVIF^(1/(2*Df)) |
|:----------|-----:|---:|----------------:|
| age | 1.278149 | 1 | 1.130552 |
| Subspecialty | 1.488909 | 2 | 1.104631 |
| gender | 1.302739 | 1 | 1.141376 |
| AAO_regions | 1.359411 | 9 | 1.017205 |
| hold_time_minutes | 1.687728 | 1 | 1.299126 |
| central_number_e_g_appointment_center | 1.081200 | 1 | 1.039808 |
| Call_time_minutes | 1.508518 | 1 | 1.228217 |
| Med_sch | 1.151244 | 1 | 1.072960 |
| ntransf | 1.179530 | 2 | 1.042143 |
| cbsatype10 | 1.046547 | 1 | 1.023009 |
| academic | 1.310655 | 1 | 1.144838 |
business days until appointment

| Predictors | Incidence Rate Ratios | CI | p |
|:-----------|----------------------:|:---|---:|
| (Intercept) | 25.71 | 19.47 – 33.96 | <0.001 |
| age | 1.00 | 1.00 – 1.00 | 0.221 |
| Subspecialty [Neurotology] | 1.23 | 1.16 – 1.30 | <0.001 |
| Subspecialty [Pediatric Otolaryngology] | 1.25 | 1.18 – 1.32 | <0.001 |
| genderFemale | 1.07 | 1.00 – 1.14 | 0.037 |
| AAO regions [Region 1] | 3.33 | 2.28 – 4.86 | <0.001 |
| AAO regions [Region 2] | 1.54 | 1.28 – 1.85 | <0.001 |
| AAO regions [Region 3] | 1.69 | 1.14 – 2.52 | 0.009 |
| AAO regions [Region 4] | 0.92 | 0.67 – 1.26 | 0.606 |
| AAO regions [Region 6] | 1.05 | 0.72 – 1.53 | 0.788 |
| AAO regions [Region 7] | 1.88 | 1.32 – 2.70 | 0.001 |
| AAO regions [Region 8] | 1.07 | 0.64 – 1.81 | 0.788 |
| AAO regions [Region 9] | 1.65 | 1.04 – 2.60 | 0.032 |
| AAO regions [Region 10] | 2.03 | 1.42 – 2.90 | <0.001 |
| hold time minutes | 0.99 | 0.97 – 1.02 | 0.620 |
| central number e g appointment center [No] | 0.75 | 0.67 – 0.83 | <0.001 |
| Call time minutes | 1.05 | 1.02 – 1.08 | <0.001 |
| Med sch [US Senior Medical Graduate] | 0.83 | 0.78 – 0.88 | <0.001 |
| ntransf [One transfer] | 1.23 | 1.12 – 1.35 | <0.001 |
| ntransf [Two transfers] | 0.87 | 0.63 – 1.19 | 0.372 |
| cbsatype10 [Micro] | 1.20 | 0.82 – 1.74 | 0.347 |
| academic [University] | 1.49 | 1.38 – 1.60 | <0.001 |
Random Effects
σ2 0.02
τ00 city 0.43
ICC 0.95
N city 165
Observations 341
Marginal R2 / Conditional R2 0.379 / 0.971

Model Performance

  • Conditional R²: 0.97, indicating substantial explanatory power including both fixed and random effects.
  • Marginal R²: 0.38, representing the explanatory power of the fixed effects alone.
## We fitted a poisson mixed model (estimated using ML and BOBYQA optimizer) to
## predict business_days_until_appointment with age, Subspecialty, gender,
## AAO_regions, hold_time_minutes, central_number_e_g_appointment_center,
## Call_time_minutes, Med_sch, ntransf, cbsatype10 and academic (formula:
## business_days_until_appointment ~ age + Subspecialty + gender + AAO_regions +
## hold_time_minutes + central_number_e_g_appointment_center + gender +
## Call_time_minutes + hold_time_minutes + Med_sch + ntransf + cbsatype10 +
## academic). The model included city as random effect (formula: ~1 | city). The
## model's total explanatory power is substantial (conditional R2 = 0.97) and the
## part related to the fixed effects alone (marginal R2) is of 0.38. The model's
## intercept, corresponding to age = 0, Subspecialty = General Otolaryngology,
## gender = Male, AAO_regions = Region 5, hold_time_minutes = 0,
## central_number_e_g_appointment_center = Yes, Call_time_minutes = 0, Med_sch =
## International Medical Graduate, ntransf = No transfers, cbsatype10 = Metro and
## academic = Private Practice, is at 3.25 (95% CI [2.97, 3.53], p < .001). Within
## this model:
## 
##   - The effect of age is statistically non-significant and negative (beta =
## -1.68e-03, 95% CI [-4.36e-03, 1.01e-03], p = 0.221; Std. beta = -0.02, 95% CI
## [-0.05, 0.01])
##   - The effect of Subspecialty [Neurotology] is statistically significant and
## positive (beta = 0.20, 95% CI [0.15, 0.26], p < .001; Std. beta = 0.20, 95% CI
## [0.15, 0.26])
##   - The effect of Subspecialty [Pediatric Otolaryngology] is statistically
## significant and positive (beta = 0.22, 95% CI [0.17, 0.28], p < .001; Std. beta
## = 0.22, 95% CI [0.17, 0.28])
##   - The effect of genderFemale is statistically significant and positive (beta =
## 0.07, 95% CI [4.26e-03, 0.13], p = 0.037; Std. beta = 0.07, 95% CI [4.26e-03,
## 0.13])
##   - The effect of AAO regions [Region 1] is statistically significant and
## positive (beta = 1.20, 95% CI [0.82, 1.58], p < .001; Std. beta = 1.20, 95% CI
## [0.82, 1.58])
##   - The effect of AAO regions [Region 2] is statistically significant and
## positive (beta = 0.43, 95% CI [0.24, 0.62], p < .001; Std. beta = 0.43, 95% CI
## [0.24, 0.62])
##   - The effect of AAO regions [Region 3] is statistically significant and
## positive (beta = 0.53, 95% CI [0.13, 0.92], p = 0.009; Std. beta = 0.53, 95% CI
## [0.13, 0.92])
##   - The effect of AAO regions [Region 4] is statistically non-significant and
## negative (beta = -0.08, 95% CI [-0.40, 0.23], p = 0.606; Std. beta = -0.08, 95%
## CI [-0.40, 0.23])
##   - The effect of AAO regions [Region 6] is statistically non-significant and
## positive (beta = 0.05, 95% CI [-0.32, 0.43], p = 0.788; Std. beta = 0.05, 95%
## CI [-0.32, 0.43])
##   - The effect of AAO regions [Region 7] is statistically significant and
## positive (beta = 0.63, 95% CI [0.28, 0.99], p < .001; Std. beta = 0.63, 95% CI
## [0.28, 0.99])
##   - The effect of AAO regions [Region 8] is statistically non-significant and
## positive (beta = 0.07, 95% CI [-0.45, 0.59], p = 0.788; Std. beta = 0.07, 95%
## CI [-0.45, 0.59])
##   - The effect of AAO regions [Region 9] is statistically significant and
## positive (beta = 0.50, 95% CI [0.04, 0.96], p = 0.032; Std. beta = 0.50, 95% CI
## [0.04, 0.96])
##   - The effect of AAO regions [Region 10] is statistically significant and
## positive (beta = 0.71, 95% CI [0.35, 1.07], p < .001; Std. beta = 0.71, 95% CI
## [0.35, 1.07])
##   - The effect of hold time minutes is statistically non-significant and negative
## (beta = -5.58e-03, 95% CI [-0.03, 0.02], p = 0.620; Std. beta = -8.43e-03, 95%
## CI [-0.04, 0.02])
##   - The effect of central number e g appointment center [No] is statistically
## significant and negative (beta = -0.29, 95% CI [-0.40, -0.19], p < .001; Std.
## beta = -0.29, 95% CI [-0.40, -0.19])
##   - The effect of Call time minutes is statistically significant and positive
## (beta = 0.05, 95% CI [0.02, 0.08], p < .001; Std. beta = 0.07, 95% CI [0.04,
## 0.11])
##   - The effect of Med sch [US Senior Medical Graduate] is statistically
## significant and negative (beta = -0.19, 95% CI [-0.25, -0.12], p < .001; Std.
## beta = -0.19, 95% CI [-0.25, -0.12])
##   - The effect of ntransf [One transfer] is statistically significant and
## positive (beta = 0.21, 95% CI [0.11, 0.30], p < .001; Std. beta = 0.21, 95% CI
## [0.11, 0.30])
##   - The effect of ntransf [Two transfers] is statistically non-significant and
## negative (beta = -0.14, 95% CI [-0.46, 0.17], p = 0.372; Std. beta = -0.14, 95%
## CI [-0.46, 0.17])
##   - The effect of cbsatype10 [Micro] is statistically non-significant and
## positive (beta = 0.18, 95% CI [-0.20, 0.56], p = 0.347; Std. beta = 0.18, 95%
## CI [-0.20, 0.56])
##   - The effect of academic [University] is statistically significant and positive
## (beta = 0.40, 95% CI [0.32, 0.47], p < .001; Std. beta = 0.40, 95% CI [0.32,
## 0.47])
## 
## Standardized parameters were obtained by fitting the model on a standardized
## version of the dataset. 95% Confidence Intervals (CIs) and p-values were
## computed using a Wald z-distribution approximation.

Plot Model

Poisson model assumptions

Here we check the binned residuals; because the outcome is skewed, the residuals will not be perfectly normally distributed. Collinearity was also tested. There is some heteroscedasticity here.

Here we see that the Normal model is quite reasonable for these data, as the residuals look approximately normally distributed.

Collinearity

Variance Inflation Factors (VIF) were calculated to assess multicollinearity among predictors. All VIF values were below the commonly used threshold of 5, suggesting that multicollinearity is not a concern for this model.

## OK: No outliers detected.
## - Based on the following method and threshold: cook (0.841).
## - For variable: (Whole model)
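Both of these checks could be reproduced with the performance package; a minimal sketch, assuming the fitted model object is named m_poisson as in the earlier sketch:

```r
# Collinearity and outlier diagnostics for the fitted Poisson mixed model
library(performance)

check_collinearity(m_poisson)               # (G)VIF for each predictor
check_outliers(m_poisson, method = "cook")  # Cook's distance-based outlier check
```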

The Intraclass Correlation Coefficient (ICC) is a statistical measure used to evaluate the proportion of variance in a dependent variable that can be attributed to differences between groups or clusters. It is commonly used in the context of hierarchical or mixed models to quantify the degree of similarity within clusters.

ICC = 0.947: This value indicates that approximately 94.7% of the variability in the number of business days until an appointment is due to differences between cities. In other words, the variability in wait times is largely explained by the city in which the appointment is scheduled.

High ICC Value: A high ICC value suggests that there is significant variability between clusters (in this case, cities) relative to the variability within clusters. This means that the city effect is a major determinant of wait times, and appointments in the same city tend to have similar wait times compared to appointments in different cities.
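The ICC reported above can be computed directly from the fitted model; a minimal sketch, again assuming the model object m_poisson:

```r
# Proportion of variance attributable to the city-level random intercept
library(performance)

icc(m_poisson)
```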

Overdispersion

Overdispersion is present in these data.

To gauge over-dispersion, we divide the Pearson chi-square by the residual degrees of freedom. This ratio should be around 1, with values larger than 1 indicating over-dispersion and values lower than 1 indicating under-dispersion. In our case the dispersion ratio is 6.36 (see the test output below), which indicates substantial over-dispersion. Under over-dispersion, p-values are smaller than they should be, so results that appear significant may be less significant than reported.
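A minimal sketch of this ratio, plus the automated check whose output appears below (assuming the fitted model object m_poisson):

```r
# Pearson chi-square divided by residual degrees of freedom (~1 expected)
pearson_chisq <- sum(residuals(m_poisson, type = "pearson")^2)
rdf           <- df.residual(m_poisson)
pearson_chisq / rdf            # values well above 1 suggest over-dispersion

performance::check_overdispersion(m_poisson)
```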

## # Overdispersion test
## 
##        dispersion ratio =    6.360
##   Pearson's Chi-Squared = 2022.400
##                 p-value =  < 0.001
## chisq = 2022.40, ratio = 6.36, rdf = 318, p < 0.001

## Warning: Autocorrelated residuals detected (p < .001).
## [1] FALSE

To test this assumption, the logLik function gives the log-likelihood of the model, and the residual deviance can be approximated as -2 * logLik(model). The residual degrees of freedom are the number of observations minus the number of parameters estimated (both fixed effects and random effects).

The number of parameters can be calculated as the number of fixed effects plus the number of random-effects parameters: the number of fixed effects is given by the length of fixef(model), and the number of random-effects parameters by the length of VarCorr(model).

If the resulting dispersion parameter is considerably greater than 1, it indicates overdispersion. If it is less than 1, it indicates underdispersion. A value around 1 is ideal for Poisson regression. A code sketch follows below.
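A minimal sketch of that calculation, assuming the fitted model object m_poisson:

```r
# Approximate dispersion parameter from the log-likelihood, as described above
resid_dev <- -2 * as.numeric(logLik(m_poisson))                      # residual deviance approximation
n_params  <- length(fixef(m_poisson)) + length(VarCorr(m_poisson))   # fixed effects + random-effect terms
rdf       <- nobs(m_poisson) - n_params                              # residual degrees of freedom
resid_dev / rdf                                                      # ~1 is ideal for a Poisson model
```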

## 'log Lik.' 6.67624 (df=23)

A residuals plot can help check the assumptions of the Poisson regression model; a minimal sketch follows below. If the plot shows random scatter, the assumptions are likely met. If the plot shows a clear pattern or trend, the assumptions might not be met, and a different modeling approach may be needed.
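```r
# Pearson residuals against fitted values; random scatter around zero is reassuring
plot(fitted(m_poisson), residuals(m_poisson, type = "pearson"),
     xlab = "Fitted values", ylab = "Pearson residuals",
     main = "Residuals vs. fitted")
abline(h = 0, lty = 2)
```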

Linearity of logit

The Poisson regression assumes that the log of the expected count is a linear function of the predictors. One way to check this is to plot the observed counts versus the predicted counts and see if the relationship looks linear.
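A minimal sketch of that check, assuming m_poisson was fit to the complete cases of df3:

```r
# Observed vs. predicted counts; points should scatter around the identity line
obs  <- model.frame(m_poisson)$business_days_until_appointment
pred <- predict(m_poisson, type = "response")

plot(pred, obs,
     xlab = "Predicted business days until appointment",
     ylab = "Observed business days until appointment")
abline(0, 1, lty = 2)
```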

Separate Models for each specialty

Peds Model

## Generalized linear mixed model fit by maximum likelihood (Adaptive
##   Gauss-Hermite Quadrature, nAGQ = 0) [glmerMod]
##  Family: poisson  ( log )
## Formula: business_days_until_appointment ~ age + Subspecialty + gender +  
##     AAO_regions + hold_time_minutes + central_number_e_g_appointment_center +  
##     gender + Call_time_minutes + hold_time_minutes + Med_sch +  
##     ntransf + cbsatype10 + academic + (1 | city)
##    Data: df3_peds
## 
##      AIC      BIC   logLik deviance df.resid 
##   1810.8   1881.2   -883.4   1766.8      160 
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -4.2663 -0.5961 -0.0146  0.3785  3.8652 
## 
## Random effects:
##  Groups Name        Variance Std.Dev.
##  city   (Intercept) 0.4838   0.6956  
## Number of obs: 182, groups:  city, 115
## 
## Fixed effects:
##                                          Estimate Std. Error z value
## (Intercept)                              2.605396   0.222955  11.686
## age                                      0.004691   0.002494   1.881
## SubspecialtyPediatric Otolaryngology     0.159386   0.037688   4.229
## genderFemale                             0.038721   0.047009   0.824
## AAO_regionsRegion 1                      0.641863   0.305510   2.101
## AAO_regionsRegion 2                      0.294109   0.115301   2.551
## AAO_regionsRegion 3                      0.277084   0.251061   1.104
## AAO_regionsRegion 4                     -0.175125   0.217055  -0.807
## AAO_regionsRegion 6                     -0.246928   0.227698  -1.084
## AAO_regionsRegion 7                      0.302548   0.276622   1.094
## AAO_regionsRegion 8                     -0.645827   0.452071  -1.429
## AAO_regionsRegion 9                      0.711875   0.379390   1.876
## AAO_regionsRegion 10                     0.702326   0.239657   2.931
## hold_time_minutes                       -0.057277   0.023110  -2.478
## central_number_e_g_appointment_centerNo  0.128886   0.080729   1.597
## Call_time_minutes                        0.126990   0.020471   6.203
## Med_schUS Senior Medical Graduate       -0.129757   0.049955  -2.597
## ntransfOne transfer                      0.338424   0.092194   3.671
## ntransfTwo transfers                    -0.321010   0.176605  -1.818
## cbsatype10Micro                         -0.149160   0.429527  -0.347
## academicUniversity                       0.439776   0.064714   6.796
##                                                     Pr(>|z|)    
## (Intercept)                             < 0.0000000000000002 ***
## age                                                 0.059918 .  
## SubspecialtyPediatric Otolaryngology         0.0000234683431 ***
## genderFemale                                        0.410121    
## AAO_regionsRegion 1                                 0.035645 *  
## AAO_regionsRegion 2                                 0.010748 *  
## AAO_regionsRegion 3                                 0.269744    
## AAO_regionsRegion 4                                 0.419768    
## AAO_regionsRegion 6                                 0.278164    
## AAO_regionsRegion 7                                 0.274076    
## AAO_regionsRegion 8                                 0.153121    
## AAO_regionsRegion 9                                 0.060605 .  
## AAO_regionsRegion 10                                0.003384 ** 
## hold_time_minutes                                   0.013194 *  
## central_number_e_g_appointment_centerNo             0.110372    
## Call_time_minutes                            0.0000000005530 ***
## Med_schUS Senior Medical Graduate                   0.009391 ** 
## ntransfOne transfer                                 0.000242 ***
## ntransfTwo transfers                                0.069114 .  
## cbsatype10Micro                                     0.728392    
## academicUniversity                           0.0000000000108 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Plot Peds Model

Neurotology Model

## Generalized linear mixed model fit by maximum likelihood (Adaptive
##   Gauss-Hermite Quadrature, nAGQ = 0) [glmerMod]
##  Family: poisson  ( log )
## Formula: business_days_until_appointment ~ age + Subspecialty + gender +  
##     AAO_regions + hold_time_minutes + central_number_e_g_appointment_center +  
##     gender + Call_time_minutes + hold_time_minutes + Med_sch +  
##     ntransf + cbsatype10 + academic + (1 | city)
##    Data: df3_neurotology
## 
##      AIC      BIC   logLik deviance df.resid 
##   2183.5   2248.0  -1070.8   2141.5      138 
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -6.9234 -1.2490 -0.0029  1.0690  7.4034 
## 
## Random effects:
##  Groups Name        Variance Std.Dev.
##  city   (Intercept) 0.6098   0.7809  
## Number of obs: 159, groups:  city, 86
## 
## Fixed effects:
##                                          Estimate Std. Error z value
## (Intercept)                              3.081517   0.255237  12.073
## age                                     -0.002748   0.002155  -1.275
## SubspecialtyNeurotology                  0.259882   0.040014   6.495
## genderFemale                             0.008117   0.063062   0.129
## AAO_regionsRegion 1                      0.726842   0.506797   1.434
## AAO_regionsRegion 2                     -0.115664   0.358206  -0.323
## AAO_regionsRegion 3                      0.413304   0.414453   0.997
## AAO_regionsRegion 4                      0.223270   0.283701   0.787
## AAO_regionsRegion 6                      0.139919   0.355780   0.393
## AAO_regionsRegion 7                      0.096975   0.416988   0.233
## AAO_regionsRegion 8                      0.173176   0.391512   0.442
## AAO_regionsRegion 9                      0.285758   0.344260   0.830
## AAO_regionsRegion 10                     0.680563   0.341575   1.992
## hold_time_minutes                        0.140053   0.017728   7.900
## central_number_e_g_appointment_centerNo -0.612550   0.092165  -6.646
## Call_time_minutes                        0.109642   0.022339   4.908
## Med_schUS Senior Medical Graduate       -0.163858   0.051257  -3.197
## ntransfOne transfer                      0.178167   0.074272   2.399
## cbsatype10Micro                          0.249744   0.230849   1.082
## academicUniversity                       0.353028   0.062203   5.675
##                                                     Pr(>|z|)    
## (Intercept)                             < 0.0000000000000002 ***
## age                                                  0.20215    
## SubspecialtyNeurotology                  0.00000000008315365 ***
## genderFemale                                         0.89758    
## AAO_regionsRegion 1                                  0.15152    
## AAO_regionsRegion 2                                  0.74677    
## AAO_regionsRegion 3                                  0.31865    
## AAO_regionsRegion 4                                  0.43129    
## AAO_regionsRegion 6                                  0.69412    
## AAO_regionsRegion 7                                  0.81610    
## AAO_regionsRegion 8                                  0.65825    
## AAO_regionsRegion 9                                  0.40650    
## AAO_regionsRegion 10                                 0.04632 *  
## hold_time_minutes                        0.00000000000000278 ***
## central_number_e_g_appointment_centerNo  0.00000000003006744 ***
## Call_time_minutes                        0.00000092018766971 ***
## Med_schUS Senior Medical Graduate                    0.00139 ** 
## ntransfOne transfer                                  0.01645 *  
## cbsatype10Micro                                      0.27932    
## academicUniversity                       0.00000001383167099 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Plot Neurotology Model

Below is a more detailed comparison of the pediatric and neurotology models, with estimates and p-values:

1. Random Effects (City-Level Variability):

  • Pediatrics Model:
    • Variance for City: 0.484
    • Standard Deviation: 0.696
    • Interpretation: This indicates moderate variability in appointment wait times attributable to differences between cities.
  • Neurotology Model:
    • Variance for City: 0.610
    • Standard Deviation: 0.781
    • Interpretation: This suggests somewhat greater variability in wait times across cities for Neurotologists compared to Pediatric Otolaryngologists.

2. Fixed Effects (Subspecialty Impact):

  • Pediatrics:
    • SubspecialtyPediatric Otolaryngology:
      • Estimate: 0.159
      • p-value: < 0.001
      • Interpretation: Pediatric Otolaryngologists generally have longer appointment wait times compared to General Otolaryngologists.
  • Neurotology:
    • SubspecialtyNeurotology:
      • Estimate: 0.260
      • p-value: < 0.001
      • Interpretation: Neurotologists also have significantly longer wait times compared to General Otolaryngologists in this dataset.

3. Regional Effects (AAO Regions):

  • Pediatrics:
    • Region 10:
      • Estimate: 0.702
      • p-value: 0.003
      • Interpretation: Region 10 shows a significant increase in wait times; Regions 1 and 2 are also significant.
    • Region 9:
      • Estimate: 0.712
      • p-value: 0.061
      • Interpretation: Region 9 shows a marginally significant increase in wait times.
  • Neurotology:
    • Region 10:
      • Estimate: 0.681
      • p-value: 0.046
      • Interpretation: Region 10 has a modest but statistically significant effect on increasing wait times.
    • Other Regions:
      • The remaining regions do not show significant effects on wait times relative to the reference region (Region 5).

4. Other Predictors:

  • Pediatrics:
    • Physician Age:
      • Estimate: 0.005
      • p-value: 0.060
      • Interpretation: Physician age was not a significant predictor, with only a marginal trend toward longer wait times for older physicians.
    • Hold Time Minutes:
      • Estimate: -0.057
      • p-value: 0.013
      • Interpretation: Longer hold times are associated with shorter wait times, possibly reflecting efficient appointment scheduling.
    • Call Time Minutes:
      • Estimate: 0.127
      • p-value: < 0.001
      • Interpretation: Longer calls are associated with longer wait times, suggesting more complex scheduling needs.
    • Central Number for Appointments:
      • Estimate: 0.129
      • p-value: 0.110
      • Interpretation: Not significant in this model, indicating that the presence of a central appointment system does not significantly affect wait times for Pediatric Otolaryngology in this dataset.
    • Academic Affiliation (University):
      • Estimate: 0.440
      • p-value: < 0.001
      • Interpretation: Affiliation with a university is associated with longer wait times.
  • Neurotology:
    • Physician Age:
      • Estimate: -0.003
      • p-value: 0.202
      • Interpretation: Physician age was not a significant predictor of wait times in the neurotology model.
    • Central Number for Appointments:
      • Estimate: -0.613
      • p-value: < 0.001
      • Interpretation: Practices without a central appointment system have significantly shorter wait times than those scheduling through a central number.
    • Hold Time Minutes:
      • Estimate: 0.140
      • p-value: < 0.001
      • Interpretation: Longer hold times are associated with longer wait times, possibly due to inefficiencies in appointment scheduling.
    • Call Time Minutes:
      • Estimate: 0.110
      • p-value: < 0.001
      • Interpretation: Longer calls are associated with longer wait times, similar to the findings in Pediatrics.
    • Number of Transfers (One Transfer):
      • Estimate: 0.178
      • p-value: 0.016
      • Interpretation: One call transfer is associated with longer wait times, suggesting potential inefficiencies.
    • Number of Transfers (Two Transfers):
      • This level did not appear in the neurotology model output.
    • Academic Affiliation (University):
      • Estimate: 0.353
      • p-value: < 0.001
      • Interpretation: Affiliation with a university is associated with longer wait times for Neurotologists as well.

5. Model Performance:

  • Pediatrics:
    • AIC: 1810.8
    • BIC: 1881.2
    • Log-Likelihood: -883.4
    • Conditional R²: 0.971
    • Marginal R²: 0.431
    • ICC: 0.949
    • Interpretation: The model explains 97.1% of the variability in wait times, with a significant portion attributed to city-level variability.
  • Neurotology:
    • AIC: 2183.5
    • BIC: 2248.0
    • Log-Likelihood: -1070.8
    • Conditional R²: 0.973
    • Marginal R²: 0.284
    • ICC: 0.962
    • Interpretation: The model explains 97.3% of the variability in wait times, with high city-level variability.

Key Takeaways:

  • Pediatric Otolaryngology:
    • Significant factors influencing wait times include subspecialty, hold time, call time, medical school location, number of transfers, and academic affiliation. Regional effects (Regions 1, 2, and 10) are also impactful.
    • Longer hold times tend to be associated with shorter wait times, while physician age was not a significant predictor.
    • Centralized appointment systems and gender do not significantly affect wait times in this model.
  • Neurotology:
    • Subspecialty, hold time, call time, the presence of a centralized appointment number, medical school location, and academic affiliation are key factors; physician age and gender are not significant predictors.
    • Practices that schedule through a central appointment number tend to have longer wait times, and longer hold and call times are associated with longer waits.
    • Gender is not associated with wait times in this model.
  • Regional Effects:
    • Regional effects are more pronounced in Pediatrics, with Regions 1, 2, and 10 showing significant increases in wait times.
    • Neurotology also shows regional variability, but only Region 10 reaches statistical significance.

Summary:

This updated analysis shows that while both Pediatric Otolaryngologists and Neurotologists experience variability in appointment wait times, the drivers differ slightly. In Pediatrics, subspecialty, age, hold time, and academic affiliation are significant predictors, with notable regional effects. For Neurotology, physician age, centralized appointment systems, and call/hold times are more significant, with gender also playing a role. Regional variability is important in both subspecialties, but the effects are more pronounced in Pediatrics.

Wait Times split by scenario

## Analysis revealed a significant difference in wait times contingent on provider specialty. Specifically, patients scheduling with a neurotologist encountered a wait time that was 29.7% longer than those scheduling with a general otolaryngologist (IRR: 1.30; CI: 1.2-1.4; P < 0.01), with respective median wait times of 40 days (25th percentile: 20 days, 75th percentile: 64 days) and 28 days (25th percentile: 11 days, 75th percentile: 44 days). Similarly, wait times to see a pediatric otolaryngologist were 17.3% longer than for a general otolaryngologist (P < 0.01), with respective median wait times of 58 days (25th percentile: 31 days, 75th percentile: 84 days) and 28 days (25th percentile: 15 days, 75th percentile: 51 days).
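The medians and interquartile ranges quoted above can be recomputed from the analysis data; a minimal dplyr sketch, assuming the data frame df3 with the Subspecialty and business_days_until_appointment columns used elsewhere in this report:

```r
library(dplyr)

df3 %>%
  group_by(Subspecialty) %>%
  summarise(
    median_wait = median(business_days_until_appointment, na.rm = TRUE),
    q1          = quantile(business_days_until_appointment, 0.25, na.rm = TRUE),
    q3          = quantile(business_days_until_appointment, 0.75, na.rm = TRUE),
    n           = n()
  )
```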

Finding a Better Mixed Effects Model

Scale the Continuous Variables

Find the best mixed-effects model

Visualize the weight of each model in the confidence set

Plot variable importance from the best model

mini_poisson model

Significant Variables with Poisson model

We will need to check interactions among the significant predictors of business_days_until_appointment. “Significant variables in the model estimates” refer to predictors that have a significant effect on the response variable individually, while the “ANOVA” assesses the overall significance of the model and the joint significance of all predictors.
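To see both views side by side, one sketch (assuming the fitted glmer object mini_poisson described below) is to print the individual coefficient estimates alongside a Wald analysis-of-deviance table:

```r
library(car)          # Anova() for joint (per-predictor) Wald chi-square tests
library(broom.mixed)  # tidy() for individual coefficient estimates and p-values

# Joint significance of each predictor (all levels of a factor tested together)
Anova(mini_poisson)

# Individual fixed-effect estimates, standard errors, and p-values
tidy(mini_poisson, effects = "fixed")
```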

Significant Predictors
Predictor
Subspecialty
gender
AAO_regions
central_number_e_g_appointment_center
Call_time_minutes
Med_sch
ntransf
academic
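A sketch of a call that could produce the model summary printed below, assuming the analysis data frame df3 and the formula shown in the output:

```r
library(lme4)

mini_poisson <- glmer(
  business_days_until_appointment ~ Subspecialty + gender + AAO_regions +
    central_number_e_g_appointment_center + Call_time_minutes +
    Med_sch + ntransf + cbsatype10 + academic + (1 | city),
  data   = df3,
  family = poisson(link = "log"),
  nAGQ   = 0  # adaptive Gauss-Hermite quadrature with nAGQ = 0, as in the output
)

summary(mini_poisson)

# One option for the collinearity (GVIF-style) diagnostics shown after the summary
performance::check_collinearity(mini_poisson)
```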
## Generalized linear mixed model fit by maximum likelihood (Adaptive
##   Gauss-Hermite Quadrature, nAGQ = 0) [glmerMod]
##  Family: poisson  ( log )
## Formula: 
## business_days_until_appointment ~ Subspecialty + gender + AAO_regions +  
##     central_number_e_g_appointment_center + Call_time_minutes +  
##     Med_sch + ntransf + cbsatype10 + academic + (1 | city)
##    Data: df3
## 
##      AIC      BIC   logLik deviance df.resid 
##   4745.8   4826.8  -2351.9   4703.8      329 
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -5.7300 -1.4915 -0.0672  0.8563 12.5184 
## 
## Random effects:
##  Groups Name        Variance Std.Dev.
##  city   (Intercept) 0.4427   0.6653  
## Number of obs: 350, groups:  city, 166
## 
## Fixed effects:
##                                         Estimate Std. Error z value
## (Intercept)                              3.15131    0.12336  25.545
## SubspecialtyNeurotology                  0.18190    0.02746   6.625
## SubspecialtyPediatric Otolaryngology     0.28573    0.02808  10.176
## genderFemale                             0.01129    0.03048   0.370
## AAO_regionsRegion 1                      1.25871    0.19179   6.563
## AAO_regionsRegion 2                      0.43572    0.09490   4.592
## AAO_regionsRegion 3                      0.51982    0.20346   2.555
## AAO_regionsRegion 4                     -0.09520    0.16071  -0.592
## AAO_regionsRegion 6                      0.05567    0.19296   0.288
## AAO_regionsRegion 7                      0.73184    0.18147   4.033
## AAO_regionsRegion 8                      0.07998    0.26774   0.299
## AAO_regionsRegion 9                      0.49263    0.23464   2.099
## AAO_regionsRegion 10                     0.71178    0.18303   3.889
## central_number_e_g_appointment_centerNo -0.29263    0.05421  -5.398
## Call_time_minutes                        0.05863    0.01079   5.433
## Med_schUS Senior Medical Graduate       -0.20382    0.02945  -6.921
## ntransfOne transfer                      0.21780    0.04746   4.589
## ntransfTwo transfers                    -0.10216    0.15791  -0.647
## cbsatype10Micro                          0.16840    0.18504   0.910
## academicUniversity                       0.31639    0.03567   8.870
##                                                     Pr(>|z|)    
## (Intercept)                             < 0.0000000000000002 ***
## SubspecialtyNeurotology                     0.00000000003481 ***
## SubspecialtyPediatric Otolaryngology    < 0.0000000000000002 ***
## genderFemale                                        0.711083    
## AAO_regionsRegion 1                         0.00000000005278 ***
## AAO_regionsRegion 2                         0.00000440037647 ***
## AAO_regionsRegion 3                                 0.010621 *  
## AAO_regionsRegion 4                                 0.553611    
## AAO_regionsRegion 6                                 0.772969    
## AAO_regionsRegion 7                         0.00005513302263 ***
## AAO_regionsRegion 8                                 0.765158    
## AAO_regionsRegion 9                                 0.035774 *  
## AAO_regionsRegion 10                                0.000101 ***
## central_number_e_g_appointment_centerNo     0.00000006720485 ***
## Call_time_minutes                           0.00000005532190 ***
## Med_schUS Senior Medical Graduate           0.00000000000447 ***
## ntransfOne transfer                         0.00000445169301 ***
## ntransfTwo transfers                                0.517667    
## cbsatype10Micro                                     0.362771    
## academicUniversity                      < 0.0000000000000002 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##                                           GVIF Df GVIF^(1/(2*Df))
## Subspecialty                          1.318435  2        1.071556
## gender                                1.191430  1        1.091527
## AAO_regions                           1.244218  9        1.012213
## central_number_e_g_appointment_center 1.066986  1        1.032950
## Call_time_minutes                     1.133127  1        1.064484
## Med_sch                               1.093427  1        1.045670
## ntransf                               1.114975  2        1.027581
## cbsatype10                            1.038557  1        1.019096
## academic                              1.271702  1        1.127698
Business days until appointment

Predictors                                    Incidence Rate Ratios   CI              p
(Intercept)                                   23.37                   18.35 – 29.76   <0.001
Subspecialty [Neurotology]                    1.20                    1.14 – 1.27     <0.001
Subspecialty [Pediatric Otolaryngology]       1.33                    1.26 – 1.41     <0.001
gender [Female]                               1.01                    0.95 – 1.07     0.711
AAO regions [Region 1]                        3.52                    2.42 – 5.13     <0.001
AAO regions [Region 2]                        1.55                    1.28 – 1.86     <0.001
AAO regions [Region 3]                        1.68                    1.13 – 2.51     0.011
AAO regions [Region 4]                        0.91                    0.66 – 1.25     0.554
AAO regions [Region 6]                        1.06                    0.72 – 1.54     0.773
AAO regions [Region 7]                        2.08                    1.46 – 2.97     <0.001
AAO regions [Region 8]                        1.08                    0.64 – 1.83     0.765
AAO regions [Region 9]                        1.64                    1.03 – 2.59     0.036
AAO regions [Region 10]                       2.04                    1.42 – 2.92     <0.001
central number e g appointment center [No]    0.75                    0.67 – 0.83     <0.001
Call time minutes                             1.06                    1.04 – 1.08     <0.001
Med sch [US Senior Medical Graduate]          0.82                    0.77 – 0.86     <0.001
ntransf [One transfer]                        1.24                    1.13 – 1.36     <0.001
ntransf [Two transfers]                       0.90                    0.66 – 1.23     0.518
cbsatype10 [Micro]                            1.18                    0.82 – 1.70     0.363
academic [University]                         1.37                    1.28 – 1.47     <0.001

Random Effects
σ2                                            0.02
τ00 city                                      0.44
ICC                                           0.95
N city                                        166
Observations                                  350
Marginal R2 / Conditional R2                  0.378 / 0.972

Plot for ‘mini_poisson’

Interpretation of Effect Plots for Each Predictor

The following interpretation is based on the effect plots showing the relationship between each predictor and the number of business days until an appointment; a short code sketch for reproducing such plots appears at the end of this section.

  1. Subspecialty (General Otolaryngology, Neurotology, Pediatric Otolaryngology):
    • General Otolaryngology serves as the reference category with the lowest number of business days until an appointment.
    • Neurotology and Pediatric Otolaryngology have higher predicted business days, indicating longer waiting times compared to General Otolaryngology.
  2. Gender (Male/Female):
    • The difference between males and females is small: females show slightly more business days until an appointment than males, but the effect is not statistically significant in this model (p = 0.711).
  3. AAO Regions (1-10):
    • The number of business days until an appointment varies across AAO regions.
    • Regions 1, 7, and 10 show the highest numbers of business days, indicating that these regions experience longer waiting times compared to others.
  4. Central Number (Yes/No):
    • “No” for the central number is associated with a lower number of business days until an appointment compared to “Yes.”
    • This suggests that having a centralized appointment system is associated with longer waiting times, consistent with the IRR of 0.75 for the “No” level in the model above.
  5. Call Time Minutes (0-5):
    • There is a positive relationship between call time minutes and the number of business days until an appointment.
    • As call time increases, the number of business days until an appointment also increases, indicating that longer call times are associated with longer waiting times.
  6. Medical School Training (US Senior Medical Graduate, International Medical Graduate):
    • US senior medical graduates have fewer business days until an appointment compared to international medical graduates.
    • This suggests that internationally trained physicians are associated with longer waiting times in this model.
  7. Number of Transfers (None, One, Two):
    • One transfer is associated with longer waiting times than no transfers.
    • Two transfers show a point estimate slightly below no transfers, but this effect is not statistically significant (p = 0.518) and should be interpreted cautiously.
  8. CBSA Type (Metro, Micro):
    • Metro areas have slightly fewer business days until an appointment compared to Micro areas.
    • This indicates that Metro areas tend to have shorter waiting times than Micro areas.
  9. Academic Status (Private Practice, University):
    • University-affiliated providers show a higher number of business days until an appointment compared to Private Practice providers.
    • This suggests that academic affiliations might be associated with longer waiting times due to different administrative or operational structures.
## University-affiliated surgeons recorded longer wait times compared to those in private practice (IRR: 1.107; 95% CI: 1.010-1.213; p = 0.0291).
## Additionally, offices without central appointment scheduling reported shorter waiting periods (IRR for the “No” level: 0.825; 95% CI: 0.734-0.928; p < 0.001); equivalently, central appointment scheduling was associated with longer waits.
## AAO Region 3 had longer wait times (IRR: 1.596; 95% CI: 0.957-2.662).
## AAO Region 4 had shorter wait times (IRR: 0.886; 95% CI: 0.568-1.383).
## AAO Region 5 had longer wait times (IRR: 1.218; 95% CI: 0.772-1.923).
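A minimal sketch for reproducing effect plots of this kind with ggeffects (one of the packages cited below), assuming the fitted model object mini_poisson:

```r
library(ggeffects)

# Model-based predictions for single predictors, back-transformed to business days
plot(ggpredict(mini_poisson, terms = "Subspecialty"))
plot(ggpredict(mini_poisson, terms = "academic"))
plot(ggpredict(mini_poisson, terms = "Call_time_minutes"))
```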

ICC

## The intraclass correlation coefficient (ICC) is 0.955, indicating substantial agreement within groups. The pseudo-RMSE, approximating the standard deviation of the residuals, is 2.488, and the sigma value, indicating the variability of the random effects, is 1.000. A second reported ICC of 0.594 likewise indicates agreement within groups, with the same pseudo-RMSE (2.488) and sigma (1.000).
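These quantities can be reproduced with the performance package; a sketch assuming the fitted model object mini_poisson:

```r
library(performance)

icc(mini_poisson)               # adjusted and unadjusted intraclass correlation
performance_rmse(mini_poisson)  # (pseudo-)RMSE of the residuals
sigma(mini_poisson)             # residual sigma (fixed at 1 for a Poisson GLMM)
```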

Poisson Interactions

To include interaction terms in a regression model, you can use the : operator or the * operator in the formula. The : operator adds only the interaction between two variables, while the * operator adds the interaction and also includes the main effects of the two variables. Here this means adding interactions among the significant predictors identified above (for example Subspecialty x gender, Subspecialty x AAO_regions, Call_time_minutes x central_number_e_g_appointment_center, and academic x Subspecialty), in addition to the main effects of these variables.
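A short sketch of the * syntax for the interaction model, assuming the same outcome, data, and random intercept as mini_poisson; the exact set of interaction terms is an assumption based on the comparison sections that follow:

```r
library(lme4)

# 'a * b' expands to a + b + a:b, i.e. both main effects plus their interaction
mini_poisson_interaction <- glmer(
  business_days_until_appointment ~ Subspecialty * gender +
    Subspecialty * AAO_regions +
    Call_time_minutes * central_number_e_g_appointment_center +
    academic * Subspecialty +
    Med_sch + ntransf + cbsatype10 + (1 | city),
  data = df3, family = poisson(link = "log"), nAGQ = 0
)
```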

Please note that interpreting interaction effects can be complex, especially in nonlinear models such as Poisson regression. The coefficient for an interaction term represents the difference in the log rate of days for a one-unit change in one variable across levels of the other variable. However, the actual effects on the rate of days can vary depending on the values of the other variables.

Based on the model mini_poisson_interaction, several interaction terms may be of interest.

Subspecialty x gender

## Computing estimated marginal means...
## Estimated data:
##  gender  Subspecialty                 rate       SE  df asymp.LCL asymp.UCL
##  Male    General Otolaryngology   44.15257 5.180249 Inf  35.08228  55.56792
##  Female  General Otolaryngology   44.65394 5.479850 Inf  35.10766  56.79599
##  Male    Neurotology              52.96060 6.392200 Inf  41.80374  67.09507
##  Female  Neurotology              53.56199 6.737813 Inf  41.85816  68.53830
##  Male    Pediatric Otolaryngology 58.75540 6.913129 Inf  46.65475  73.99455
##  Female  Pediatric Otolaryngology 59.42259 7.246505 Inf  46.78953  75.46655
## 
## Results are averaged over the levels of: AAO_regions, central_number_e_g_appointment_center, Med_sch, ntransf, cbsatype10, academic 
## Confidence level used: 0.95 
## Intervals are back-transformed from the log scale 
## Range of estimated marginal means with CIs: 35.08228 75.46655 
## Creating the plot...
## Saving plot to: Ari/Figures/interaction_Subspecialty_comparison_plot_20240908_144712.png
## Plot saved successfully.
## $data
## Subspecialty = General Otolaryngology:
##  gender      rate       SE  df asymp.LCL asymp.UCL
##  Male    44.15257 5.180249 Inf  35.08228  55.56792
##  Female  44.65394 5.479850 Inf  35.10766  56.79599
## 
## Subspecialty = Neurotology:
##  gender      rate       SE  df asymp.LCL asymp.UCL
##  Male    52.96060 6.392200 Inf  41.80374  67.09507
##  Female  53.56199 6.737813 Inf  41.85816  68.53830
## 
## Subspecialty = Pediatric Otolaryngology:
##  gender      rate       SE  df asymp.LCL asymp.UCL
##  Male    58.75540 6.913129 Inf  46.65475  73.99455
##  Female  59.42259 7.246505 Inf  46.78953  75.46655
## 
## Results are averaged over the levels of: AAO_regions, central_number_e_g_appointment_center, Med_sch, ntransf, cbsatype10, academic 
## Confidence level used: 0.95 
## Intervals are back-transformed from the log scale 
## 
## $plot
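Estimated marginal means like those above come from emmeans; a sketch for the gender-within-subspecialty grid, assuming the fitted interaction model mini_poisson_interaction:

```r
library(emmeans)

# Back-transformed rates (business days) by gender within each subspecialty,
# averaged over the remaining covariates
emm <- emmeans(mini_poisson_interaction, ~ gender | Subspecialty, type = "response")
emm

# Male vs. female rate ratios within each subspecialty
pairs(emm)
```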

Subspecialty x AAO_regions

## Computing estimated marginal means...
## Estimated data:
##  Subspecialty             AAO_regions      rate        SE  df asymp.LCL
##  General Otolaryngology   Region 5     29.20099  4.431855 Inf  21.68753
##  Neurotology              Region 5     35.02631  5.386741 Inf  25.91117
##  Pediatric Otolaryngology Region 5     38.85879  5.905800 Inf  28.84847
##  General Otolaryngology   Region 1    102.81267 19.118867 Inf  71.40994
##  Neurotology              Region 1    123.32285 23.342896 Inf  85.09925
##  Pediatric Otolaryngology Region 1    136.81649 25.230348 Inf  95.31653
##  asymp.UCL
##   39.31742
##   47.34802
##   52.34266
##  148.02485
##  178.71515
##  196.38515
## 
## Results are averaged over the levels of: gender, central_number_e_g_appointment_center, Med_sch, ntransf, cbsatype10, academic 
## Confidence level used: 0.95 
## Intervals are back-transformed from the log scale 
## Range of estimated marginal means with CIs: 18.76498 196.3851 
## Creating the plot...
## Saving plot to: Ari/Figures/interaction_AAO_regions_comparison_plot_20240908_144713.png
## Plot saved successfully.
## $data
## AAO_regions = Region 5:
##  Subspecialty                  rate        SE  df asymp.LCL asymp.UCL
##  General Otolaryngology    29.20099  4.431855 Inf  21.68753  39.31742
##  Neurotology               35.02631  5.386741 Inf  25.91117  47.34802
##  Pediatric Otolaryngology  38.85879  5.905800 Inf  28.84847  52.34266
## 
## AAO_regions = Region 1:
##  Subspecialty                  rate        SE  df asymp.LCL asymp.UCL
##  General Otolaryngology   102.81267 19.118867 Inf  71.40994 148.02485
##  Neurotology              123.32285 23.342896 Inf  85.09925 178.71515
##  Pediatric Otolaryngology 136.81649 25.230348 Inf  95.31653 196.38515
## 
## AAO_regions = Region 2:
##  Subspecialty                  rate        SE  df asymp.LCL asymp.UCL
##  General Otolaryngology    45.14701  7.140802 Inf  33.11284  61.55476
##  Neurotology               54.15342  8.739675 Inf  39.46870  74.30175
##  Pediatric Otolaryngology  60.07874  9.490224 Inf  44.08211  81.88027
## 
## AAO_regions = Region 3:
##  Subspecialty                  rate        SE  df asymp.LCL asymp.UCL
##  General Otolaryngology    49.10811  9.937696 Inf  33.02943  73.01389
##  Neurotology               58.90473 12.033880 Inf  39.46888  87.91146
##  Pediatric Otolaryngology  65.34992 13.204024 Inf  43.98036  97.10269
## 
## AAO_regions = Region 4:
##  Subspecialty                  rate        SE  df asymp.LCL asymp.UCL
##  General Otolaryngology    26.54936  4.108584 Inf  19.60326  35.95669
##  Neurotology               31.84571  4.995204 Inf  23.41717  43.30792
##  Pediatric Otolaryngology  35.33017  5.479225 Inf  26.06970  47.88014
## 
## AAO_regions = Region 6:
##  Subspecialty                  rate        SE  df asymp.LCL asymp.UCL
##  General Otolaryngology    30.87259  5.683075 Inf  21.52200  44.28571
##  Neurotology               37.03139  6.902352 Inf  25.69880  53.36138
##  Pediatric Otolaryngology  41.08326  7.568889 Inf  28.63159  58.95006
## 
## AAO_regions = Region 7:
##  Subspecialty                  rate        SE  df asymp.LCL asymp.UCL
##  General Otolaryngology    60.70584 10.727069 Inf  42.93568  85.83070
##  Neurotology               72.81610 13.018095 Inf  51.29195 103.37263
##  Pediatric Otolaryngology  80.78343 14.246969 Inf  57.17476 114.14062
## 
## AAO_regions = Region 8:
##  Subspecialty                  rate        SE  df asymp.LCL asymp.UCL
##  General Otolaryngology    31.63237  8.427737 Inf  18.76498  53.32309
##  Neurotology               37.94273 10.145388 Inf  22.46614  64.08091
##  Pediatric Otolaryngology  42.09432 11.218905 Inf  24.96679  70.97154
## 
## AAO_regions = Region 9:
##  Subspecialty                  rate        SE  df asymp.LCL asymp.UCL
##  General Otolaryngology    47.79074 11.184387 Inf  30.20915  75.60474
##  Neurotology               57.32455 13.497963 Inf  36.13366  90.94302
##  Pediatric Otolaryngology  63.59684 14.888691 Inf  40.19391 100.62614
## 
## AAO_regions = Region 10:
##  Subspecialty                  rate        SE  df asymp.LCL asymp.UCL
##  General Otolaryngology    59.50035 10.695578 Inf  41.83218  84.63083
##  Neurotology               71.37012 12.971443 Inf  49.98175 101.91108
##  Pediatric Otolaryngology  79.17924 14.238938 Inf  55.65937 112.63785
## 
## Results are averaged over the levels of: gender, central_number_e_g_appointment_center, Med_sch, ntransf, cbsatype10, academic 
## Confidence level used: 0.95 
## Intervals are back-transformed from the log scale 
## 
## $plot

Call_time_minutes x gender

## Computing estimated marginal means...
## Estimated data:
##  gender  Call_time_minutes     rate       SE  df asymp.LCL asymp.UCL
##  Male             3.147143 51.60030 6.054047 Inf  41.00004  64.94117
##  Female           3.147143 52.18624 6.382570 Inf  41.06303  66.32253
## 
## Results are averaged over the levels of: Subspecialty, AAO_regions, central_number_e_g_appointment_center, Med_sch, ntransf, cbsatype10, academic 
## Confidence level used: 0.95 
## Intervals are back-transformed from the log scale 
## Range of estimated marginal means with CIs: 41.00004 66.32253 
## Creating the plot...
## Saving plot to: Ari/Figures/interaction_gender_comparison_plot_20240908_144714.png
## Plot saved successfully.
## $data
## Call_time_minutes = 3.147143:
##  gender      rate       SE  df asymp.LCL asymp.UCL
##  Male    51.60030 6.054047 Inf  41.00004  64.94117
##  Female  52.18624 6.382570 Inf  41.06303  66.32253
## 
## Results are averaged over the levels of: Subspecialty, AAO_regions, central_number_e_g_appointment_center, Med_sch, ntransf, cbsatype10, academic 
## Confidence level used: 0.95 
## Intervals are back-transformed from the log scale 
## 
## $plot

## $emmeans
## Call_time_minutes = 3.15:
##  gender  rate   SE  df asymp.LCL asymp.UCL
##  Male    51.6 6.05 Inf      41.0      64.9
##  Female  52.2 6.38 Inf      41.1      66.3
## 
## Results are averaged over the levels of: Subspecialty, AAO_regions, central_number_e_g_appointment_center, Med_sch, ntransf, cbsatype10, academic 
## Confidence level used: 0.95 
## Intervals are back-transformed from the log scale 
## 
## $contrasts
## Call_time_minutes = 3.14714285714286:
##  contrast       ratio     SE  df null z.ratio p.value
##  Male / Female  0.989 0.0301 Inf    1  -0.370  0.7111
## 
## Results are averaged over the levels of: Subspecialty, AAO_regions, central_number_e_g_appointment_center, Med_sch, ntransf, cbsatype10, academic 
## Tests are performed on the log scale

Call_time_minutes * central_number_e_g_appointment_center

## [1] "Subspecialty"                         
## [2] "gender"                               
## [3] "AAO_regions"                          
## [4] "central_number_e_g_appointment_center"
## [5] "Call_time_minutes"                    
## [6] "Med_sch"                              
## [7] "ntransf"                              
## [8] "academic"
## Computing estimated marginal means...
## Estimated data:
##  Call_time_minutes central_number_e_g_appointment_center     rate       SE  df
##           3.147143 Yes                                   60.06863 7.332949 Inf
##           3.147143 No                                    44.82915 5.458384 Inf
##  asymp.LCL asymp.UCL
##   47.28640  76.30608
##   35.31169  56.91183
## 
## Results are averaged over the levels of: Subspecialty, gender, AAO_regions, Med_sch, ntransf, cbsatype10, academic 
## Confidence level used: 0.95 
## Intervals are back-transformed from the log scale 
## Range of estimated marginal means with CIs: 35.31169 76.30608 
## Creating the plot...
## Saving plot to: Ari/Figures/interaction_central_number_e_g_appointment_center_comparison_plot_20240908_144715.png
## Plot saved successfully.
## $data
## central_number_e_g_appointment_center = Yes:
##  Call_time_minutes     rate       SE  df asymp.LCL asymp.UCL
##           3.147143 60.06863 7.332949 Inf  47.28640  76.30608
## 
## central_number_e_g_appointment_center = No:
##  Call_time_minutes     rate       SE  df asymp.LCL asymp.UCL
##           3.147143 44.82915 5.458384 Inf  35.31169  56.91183
## 
## Results are averaged over the levels of: Subspecialty, gender, AAO_regions, Med_sch, ntransf, cbsatype10, academic 
## Confidence level used: 0.95 
## Intervals are back-transformed from the log scale 
## 
## $plot

## $emmeans
## central_number_e_g_appointment_center = Yes:
##  Call_time_minutes rate   SE  df asymp.LCL asymp.UCL
##               3.15 60.1 7.33 Inf      47.3      76.3
## 
## central_number_e_g_appointment_center = No:
##  Call_time_minutes rate   SE  df asymp.LCL asymp.UCL
##               3.15 44.8 5.46 Inf      35.3      56.9
## 
## Results are averaged over the levels of: Subspecialty, gender, AAO_regions, Med_sch, ntransf, cbsatype10, academic 
## Confidence level used: 0.95 
## Intervals are back-transformed from the log scale 
## 
## $contrasts
## central_number_e_g_appointment_center = Yes:
##  contrast  estimate SE df z.ratio p.value
##  (nothing)   nonEst NA NA      NA      NA
## 
## central_number_e_g_appointment_center = No:
##  contrast  estimate SE df z.ratio p.value
##  (nothing)   nonEst NA NA      NA      NA
## 
## Results are averaged over the levels of: Subspecialty, gender, AAO_regions, Med_sch, ntransf, cbsatype10, academic 
## Note: contrasts are still on the log scale. Consider using
##       regrid() if you want contrasts of back-transformed estimates.

academic_affiliation x Subspecialty

## [1] "Subspecialty"                         
## [2] "gender"                               
## [3] "AAO_regions"                          
## [4] "central_number_e_g_appointment_center"
## [5] "Call_time_minutes"                    
## [6] "Med_sch"                              
## [7] "ntransf"                              
## [8] "academic"
## Computing estimated marginal means...
## Estimated data:
##  Subspecialty             academic             rate       SE  df asymp.LCL
##  General Otolaryngology   Private Practice 37.90565 4.537873 Inf  29.97799
##  Neurotology              Private Practice 45.46748 5.627392 Inf  35.67384
##  Pediatric Otolaryngology Private Practice 50.44240 6.094794 Inf  39.80593
##  General Otolaryngology   University       52.01299 6.298725 Inf  41.02344
##  Neurotology              University       62.38910 7.703287 Inf  48.97894
##  Pediatric Otolaryngology University       69.21554 8.273386 Inf  54.75942
##  asymp.UCL
##   47.92979
##   57.94979
##   63.92102
##   65.94647
##   79.47088
##   87.48798
## 
## Results are averaged over the levels of: gender, AAO_regions, central_number_e_g_appointment_center, Med_sch, ntransf, cbsatype10 
## Confidence level used: 0.95 
## Intervals are back-transformed from the log scale 
## Range of estimated marginal means with CIs: 29.97799 87.48798 
## Creating the plot...
## Saving plot to: Ari/Figures/interaction_academic_comparison_plot_20240908_144716.png
## Plot saved successfully.
## $data
## academic = Private Practice:
##  Subspecialty                 rate       SE  df asymp.LCL asymp.UCL
##  General Otolaryngology   37.90565 4.537873 Inf  29.97799  47.92979
##  Neurotology              45.46748 5.627392 Inf  35.67384  57.94979
##  Pediatric Otolaryngology 50.44240 6.094794 Inf  39.80593  63.92102
## 
## academic = University:
##  Subspecialty                 rate       SE  df asymp.LCL asymp.UCL
##  General Otolaryngology   52.01299 6.298725 Inf  41.02344  65.94647
##  Neurotology              62.38910 7.703287 Inf  48.97894  79.47088
##  Pediatric Otolaryngology 69.21554 8.273386 Inf  54.75942  87.48798
## 
## Results are averaged over the levels of: gender, AAO_regions, central_number_e_g_appointment_center, Med_sch, ntransf, cbsatype10 
## Confidence level used: 0.95 
## Intervals are back-transformed from the log scale 
## 
## $plot

## $emmeans
## academic = Private Practice:
##  Subspecialty             rate   SE  df asymp.LCL asymp.UCL
##  General Otolaryngology   37.9 4.54 Inf      30.0      47.9
##  Neurotology              45.5 5.63 Inf      35.7      57.9
##  Pediatric Otolaryngology 50.4 6.09 Inf      39.8      63.9
## 
## academic = University:
##  Subspecialty             rate   SE  df asymp.LCL asymp.UCL
##  General Otolaryngology   52.0 6.30 Inf      41.0      65.9
##  Neurotology              62.4 7.70 Inf      49.0      79.5
##  Pediatric Otolaryngology 69.2 8.27 Inf      54.8      87.5
## 
## Results are averaged over the levels of: gender, AAO_regions, central_number_e_g_appointment_center, Med_sch, ntransf, cbsatype10 
## Confidence level used: 0.95 
## Intervals are back-transformed from the log scale 
## 
## $contrasts
## academic = Private Practice:
##  contrast                                          ratio     SE  df null
##  General Otolaryngology / Neurotology              0.834 0.0229 Inf    1
##  General Otolaryngology / Pediatric Otolaryngology 0.751 0.0211 Inf    1
##  Neurotology / Pediatric Otolaryngology            0.901 0.0299 Inf    1
##  z.ratio p.value
##   -6.625  <.0001
##  -10.176  <.0001
##   -3.130  0.0050
## 
## academic = University:
##  contrast                                          ratio     SE  df null
##  General Otolaryngology / Neurotology              0.834 0.0229 Inf    1
##  General Otolaryngology / Pediatric Otolaryngology 0.751 0.0211 Inf    1
##  Neurotology / Pediatric Otolaryngology            0.901 0.0299 Inf    1
##  z.ratio p.value
##   -6.625  <.0001
##  -10.176  <.0001
##   -3.130  0.0050
## 
## Results are averaged over the levels of: gender, AAO_regions, central_number_e_g_appointment_center, Med_sch, ntransf, cbsatype10 
## P value adjustment: tukey method for comparing a family of 3 estimates 
## Tests are performed on the log scale

fin

##   - Allaire J, Xie Y, Dervieux C, McPherson J, Luraschi J, Ushey K, Atkins A, Wickham H, Cheng J, Chang W, Iannone R (2024). _rmarkdown: Dynamic Documents for R_. R package version 2.28, <https://github.com/rstudio/rmarkdown>. Xie Y, Allaire J, Grolemund G (2018). _R Markdown: The Definitive Guide_. Chapman and Hall/CRC, Boca Raton, Florida. ISBN 9781138359338, <https://bookdown.org/yihui/rmarkdown>. Xie Y, Dervieux C, Riederer E (2020). _R Markdown Cookbook_. Chapman and Hall/CRC, Boca Raton, Florida. ISBN 9780367563837, <https://bookdown.org/yihui/rmarkdown-cookbook>.
##   - Arel-Bundock V (2022). "modelsummary: Data and Model Summaries in R." _Journal of Statistical Software_, *103*(1), 1-23. doi:10.18637/jss.v103.i01 <https://doi.org/10.18637/jss.v103.i01>.
##   - Attali D, Baker C (2023). _ggExtra: Add Marginal Histograms to 'ggplot2', and More 'ggplot2' Enhancements_. R package version 0.10.1, <https://CRAN.R-project.org/package=ggExtra>.
##   - Auguie B (2017). _gridExtra: Miscellaneous Functions for "Grid" Graphics_. R package version 2.3, <https://CRAN.R-project.org/package=gridExtra>.
##   - Barrett T, Dowle M, Srinivasan A, Gorecki J, Chirico M, Hocking T, Schwendinger B (2024). _data.table: Extension of `data.frame`_. R package version 1.16.0, <https://CRAN.R-project.org/package=data.table>.
##   - Bates D, Mächler M, Bolker B, Walker S (2015). "Fitting Linear Mixed-Effects Models Using lme4." _Journal of Statistical Software_, *67*(1), 1-48. doi:10.18637/jss.v067.i01 <https://doi.org/10.18637/jss.v067.i01>.
##   - Bates D, Maechler M, Jagan M (2024). _Matrix: Sparse and Dense Matrix Classes and Methods_. R package version 1.7-0, <https://CRAN.R-project.org/package=Matrix>.
##   - Ben-Shachar MS, Lüdecke D, Makowski D (2020). "effectsize: Estimation of Effect Size Indices and Standardized Parameters." _Journal of Open Source Software_, *5*(56), 2815. doi:10.21105/joss.02815 <https://doi.org/10.21105/joss.02815>, <https://doi.org/10.21105/joss.02815>.
##   - Bengtsson H (2021). "A Unifying Framework for Parallel and Distributed Processing in R using Futures." _The R Journal_, *13*(2), 208-227. doi:10.32614/RJ-2021-048 <https://doi.org/10.32614/RJ-2021-048>, <https://doi.org/10.32614/RJ-2021-048>.
##   - Bolker B, Robinson D (2024). _broom.mixed: Tidying Methods for Mixed Models_. R package version 0.2.9.5, <https://CRAN.R-project.org/package=broom.mixed>.
##   - Calcagno V (2020). _glmulti: Model Selection and Multimodel Inference Made Easy_. R package version 1.0.8, <https://CRAN.R-project.org/package=glmulti>.
##   - Couch SP, Bray AP, Ismay C, Chasnovski E, Baumer BS, Çetinkaya-Rundel M (2021). "infer: An R package for tidyverse-friendly statistical inference." _Journal of Open Source Software_, *6*(65), 3661. doi:10.21105/joss.03661 <https://doi.org/10.21105/joss.03661>.
##   - Croissant Y, Millo G (2018). _Panel Data Econometrics with R_. Wiley. Croissant Y, Millo G (2008). "Panel Data Econometrics in R: The plm Package." _Journal of Statistical Software_, *27*(2), 1-43. doi:10.18637/jss.v027.i02 <https://doi.org/10.18637/jss.v027.i02>. Millo G (2017). "Robust Standard Error Estimators for Panel Models: A Unifying Approach." _Journal of Statistical Software_, *82*(3), 1-27. doi:10.18637/jss.v082.i03 <https://doi.org/10.18637/jss.v082.i03>.
##   - Csardi G, Nepusz T (2006). "The igraph software package for complex network research." _InterJournal_, *Complex Systems*, 1695. <https://igraph.org>. Csárdi G, Nepusz T, Traag V, Horvát S, Zanini F, Noom D, Müller K (2024). _igraph: Network Analysis and Visualization in R_. doi:10.5281/zenodo.7682609 <https://doi.org/10.5281/zenodo.7682609>, R package version 2.0.3, <https://CRAN.R-project.org/package=igraph>.
##   - Daróczi G, Tsegelskyi R (2022). _pander: An R 'Pandoc' Writer_. R package version 0.6.5, <https://CRAN.R-project.org/package=pander>.
##   - Eddelbuettel D, Francois R, Allaire J, Ushey K, Kou Q, Russell N, Ucar I, Bates D, Chambers J (2024). _Rcpp: Seamless R and C++ Integration_. R package version 1.0.13, <https://CRAN.R-project.org/package=Rcpp>. Eddelbuettel D, François R (2011). "Rcpp: Seamless R and C++ Integration." _Journal of Statistical Software_, *40*(8), 1-18. doi:10.18637/jss.v040.i08 <https://doi.org/10.18637/jss.v040.i08>. Eddelbuettel D (2013). _Seamless R and C++ Integration with Rcpp_. Springer, New York. doi:10.1007/978-1-4614-6868-4 <https://doi.org/10.1007/978-1-4614-6868-4>, ISBN 978-1-4614-6867-7. Eddelbuettel D, Balamuta J (2018). "Extending R with C++: A Brief Introduction to Rcpp." _The American Statistician_, *72*(1), 28-36. doi:10.1080/00031305.2017.1375990 <https://doi.org/10.1080/00031305.2017.1375990>.
##   - Falissard B (2022). _psy: Various Procedures Used in Psychometrics_. R package version 1.2, <https://CRAN.R-project.org/package=psy>.
##   - Fox J, Venables B, Damico A, Salverda AP (2021). _english: Translate Integers into English_. R package version 1.2-6, <https://CRAN.R-project.org/package=english>.
##   - Fox J, Weisberg S (2019). _An R Companion to Applied Regression_, 3rd edition. Sage, Thousand Oaks CA. <https://socialsciences.mcmaster.ca/jfox/Books/Companion/index.html>. Fox J, Weisberg S (2018). "Visualizing Fit and Lack of Fit in Complex Regression Models with Predictor Effect Plots and Partial Residuals." _Journal of Statistical Software_, *87*(9), 1-27. doi:10.18637/jss.v087.i09 <https://doi.org/10.18637/jss.v087.i09>. Fox J (2003). "Effect Displays in R for Generalised Linear Models." _Journal of Statistical Software_, *8*(15), 1-27. doi:10.18637/jss.v008.i15 <https://doi.org/10.18637/jss.v008.i15>. Fox J, Hong J (2009). "Effect Displays in R for Multinomial and Proportional-Odds Logit Models: Extensions to the effects Package." _Journal of Statistical Software_, *32*(1), 1-24. doi:10.18637/jss.v032.i01 <https://doi.org/10.18637/jss.v032.i01>.
##   - Fox J, Weisberg S (2019). _An R Companion to Applied Regression_, Third edition. Sage, Thousand Oaks CA. <https://socialsciences.mcmaster.ca/jfox/Books/Companion/>.
##   - Fox J, Weisberg S, Price B (2022). _carData: Companion to Applied Regression Data Sets_. R package version 3.0-5, <https://CRAN.R-project.org/package=carData>.
##   - Frick H, Chow F, Kuhn M, Mahoney M, Silge J, Wickham H (2024). _rsample: General Resampling Infrastructure_. R package version 1.2.1, <https://CRAN.R-project.org/package=rsample>.
##   - Gohel D, Moog S (2024). _officer: Manipulation of Microsoft Word and PowerPoint Documents_. R package version 0.6.6, <https://CRAN.R-project.org/package=officer>.
##   - Gohel D, Skintzos P (2024). _flextable: Functions for Tabular Reporting_. R package version 0.9.6, <https://CRAN.R-project.org/package=flextable>.
##   - Grolemund G, Wickham H (2011). "Dates and Times Made Easy with lubridate." _Journal of Statistical Software_, *40*(3), 1-25. <https://www.jstatsoft.org/v40/i03/>.
##   - Grothendieck G, Kates L, Petzoldt T (2016). _proto: Prototype Object-Based Programming_. R package version 1.0.0, <https://CRAN.R-project.org/package=proto>.
##   - Halekoh U, Højsgaard S, Yan J (2006). "The R Package geepack for Generalized Estimating Equations." _Journal of Statistical Software_, *15/2*, 1-11. Yan J, Fine JP (2004). "Estimating Equations for Association Structures." _Statistics in Medicine_, *23*, 859-880. Yan J (2002). "geepack: Yet Another Package for Generalized Estimating Equations." _R-News_, *2/3*, 12-14.
##   - Hayashi H, Kojima H, Nishida K, Saito K, Yasuda Y (2024). _exploratory: R package for Exploratory_. R package version 10.3.1, commit b1ff14012289d2b6607751e14ef80e62793cb41a, <https://github.com/exploratory-io/exploratory_func>.
##   - Heinzen E, Sinnwell J, Atkinson E, Gunderson T, Dougherty G (2021). _arsenal: An Arsenal of 'R' Functions for Large-Scale Statistical Summaries_. R package version 3.6.3, <https://CRAN.R-project.org/package=arsenal>.
##   - Kassambara A (2023). _ggpubr: 'ggplot2' Based Publication Ready Plots_. R package version 0.6.0, <https://CRAN.R-project.org/package=ggpubr>.
##   - Kuhn M (2024). _modeldata: Data Sets Useful for Modeling Examples_. R package version 1.3.0, <https://CRAN.R-project.org/package=modeldata>.
##   - Kuhn M (2024). _tune: Tidy Tuning Tools_. R package version 1.2.1, <https://CRAN.R-project.org/package=tune>.
##   - Kuhn M, Couch S (2024). _workflowsets: Create a Collection of 'tidymodels' Workflows_. R package version 1.1.0, <https://CRAN.R-project.org/package=workflowsets>.
##   - Kuhn M, Frick H (2024). _dials: Tools for Creating Tuning Parameter Values_. R package version 1.2.1, <https://CRAN.R-project.org/package=dials>.
##   - Kuhn M, Vaughan D (2024). _parsnip: A Common API to Modeling and Analysis Functions_. R package version 1.2.1, <https://CRAN.R-project.org/package=parsnip>.
##   - Kuhn M, Vaughan D, Hvitfeldt E (2024). _yardstick: Tidy Characterizations of Model Performance_. R package version 1.3.1, <https://CRAN.R-project.org/package=yardstick>.
##   - Kuhn M, Wickham H (2020). _Tidymodels: a collection of packages for modeling and machine learning using tidyverse principles._. <https://www.tidymodels.org>.
##   - Kuhn M, Wickham H, Hvitfeldt E (2024). _recipes: Preprocessing and Feature Engineering Steps for Modeling_. R package version 1.1.0, <https://CRAN.R-project.org/package=recipes>.
##   - Kuhn, Max (2008). "Building Predictive Models in R Using the caret Package." _Journal of Statistical Software_, *28*(5), 1–26. doi:10.18637/jss.v028.i05 <https://doi.org/10.18637/jss.v028.i05>, <https://www.jstatsoft.org/index.php/jss/article/view/v028i05>.
##   - Kuznetsova A, Brockhoff PB, Christensen RHB (2017). "lmerTest Package: Tests in Linear Mixed Effects Models." _Journal of Statistical Software_, *82*(13), 1-26. doi:10.18637/jss.v082.i13 <https://doi.org/10.18637/jss.v082.i13>.
##   - Lenth R (2024). _emmeans: Estimated Marginal Means, aka Least-Squares Means_. R package version 1.10.4, <https://CRAN.R-project.org/package=emmeans>.
##   - Liaw A, Wiener M (2002). "Classification and Regression by randomForest." _R News_, *2*(3), 18-22. <https://CRAN.R-project.org/doc/Rnews/>.
##   - Lüdecke D (2018). "ggeffects: Tidy Data Frames of Marginal Effects from Regression Models." _Journal of Open Source Software_, *3*(26), 772. doi:10.21105/joss.00772 <https://doi.org/10.21105/joss.00772>.
##   - Lüdecke D (2024). _sjPlot: Data Visualization for Statistics in Social Science_. R package version 2.8.16, <https://CRAN.R-project.org/package=sjPlot>.
##   - Lüdecke D, Ben-Shachar M, Patil I, Makowski D (2020). "Extracting, Computing and Exploring the Parameters of Statistical Models using R." _Journal of Open Source Software_, *5*(53), 2445. doi:10.21105/joss.02445 <https://doi.org/10.21105/joss.02445>.
##   - Lüdecke D, Ben-Shachar M, Patil I, Waggoner P, Makowski D (2021). "performance: An R Package for Assessment, Comparison and Testing of Statistical Models." _Journal of Open Source Software_, *6*(60), 3139. doi:10.21105/joss.03139 <https://doi.org/10.21105/joss.03139>.
##   - Lüdecke D, Ben-Shachar M, Patil I, Wiernik B, Bacher E, Thériault R, Makowski D (2022). "easystats: Framework for Easy Statistical Modeling, Visualization, and Reporting." _CRAN_. R package, <https://easystats.github.io/easystats/>.
##   - Lüdecke D, Patil I, Ben-Shachar M, Wiernik B, Waggoner P, Makowski D (2021). "see: An R Package for Visualizing Statistical Models." _Journal of Open Source Software_, *6*(64), 3393. doi:10.21105/joss.03393 <https://doi.org/10.21105/joss.03393>.
##   - Lüdecke D, Waggoner P, Makowski D (2019). "insight: A Unified Interface to Access Information from Model Objects in R." _Journal of Open Source Software_, *4*(38), 1412. doi:10.21105/joss.01412 <https://doi.org/10.21105/joss.01412>.
##   - Makowski D, Ben-Shachar M, Lüdecke D (2019). "bayestestR: Describing Effects and their Uncertainty, Existence and Significance within the Bayesian Framework." _Journal of Open Source Software_, *4*(40), 1541. doi:10.21105/joss.01541 <https://doi.org/10.21105/joss.01541>, <https://joss.theoj.org/papers/10.21105/joss.01541>.
##   - Makowski D, Ben-Shachar M, Patil I, Lüdecke D (2020). "Estimation of Model-Based Predictions, Contrasts and Means." _CRAN_. <https://github.com/easystats/modelbased>.
##   - Makowski D, Lüdecke D, Patil I, Thériault R, Ben-Shachar M, Wiernik B (2023). "Automated Results Reporting as a Practical Tool to Improve Reproducibility and Methodological Best Practices Adoption." _CRAN_. <https://easystats.github.io/report/>.
##   - Makowski D, Wiernik B, Patil I, Lüdecke D, Ben-Shachar M (2022). "correlation: Methods for Correlation Analysis." Version 0.8.3, <https://CRAN.R-project.org/package=correlation>. Makowski D, Ben-Shachar M, Patil I, Lüdecke D (2020). "Methods and Algorithms for Correlation Analysis in R." _Journal of Open Source Software_, *5*(51), 2306. doi:10.21105/joss.02306 <https://doi.org/10.21105/joss.02306>, <https://joss.theoj.org/papers/10.21105/joss.02306>.
##   - Merkle E, You D (2024). _nonnest2: Tests of Non-Nested Models_. R package version 0.5-7, <https://CRAN.R-project.org/package=nonnest2>.
##   - Meyer D, Dimitriadou E, Hornik K, Weingessel A, Leisch F (2023). _e1071: Misc Functions of the Department of Statistics, Probability Theory Group (Formerly: E1071), TU Wien_. R package version 1.7-14, <https://CRAN.R-project.org/package=e1071>.
##   - Microsoft, Weston S (2022). _foreach: Provides Foreach Looping Construct_. R package version 1.5.2, <https://CRAN.R-project.org/package=foreach>.
##   - Lumley T (based on Fortran code by Alan Miller) (2024). _leaps: Regression Subset Selection_. R package version 3.2, <https://CRAN.R-project.org/package=leaps>.
##   - Muffly T (2023). _tyler: Common Functions for Mystery Caller or Audit Studies Evaluating Patient Access to Care_. R package version 1.2.0, <https://mufflyt.github.io/tyler/>.
##   - Müller K (2020). _here: A Simpler Way to Find Your Files_. R package version 1.0.1, <https://CRAN.R-project.org/package=here>.
##   - Müller K, Wickham H (2023). _tibble: Simple Data Frames_. R package version 3.2.1, <https://CRAN.R-project.org/package=tibble>.
##   - Ooms J (2024). _writexl: Export Data Frames to Excel 'xlsx' Format_. R package version 1.5.0, <https://CRAN.R-project.org/package=writexl>.
##   - Paluszynska A, Biecek P, Jiang Y (2020). _randomForestExplainer: Explaining and Visualizing Random Forests in Terms of Variable Importance_. R package version 0.10.1, <https://CRAN.R-project.org/package=randomForestExplainer>.
##   - Patil I, Makowski D, Ben-Shachar M, Wiernik B, Bacher E, Lüdecke D (2022). "datawizard: An R Package for Easy Data Preparation and Statistical Transformations." _Journal of Open Source Software_, *7*(78), 4684. doi:10.21105/joss.04684 <https://doi.org/10.21105/joss.04684>.
##   - Pedersen T (2024). _ggraph: An Implementation of Grammar of Graphics for Graphs and Networks_. R package version 2.2.1, <https://CRAN.R-project.org/package=ggraph>.
##   - Pedersen T (2024). _tidygraph: A Tidy API for Graph Manipulation_. R package version 1.3.1, <https://CRAN.R-project.org/package=tidygraph>.
##   - R Core Team (2024). _R: A Language and Environment for Statistical Computing_. R Foundation for Statistical Computing, Vienna, Austria. <https://www.R-project.org/>.
##   - Rich B (2023). _table1: Tables of Descriptive Statistics in HTML_. R package version 1.4.3, <https://CRAN.R-project.org/package=table1>.
##   - Robinson D, Hayes A, Couch S (2024). _broom: Convert Statistical Objects into Tidy Tibbles_. R package version 1.0.6, <https://CRAN.R-project.org/package=broom>.
##   - Rosseel Y (2012). "lavaan: An R Package for Structural Equation Modeling." _Journal of Statistical Software_, *48*(2), 1-36. doi:10.18637/jss.v048.i02 <https://doi.org/10.18637/jss.v048.i02>.
##   - Sarkar D (2008). _Lattice: Multivariate Data Visualization with R_. Springer, New York. ISBN 978-0-387-75968-5, <http://lmdvr.r-forge.r-project.org>.
##   - Schloerke B, Cook D, Larmarange J, Briatte F, Marbach M, Thoen E, Elberg A, Crowley J (2024). _GGally: Extension to 'ggplot2'_. R package version 2.2.1, <https://CRAN.R-project.org/package=GGally>.
##   - Slowikowski K (2024). _ggrepel: Automatically Position Non-Overlapping Text Labels with 'ggplot2'_. R package version 0.9.5, <https://CRAN.R-project.org/package=ggrepel>.
##   - Tang Y, Horikoshi M, Li W (2016). "ggfortify: Unified Interface to Visualize Statistical Result of Popular R Packages." _The R Journal_, *8*(2), 474-485. doi:10.32614/RJ-2016-060 <https://doi.org/10.32614/RJ-2016-060>, <https://doi.org/10.32614/RJ-2016-060>. Horikoshi M, Tang Y (2018). _ggfortify: Data Visualization Tools for Statistical Analysis Results_. <https://CRAN.R-project.org/package=ggfortify>.
##   - Urbanek S (2024). _rJava: Low-Level R to Java Interface_. R package version 1.0-11, <https://CRAN.R-project.org/package=rJava>.
##   - Ushey K, Wickham H (2024). _renv: Project Environments_. R package version 1.0.7, <https://CRAN.R-project.org/package=renv>.
##   - Vaughan D, Couch S (2024). _workflows: Modeling Workflows_. R package version 1.1.4, <https://CRAN.R-project.org/package=workflows>.
##   - Vaughan D, Dancho M (2022). _furrr: Apply Mapping Functions in Parallel using Futures_. R package version 0.3.1, <https://CRAN.R-project.org/package=furrr>.
##   - Wang T, Merkle EC (2018). "merDeriv: Derivative Computations for Linear Mixed Effects Models with Application to Robust Standard Errors." _Journal of Statistical Software, Code Snippets_, *87*(1), 1-16. doi:10.18637/jss.v087.c01 <https://doi.org/10.18637/jss.v087.c01>.
##   - Wickham H (2016). _ggplot2: Elegant Graphics for Data Analysis_. Springer-Verlag New York. ISBN 978-3-319-24277-4, <https://ggplot2.tidyverse.org>.
##   - Wickham H (2023). _forcats: Tools for Working with Categorical Variables (Factors)_. R package version 1.0.0, <https://CRAN.R-project.org/package=forcats>.
##   - Wickham H (2023). _stringr: Simple, Consistent Wrappers for Common String Operations_. R package version 1.5.1, <https://CRAN.R-project.org/package=stringr>.
##   - Wickham H, Averick M, Bryan J, Chang W, McGowan LD, François R, Grolemund G, Hayes A, Henry L, Hester J, Kuhn M, Pedersen TL, Miller E, Bache SM, Müller K, Ooms J, Robinson D, Seidel DP, Spinu V, Takahashi K, Vaughan D, Wilke C, Woo K, Yutani H (2019). "Welcome to the tidyverse." _Journal of Open Source Software_, *4*(43), 1686. doi:10.21105/joss.01686 <https://doi.org/10.21105/joss.01686>.
##   - Wickham H, François R, Henry L, Müller K, Vaughan D (2023). _dplyr: A Grammar of Data Manipulation_. R package version 1.1.4, <https://CRAN.R-project.org/package=dplyr>.
##   - Wickham H, Henry L (2023). _purrr: Functional Programming Tools_. R package version 1.0.2, <https://CRAN.R-project.org/package=purrr>.
##   - Wickham H, Hester J, Bryan J (2024). _readr: Read Rectangular Text Data_. R package version 2.1.5, <https://CRAN.R-project.org/package=readr>.
##   - Wickham H, Hester J, Csárdi G (2024). _pkgbuild: Find Tools Needed to Build R Packages_. R package version 1.4.4, <https://CRAN.R-project.org/package=pkgbuild>.
##   - Wickham H, Pedersen T, Seidel D (2023). _scales: Scale Functions for Visualization_. R package version 1.3.0, <https://CRAN.R-project.org/package=scales>.
##   - Wickham H, Vaughan D, Girlich M (2024). _tidyr: Tidy Messy Data_. R package version 1.3.1, <https://CRAN.R-project.org/package=tidyr>.
##   - Wilke C (2024). _cowplot: Streamlined Plot Theme and Plot Annotations for 'ggplot2'_. R package version 1.1.3, <https://CRAN.R-project.org/package=cowplot>.
##   - Xie Y (2024). _knitr: A General-Purpose Package for Dynamic Report Generation in R_. R package version 1.48, <https://yihui.org/knitr/>. Xie Y (2015). _Dynamic Documents with R and knitr_, 2nd edition. Chapman and Hall/CRC, Boca Raton, Florida. ISBN 978-1498716963, <https://yihui.org/knitr/>. Xie Y (2014). "knitr: A Comprehensive Tool for Reproducible Research in R." In Stodden V, Leisch F, Peng RD (eds.), _Implementing Reproducible Computational Research_. Chapman and Hall/CRC. ISBN 978-1466561595.
##   - Zeileis A, Grothendieck G (2005). "zoo: S3 Infrastructure for Regular and Irregular Time Series." _Journal of Statistical Software_, *14*(6), 1-27. doi:10.18637/jss.v014.i06 <https://doi.org/10.18637/jss.v014.i06>.
##   - Zeileis A, Hothorn T (2002). "Diagnostic Checking in Regression Relationships." _R News_, *2*(3), 7-10. <https://CRAN.R-project.org/doc/Rnews/>.
##   - Zeileis A, Köll S, Graham N (2020). "Various Versatile Variances: An Object-Oriented Implementation of Clustered Covariances in R." _Journal of Statistical Software_, *95*(1), 1-36. doi:10.18637/jss.v095.i01 <https://doi.org/10.18637/jss.v095.i01>. Zeileis A (2004). "Econometric Computing with HC and HAC Covariance Matrix Estimators." _Journal of Statistical Software_, *11*(10), 1-17. doi:10.18637/jss.v011.i10 <https://doi.org/10.18637/jss.v011.i10>. Zeileis A (2006). "Object-Oriented Computation of Sandwich Estimators." _Journal of Statistical Software_, *16*(9), 1-16. doi:10.18637/jss.v016.i09 <https://doi.org/10.18637/jss.v016.i09>.
##   - Zhu H (2024). _kableExtra: Construct Complex Table with 'kable' and Pipe Syntax_. R package version 1.4.0, <https://CRAN.R-project.org/package=kableExtra>.