Updated by Sabrina Chen 9/30/24
You have been hired as a consultant by Disney to choose the location for a new amusement park. Your job is to analyze weather data from several candidate locations and pick the best option.
Begin by looking at the climate normal data. Normal data is the expected weather for a specific date and location, based on long-term averages; it is not tied to an individual year.
Load your climate normal data files. You will need to do some clean-up, so be sure to look at the data carefully. A few suggested dplyr tasks are sketched below.
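A minimal clean-up sketch, assuming a file named climate_normals.csv with columns such as name, date, and tmax; the file name and column names are assumptions, so adjust them to match your actual data:

```r
library(dplyr)
library(readr)

# Sketch only: the file name and column names are assumptions.
normals <- read_csv("climate_normals.csv") |>
  rename_with(tolower) |>      # standardize column names
  filter(!is.na(tmax)) |>      # drop rows with missing temperatures
  distinct()                   # remove exact duplicate rows
```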
Now that you have the data, create some basic summary statistics. Show a table with the average temperature by month and location, with the stations as rows and the months as columns.
Hint: you may need to use tidyr’s pivot_wider. First group, then summarise, then pivot. You should end up with a table showing the station name for each row and a column for each month. You may want to use dplyr and lubridate to create a month column, as in the sketch below.
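One way the group/summarise/pivot sequence could look, continuing from the `normals` table above (column names are illustrative):

```r
library(lubridate)
library(tidyr)

# Average tmax per station and month, then spread months into columns.
normals |>
  mutate(month = month(date, label = TRUE)) |>
  group_by(name, month) |>
  summarise(avg_tmax = mean(tmax, na.rm = TRUE), .groups = "drop") |>
  pivot_wider(names_from = month, values_from = avg_tmax)
```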
We want to find the best location for an amusement park that isn’t too hot or too cold.
Define an appropriate temperature range where it is comfortable to be outside. Then, create a graph showing how different locations meet your temperature requirement.
Hint: use mutate to create a new field using ifelse (and some temperature range). Set this value to either 1 (for good) or 0 (for bad). Then look at how much of your dataset falls into this ‘good’ range for each station.
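A sketch of this step, using the 60–75 F range adopted in the answer below (the `nice` and `pct_nice` names are assumptions):

```r
library(ggplot2)

# Flag each day as good (1) or bad (0), then compute the share of
# good days per station and plot it.
nice_share <- normals |>
  mutate(nice = ifelse(tmax >= 60 & tmax <= 75, 1, 0)) |>
  group_by(name) |>
  summarise(pct_nice = mean(nice) * 100, .groups = "drop")

ggplot(nice_share, aes(x = name, y = pct_nice)) +
  geom_col() +
  labs(x = "Station", y = "Percent of days between 60 and 75 F")
```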
Write a brief 2-3 sentence explanation of your findings.
An appropriate temperature range is 60 to 75 degrees Fahrenheit. Austin, TX is the ideal location because it has the highest percentage of days with a good temperature out of the four locations, at around 35%; the lowest is Juneau, AK, at around 17%.
Now, you need to figure out how much the actual daily weather at your best site in 2023 varied from the climate normals.
Load up the GHCN_daily dataset. You’ll want to filter it down to your chosen site, and then turn the date_as_text column into a proper date. Then, join it to your climate normals (again, filtered to your chosen site) using the date.
Note that tmax is stored in Celsius. You’ll need to convert it to Fahrenheit.
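A sketch of the load/filter/join, assuming the libraries loaded above, a ghcn_daily.csv file with a date_as_text column in year-month-day order, and a station name matching the normals (the file name, column names, and station name are all assumptions):

```r
# Daily 2023 observations for the chosen site; tmax arrives in Celsius.
daily <- read_csv("ghcn_daily.csv") |>
  filter(name == "AUSTIN CAMP MABRY, TX US") |>   # assumed station name
  mutate(date = ymd(date_as_text),                # assumes YYYY-MM-DD text
         tmax_f = tmax * 9 / 5 + 32)              # Celsius to Fahrenheit

# Join the actuals to the normals for the same site by date.
joined <- daily |>
  inner_join(normals |> filter(name == "AUSTIN CAMP MABRY, TX US"),
             by = "date",
             suffix = c("_actual", "_predicted"))
```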
Evaluate two predictions.
First, compare the actual tmax versus predicted tmax. What is the error? Graph your results and give a 2-3 sentence explanation.
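One way to compute and graph the error, assuming the joined table carries the actual tmax as tmax_f and the normals’ tmax as tmax_predicted, both in Fahrenheit (the column names are assumptions):

```r
# Signed error: positive means the day was warmer than the normals.
errors <- joined |>
  mutate(error = tmax_f - tmax_predicted)

mean(abs(errors$error))   # mean absolute error in degrees F

ggplot(errors, aes(x = date, y = error)) +
  geom_line() +
  geom_hline(yintercept = 0, linetype = "dashed") +
  labs(x = "Date (2023)", y = "Actual minus predicted tmax (F)")
```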
The error is around 4 degrees Fahrenheit on average, with a greater deviation during colder months than warmer months. It is found by taking the difference between the actual and predicted max temperature in Fahrenheit. One reason winter months may show a greater deviation is increased burning of fossil fuels to heat homes; in the summer, there is little to no need for home heating. The deviation could also be due to climate change and global warming, as there is an average difference of about 3 degrees between the predicted and actual max temperatures.
Second, compare the number of days predicted to be nice versus the number of days that actually were nice. Use the same definition of “nice” as in the prior question.
What is the accuracy, precision, and recall of the climate normal data? Give a 2-3 sentence explanation of your results.
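A sketch of how the confusion matrix below and the metrics might be computed, assuming the joined table from the previous sketch carries 0/1 columns nice_actual and nice_pred built with the same 60–75 F rule (both column names are assumptions):

```r
# Cross-tabulate actual vs. predicted nice days.
cm <- table(actual = errors$nice_actual, predicted = errors$nice_pred)

tp <- cm["1", "1"]   # predicted nice, actually nice
fp <- cm["0", "1"]   # predicted nice, actually not
fn <- cm["1", "0"]   # predicted not nice, actually nice
tn <- cm["0", "0"]   # predicted not nice, actually not

accuracy  <- (tp + tn) / sum(cm)
precision <- tp / (tp + fp)
recall    <- tp / (tp + fn)
```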
##                  predicted: bad   predicted: nice
## actual: bad                 214                69
## actual: nice                 23                59
The climate normal data is 74.79% accurate, meaning it correctly predicted whether a day’s weather would be “good” or “bad” 74.79% of the time. The precision of 46% means that only 46% of the days predicted to be nice were actually nice. The recall of 71.95% means that 71.95% of the days that actually were nice were predicted to be nice. The low precision could be because we are only looking at data for Austin, TX, where many days are too hot, so days predicted to be nice often fall outside our temperature range.