Forecasting Spain’s Average Monthly Temperature
ECO 5740 - Forecasting and Time Series Models (Fall 2023)
1 Response to Comments
After receiving feedback to “create an ensembled model using auto ARIMA and regression models” to see whether this combination would perform better, an ensemble model was built from these two models. The ensemble was fit on the training set, forecast on the test set, and yielded a MASE of 0.68. This model was then added to the ‘Forecast Accuracy Comparison Among the Best Models’ tibble so its MASE could be compared against the models that limited forecast errors the best. Despite this, the ensemble’s MASE remained slightly higher than that of the auto ARIMA model, which has a MASE of 0.67. This result suggests that, while ensemble methods were explored to enhance forecast accuracy, the auto ARIMA model ARIMA(1,0,0)(2,1,0)[12] w/ drift continues to outperform the ensemble of auto ARIMA and regression by limiting forecast errors the best. Therefore, the auto ARIMA model will be used to construct the forecast for the next twenty months.
2 Abstract
Temperature forecasting is crucial for cities, states, and countries to anticipate and mitigate the impacts of temperature changes on vital sectors such as electricity demand, crop production, and tourism. This analysis uses Spain’s monthly average temperature data from January 2015 to April 2023 to construct a twenty-month-ahead forecast. Initial benchmark methods, including the Seasonal Naive method, guided the model selection process. More advanced models, such as ETS, Holt-Winters Damped, Regression, and Theta Multiplicative, were then fit to the training set in an effort to improve forecast accuracy on the test set. After stationarity testing to identify the best ARIMA model, the automatically selected ARIMA(1,0,0)(2,1,0)[12] w/ drift proved the most accurate, with a MASE of 0.67, the lowest forecast error among these models. As a result, ARIMA(1,0,0)(2,1,0)[12] w/ drift is our forecasting method. Residual diagnostics confirmed the reliability of the chosen ARIMA model: the residuals are approximately normally distributed around zero in both the time plot and the histogram, with no significant ACF spikes or heteroscedastic patterns. The forecast indicates a decrease in Spain’s monthly average temperature for the next two summers, in line with the historical data rather than signaling a sustained climate-change impact. The projected summer temperature is roughly 76°F in 2023, followed by a slightly cooler 75°F in the summer of 2024. This insight aids officials in resource planning and management, particularly given how far the recent heat-wave conditions deviate from these projected values.
3 Introduction
Southern European countries, especially Spain, have been experiencing record-breaking temperatures in 2023 (Vitalone, 2023). These extreme temperatures can strain electricity grids through increased demand for cooling, disrupt crop production through extreme heat and water scarcity, and depress the tourism industry. Such climate impacts can lead to economic losses in Spain, given the integral roles these sectors play in the nation’s economy. As a result, proactive measures and strategic planning are imperative to mitigate the potential economic repercussions of these ongoing climate challenges. Therefore, a twenty-month-ahead forecast is constructed based on the past seven years of Spain’s monthly average temperatures. This forecast gives Spain’s officials an indication of whether there is an underlying warming trend that will continue after this heat wave, or whether the heat spell is merely an abnormal occurrence, with temperatures expected to return to normal. With this forecast, Spain can anticipate future climatic patterns and make informed decisions about resource allocation for the nation’s economic well-being regarding electricity demand, crop production, and water allocation.
4 Data description and sources
The forecast analysis utilizes monthly average temperature data for Spain spanning January 2015 to April 2023 (Salas, 2023). The data set consists of two columns: the recorded year and month, labeled ‘Month’, and the corresponding average temperature in Fahrenheit, labeled ‘Temperature’. From the autoplot and ACF plot of this data, we can see strong seasonality, with peaks occurring 12 months apart, reflecting the recurring difference between summer and winter temperatures. Because the data are highly seasonal with no apparent trend and roughly constant variance, no transformation was needed to make the series homoscedastic.
5 Statistical methods
To start the analysis, we split the data into a training set and a test set. The training set contains the first 80% of the data, from January 2015 through August 2021, for 80 observations. The test set is the remaining 20%, from September 2021 through April 2023, for 20 observations. We then fit various models to the training set to determine which model performs best and provides the highest forecast accuracy.
We begin by fitting the benchmark methods to the training set: Mean, Naive, Drift, and Seasonal Naive. Because the data are highly seasonal, the Seasonal Naive method had the lowest forecast errors on the test set, with a MASE of about 1.1. We therefore add this model to the ‘Forecast Accuracy Comparison Among the Best Models’ tibble, which collects the forecast errors of the best models and makes it easier to determine which model performed best on the test set.
We then fit more advanced models in an effort to improve forecast accuracy. The models with the best forecast accuracy, judged by their MASE values, included ETS, Regression, Theta Multiplicative, and ARIMA. To determine the best ARIMA model, we first tested how many differences were required to make the data stationary. One seasonal difference was needed, and after differencing, the KPSS test returned a p-value of 0.1. Because the null hypothesis of the KPSS test is stationarity, we fail to reject the null and deem the differenced data stationary. To choose the best ARIMA model, we generated an automatically selected model and experimented with other models with similar parameters. The automatically selected ARIMA(1,0,0)(2,1,0)[12] w/ drift had the lowest AICc, at 421.44, as well as the lowest MASE, supporting it as the best choice among the candidate ARIMA models. These advanced models were then added to the ‘Forecast Accuracy Comparison Among the Best Models’ so that the best-performing model in terms of forecast accuracy could be easily identified.
Lastly, we constructed three ensemble models: the first combining Seasonal Naive and Regression, the second combining Seasonal Naive, Regression, and ARIMA(1,0,0)(2,1,0)[12] w/ drift, and the third combining the auto ARIMA (ARIMA(1,0,0)(2,1,0)[12] w/ drift) and Regression. However, the auto ARIMA model still had a lower MASE than all three ensembles. As a result, the auto ARIMA model will be used to forecast Spain’s monthly average temperature for the next twenty months.
5.1 Set up data as a time series object
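The report does not show the data-import step; a minimal sketch of setting the series up as a monthly tsibble with the fpp3 packages is given below. The file name is hypothetical, but the ‘Month’ and ‘Temperature’ columns match the data description above.

library(fpp3)                                   # loads tsibble, feasts, fable, dplyr, ggplot2
# Hypothetical file name; the data set has a 'Month' column and a 'Temperature' column (°F)
spain <- readr::read_csv("spain_monthly_temperature.csv") |>
  mutate(Month = yearmonth(Month)) |>           # convert "2015 Jan"-style values to a yearmonth index
  as_tsibble(index = Month)                     # monthly tsibble, January 2015 - April 2023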
5.2 Time Series Graphics
Based on the autoplot of Spain’s monthly average temperature, there appears to be seasonality in the data. The lowest average temperatures occur in January, and the highest occur around June (about six months after January). The peaks occur 12 months apart and the troughs occur 12 months apart, a consistent pattern that repeats at regular intervals with a fixed and known frequency, clearly illustrating seasonality. Furthermore, there is no trend, since there is no long-term increase or decrease in the data, and no cyclicity, since there are no rises and falls at irregular, non-fixed intervals in the autoplot.
5.3 Time series patterns
Apply the ACF function to spot any trend, seasonality, and cyclicity.
Explore other patterns using other graphs functions.
Discuss features of the data, including patterns, unusual observations, changes over time, etc.
Based on the ACF plot of Spain’s monthly average temperature, there is clear seasonality present. The autocorrelations are largest at the seasonal lags, which occur at multiples of the seasonal frequency. As illustrated in the autoplot, the temperature peaks occur 12 months apart and the troughs occur 12 months apart. In a year’s time the temperature rises to its peak and then falls back to its lowest point: the temperature is at its lowest in January and at its highest in June, then continues to drop until January returns. The ACF plot identifies this pattern, as the temperature rises to its peak over six months and then drops to its lowest over the following six months. There is no trend in this data, since the autocorrelations at small lags are not large and slowly decreasing over time, the usual signature of a trend. Lastly, we can conclude that this data is not white noise and contains useful information, since more than 5% of the ACF spikes fall outside the dotted bounds.
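The time plot, seasonal plot, and ACF plot described above could be produced along the following lines, assuming the tsibble is named spain as in the setup sketch.

spain |> autoplot(Temperature)                          # time plot: strong 12-month seasonality, no trend
spain |> gg_season(Temperature)                         # seasonal plot: peaks in summer, troughs in winter
spain |> ACF(Temperature, lag_max = 36) |> autoplot()   # large positive spikes at the seasonal lags 12, 24, 36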
5.4 Transformations and adjustments
Check if your data need any transformations or adjustments. If transforming seems appropriate, do so and describe the effects using the plot of the transformed data.
This data set does not need a calendar adjustment: because the series consists of monthly average temperatures rather than monthly totals, any variation due to simple calendar effects (such as differing month lengths) is already removed by averaging.
This data set does not need a population adjustment since weather data is not affected by population changes.
This data does not need an inflation adjustment due to weather data not being affected by the value of money.
This data does not need a mathematical adjustment since the variation does not increase or decrease with the level of the series. It can be seen that this data is homoscedastic rather than heteroscedastic.
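As an optional check on whether a Box-Cox transformation is warranted, Guerrero’s method can be used to estimate a transformation parameter; an estimate close to 1 would support leaving the series untransformed. This is only a supplementary sketch and is not reported above.

spain |> features(Temperature, features = guerrero)     # lambda near 1 implies no transformation is needed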
5.5 Time Series Decomposition
In this section, decompose your time series to understand its components. Discuss features of the data.
Based on the X-11, SEATS, and STL decomposition methods, the resulting plots are clearly very similar. The STL and X-11 methods display very similar remainder patterns, whereas the SEATS method seems to identify more variation in the remainder component. The seasonal component is very similar across the three methods, given how seasonal this data set is. The trend component is also very similar across the three methods, since there is almost no trend in the data; the large grey scale bar next to the trend panel indicates that the trend operates on a very small scale. However, the STL method does not smooth the trend as much as the other two methods. These decompositions let us see the trend-cycle, seasonal, and remainder components and how they combine to form the original series of Spain’s average temperatures.
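A sketch of how these decompositions could be obtained is shown below; the STL seasonal window choice is an assumption, and the X-11 and SEATS variants require the seasonal package to be installed.

spain |>
  model(stl = STL(Temperature ~ season(window = "periodic"))) |>   # assumes a fixed seasonal pattern
  components() |>
  autoplot()

# X-11 and SEATS decompositions (both need the 'seasonal' package)
spain |> model(x11 = X_13ARIMA_SEATS(Temperature ~ x11())) |> components() |> autoplot()
spain |> model(seats = X_13ARIMA_SEATS(Temperature ~ seats())) |> components() |> autoplot()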
5.6 Training set
In this section, use filter_index function to extract the first 80% of the observations (training set)
Filtering the observations from January 2015 through August 2021 gives the first 80% of the data, which forms the training set. The test set comprises the last 20% of the data, from September 2021 through April 2023, as shown in the sketch below.
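A sketch using filter_index():

train <- spain |> filter_index("2015 Jan" ~ "2021 Aug")   # first 80% of the observations (80 months)
test  <- spain |> filter_index("2021 Sep" ~ "2023 Apr")   # last 20% of the observations (20 months)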
5.7 Fit the models
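The report does not list the exact model specifications, so the sketch below is one plausible way to fit the exponential smoothing, regression, theta, and neural network models on the training set; the regression terms and the theta and neural network settings are assumptions.

fit <- train |>
  model(
    SES                 = ETS(Temperature ~ error("A") + trend("N") + season("N")),
    Holt                = ETS(Temperature ~ error("A") + trend("A") + season("N")),
    Holt_Winters_Add    = ETS(Temperature ~ error("A") + trend("A") + season("A")),
    Holt_Winters_Damped = ETS(Temperature ~ error("A") + trend("Ad") + season("A")),
    ETS                 = ETS(Temperature),                        # automatic ETS selection
    Reg                 = TSLM(Temperature ~ trend() + season()),  # assumed regression specification
    Theta_M             = THETA(Temperature),                      # theta method (seasonal variant assumed)
    N_Net               = NNETAR(Temperature)                      # neural network autoregression
  )
fit |> forecast(new_data = test) |> accuracy(spain)                # test-set forecast accuracy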
5.8 ARIMA Section (Stationarity)
# A tibble: 1 × 1
nsdiffs
<int>
1 1
We can see above that in order to make this time series stationary, we should apply one seasonal difference.
# A tibble: 1 × 2
kpss_stat kpss_pvalue
<dbl> <dbl>
1 0.122 0.1
Using the KPSS test, we can check whether the seasonal differencing made the series stationary. Because the KPSS p-value is 0.1 and the null hypothesis of the KPSS test is stationarity, we fail to reject the null hypothesis and deem the differenced data stationary.
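These unit-root checks can be reproduced with the feasts features shown below; the training set is assumed to be named train as above.

train |> features(Temperature, unitroot_nsdiffs)            # returns nsdiffs = 1
train |>
  mutate(Temp_sdiff = difference(Temperature, lag = 12)) |>  # apply one seasonal difference
  features(Temp_sdiff, unitroot_kpss)                        # p-value of 0.1: fail to reject stationarity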
# A mable: 5 x 2
# Key: Model name [5]
`Model name` Orders
<chr> <model>
1 arima100110 <ARIMA(1,0,0)(1,1,0)[12]>
2 arima101210 <ARIMA(1,0,1)(2,1,0)[12]>
3 arima200210 <ARIMA(2,0,0)(2,1,0)[12]>
4 auto <ARIMA(1,0,0)(2,1,0)[12] w/ drift>
5 dyn_reg <LM w/ ARIMA(1,0,0)(2,0,0)[12] errors>
# A tibble: 5 × 6
.model sigma2 log_lik AIC AICc BIC
<chr> <dbl> <dbl> <dbl> <dbl> <dbl>
1 auto 5.82 -205. 421. 421. 433.
2 arima101210 5.88 -205. 421. 422. 433.
3 arima200210 6.00 -206. 422. 423. 434.
4 arima100110 6.88 -211. 428. 428. 436.
5 dyn_reg 3.98 -203. 441. 448. 485.
Looking at the AICc, the auto ARIMA (ARIMA(1,0,0)(2,1,0)[12] w/ drift) has the lowest value. Since the corrected Akaike information criterion is the AIC with a correction for small sample sizes, the ARIMA(1,0,0)(2,1,0)[12] w/ drift model, with an AICc of 421.44, is considered the most suitable of the presented models: it best captures the underlying patterns in the data while penalizing excessive model complexity.
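A sketch of how the candidate ARIMA models and the comparison above could be generated; the regressors used in dyn_reg are an assumption.

arima_fit <- train |>
  model(
    arima100110 = ARIMA(Temperature ~ pdq(1,0,0) + PDQ(1,1,0)),
    arima101210 = ARIMA(Temperature ~ pdq(1,0,1) + PDQ(2,1,0)),
    arima200210 = ARIMA(Temperature ~ pdq(2,0,0) + PDQ(2,1,0)),
    auto        = ARIMA(Temperature),                              # automatic order selection
    dyn_reg     = ARIMA(Temperature ~ trend() + season())          # regression with ARIMA errors (assumed regressors)
  )
glance(arima_fit) |> arrange(AICc)                                 # auto ARIMA has the lowest AICc
arima_fit |> forecast(new_data = test) |> accuracy(spain)          # test-set forecast accuracy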
# A tibble: 5 × 10
.model .type ME RMSE MAE MPE MAPE MASE RMSSE ACF1
<chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 arima100110 Test -0.118 3.07 2.31 -0.707 4.34 0.827 0.939 0.154
2 arima101210 Test -0.572 2.81 2.12 -1.48 4.04 0.759 0.859 0.152
3 arima200210 Test -0.722 2.89 2.21 -1.77 4.25 0.794 0.885 0.178
4 auto Test -0.814 2.40 1.86 -1.86 3.61 0.668 0.733 0.0862
5 dyn_reg Test -0.386 2.52 2.11 -0.975 3.98 0.757 0.770 0.237
The auto ARIMA model, ARIMA(1,0,0)(2,1,0)[12] w/ drift, has the lowest MAE, MAPE, MASE, and RMSSE, which suggests that it provides the best forecast accuracy: its predictions are closest to the actual values of the test set. Because this ARIMA model has the lowest AICc as well as the lowest MAE, MAPE, MASE, and RMSSE, we conclude that it is the best ARIMA choice.
5.9 Compare the forecast accuracy across models
Next, forecast the last 20% of the data and test the forecast accuracy.
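A sketch of how the benchmark comparison below could be produced on the training and test sets defined earlier:

bench_fit <- train |>
  model(
    Mean             = MEAN(Temperature),
    `Naïve`          = NAIVE(Temperature),
    Drift            = RW(Temperature ~ drift()),
    `Seasonal naïve` = SNAIVE(Temperature)
  )
bench_fit |> forecast(new_data = test) |> accuracy(spain)   # the seasonal naive method has the lowest MASE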
# A tibble: 4 × 10
.model .type ME RMSE MAE MPE MAPE MASE RMSSE ACF1
<chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 Drift Test -21.7 24.4 21.7 -43.3 43.3 7.78 7.47 0.772
2 Mean Test -3.53 11.5 10.3 -10.1 19.4 3.69 3.52 0.782
3 Naïve Test -17.7 20.8 18.3 -35.9 36.7 6.55 6.37 0.782
4 Seasonal naïve Test 1.60 3.73 3.11 2.38 5.62 1.12 1.14 -0.163
The Seasonal Naive method provides the best forecast accuracy among these benchmark methods. All measures (ME, RMSE, MAE, MPE, MAPE, MASE, and RMSSE) are lowest for the Seasonal Naive method, which makes sense because the data are highly seasonal. This indicates that the Seasonal Naive method’s predictions are closest to the actual values of the test set and that it is the most accurate benchmark to use when forecasting the data. This is also clear when the forecasts are compared to the test set in the autoplot, which shows the Seasonal Naive method producing the best forecast among the benchmark methods.
# A tibble: 7 × 10
.model .type ME RMSE MAE MPE MAPE MASE RMSSE ACF1
<chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 ETS Test 1.86 2.91 2.49 3.07 4.40 0.893 0.890 0.113
2 Holt Test -34.7 37.6 34.7 -67.0 67.0 12.4 11.5 0.779
3 Holt_Winters_Addi… Test 2.45 3.42 2.89 4.14 5.08 1.04 1.05 0.176
4 Holt_Winters_Damp… Test 1.79 2.91 2.47 2.88 4.34 0.886 0.889 0.116
5 Reg Test 0.313 2.33 2.02 0.242 3.69 0.725 0.713 0.155
6 SES Test -17.7 20.8 18.3 -35.9 36.7 6.55 6.37 0.782
7 additive_ETS Test 2.45 3.42 2.89 4.14 5.08 1.04 1.05 0.176
Based on these models, the Regression model appears to be the most accurate choice, closely followed by ETS. All measures (ME, RMSE, MAE, MPE, MAPE, MASE, and RMSSE) are lowest for the Regression model. This indicates that the Regression model’s predictions are closest to the actual values of the test set and that it is the most accurate of the models listed above.
# A tibble: 3 × 10
.model .type ME RMSE MAE MPE MAPE MASE RMSSE ACF1
<chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 N_Net Test 1.29 3.91 2.97 1.48 5.25 1.06 1.20 0.292
2 Theta_A Test 2.01 3.08 2.61 3.31 4.57 0.934 0.941 0.131
3 Theta_M Test 1.54 2.92 2.50 2.30 4.34 0.895 0.894 0.214
Based on these models, the Theta Multiplicative model appears to be the most accurate choice. RMSE, MAE, MAPE, MASE, and RMSSE are all lowest for this model, indicating that its predictions are closest to the actual values of the test set and that it is the most accurate of the models listed above.
# A tibble: 5 × 10
.model .type ME RMSE MAE MPE MAPE MASE RMSSE ACF1
<chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 arima100110 Test -0.118 3.07 2.31 -0.707 4.34 0.827 0.939 0.154
2 arima101210 Test -0.572 2.81 2.12 -1.48 4.04 0.759 0.859 0.152
3 arima200210 Test -0.722 2.89 2.21 -1.77 4.25 0.794 0.885 0.178
4 auto Test -0.814 2.40 1.86 -1.86 3.61 0.668 0.733 0.0862
5 dyn_reg Test -0.386 2.52 2.11 -0.975 3.98 0.757 0.770 0.237
Based on these models, the auto ARIMA model, ARIMA(1,0,0)(2,1,0)[12] w/ drift, appears to be the most accurate choice. RMSE, MAE, MAPE, MASE, and RMSSE are all lowest for this model, indicating that its predictions are closest to the actual values of the test set and that it is the most accurate of the models listed above.
# A tibble: 4 × 10
.model .type ME RMSE MAE MPE MAPE MASE RMSSE ACF1
<chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 Reg Test 0.313 2.33 2.02 0.242 3.69 0.725 0.713 0.155
2 Seasonal_Naive Test 1.60 3.73 3.11 2.38 5.62 1.12 1.14 -0.163
3 auto Test -0.814 2.40 1.86 -1.86 3.61 0.668 0.733 0.0862
4 comb1 Test 0.958 2.91 2.35 1.31 4.23 0.843 0.889 -0.0310
Here, we created an ensemble method by combining the Seasonal Naive and Regression methods. However, the auto ARIMA model, ARIMA(1,0,0)(2,1,0)[12] w/ drift, still produced the best forecast accuracy, with the lowest MAPE and MASE.
# A tibble: 4 × 10
.model .type ME RMSE MAE MPE MAPE MASE RMSSE ACF1
<chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 Reg Test 0.313 2.33 2.02 0.242 3.69 0.725 0.713 0.155
2 Seasonal_Naive Test 1.60 3.73 3.11 2.38 5.62 1.12 1.14 -0.163
3 auto Test -0.814 2.40 1.86 -1.86 3.61 0.668 0.733 0.0862
4 comb2 Test 0.367 2.59 2.08 0.253 3.80 0.746 0.791 0.00771
Here, we created another ensemble by combining the Seasonal Naive, Regression, and auto ARIMA methods to see whether this combination would lower the forecast errors further. However, the auto ARIMA, ARIMA(1,0,0)(2,1,0)[12] w/ drift, still has the lowest MAPE and MASE values, showing that it remains the most accurate choice.
# A tibble: 3 × 10
.model .type ME RMSE MAE MPE MAPE MASE RMSSE ACF1
<chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 Reg Test 0.313 2.33 2.02 0.242 3.69 0.725 0.713 0.155
2 auto Test -0.814 2.40 1.86 -1.86 3.61 0.668 0.733 0.0862
3 ensemble_auto_reg Test -0.250 2.29 1.90 -0.810 3.56 0.681 0.699 0.117
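The third ensemble, which averages the auto ARIMA and Regression models, has a MASE of 0.68, still slightly above the 0.67 of the auto ARIMA model alone. In fable, such a combination can be formed by averaging fitted models inside mutate(); a sketch is below (the regression specification is an assumption).

ens_fit <- train |>
  model(
    Reg  = TSLM(Temperature ~ trend() + season()),   # assumed regression specification
    auto = ARIMA(Temperature)
  ) |>
  mutate(ensemble_auto_reg = (auto + Reg) / 2)       # simple average of the two models
ens_fit |> forecast(new_data = test) |> accuracy(spain)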
5.10 Forecast Accuracy Comparison Among the Best Models
# A tibble: 7 × 10
.model .type ME RMSE MAE MPE MAPE MASE RMSSE ACF1
<chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 ETS Test 1.86 2.91 2.49 3.07 4.40 0.893 0.890 0.113
2 Holt_Winters_Damped Test 1.79 2.91 2.47 2.88 4.34 0.886 0.889 0.116
3 Reg Test 0.313 2.33 2.02 0.242 3.69 0.725 0.713 0.155
4 Seasonal naïve Test 1.60 3.73 3.11 2.38 5.62 1.12 1.14 -0.163
5 Theta_M Test 1.54 2.92 2.50 2.30 4.34 0.895 0.894 0.214
6 auto Test -0.814 2.40 1.86 -1.86 3.61 0.668 0.733 0.0862
7 ensemble_Auto_Reg Test -0.250 2.29 1.90 -0.810 3.56 0.681 0.699 0.117
Which method provides the best forecast accuracy? Discuss the results.
Based on the MASE forecast errors, the auto ARIMA (ARIMA(1,0,0)(2,1,0)[12] w/ drift) has the lowest errors of all the models used. This suggests that the auto ARIMA model provides the best forecast accuracy on this metric: its predictions are closest to the actual values of the test set, making it the most accurate method for forecasting this data.
5.11 Test the residuals of your preferred method
Assuming that the auto ARIMA (ARIMA(1,0,0)(2,1,0)[12] w/ drift) method provides the best forecast accuracy.
# A tibble: 1 × 3
.model lb_stat lb_pvalue
<chr> <dbl> <dbl>
1 auto 4.92 0.896
Discuss residual properties and the Ljung-Box test value.
The residuals of the auto ARIMA model, ARIMA(1,0,0)(2,1,0)[12] w/ drift, resemble white noise.
Based on the time plot of the auto ARIMA residuals, the mean of the residuals appears to be very close to zero, with the residual points centered around zero. The residuals appear to be uncorrelated, with all ACF spikes within the dotted bounds. Furthermore, the histogram appears to be approximately normally distributed around zero. The residuals also appear to have constant variance, with a homoscedastic pattern in the time plot.
Based on the Ljung-Box test, the residuals of the auto ARIMA model represent white noise: the p-value (0.896) is larger than alpha (0.05), so we fail to reject the null hypothesis, indicating that the residuals are white noise and that no important information was left in the residuals when fitting this model.
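A sketch of how the residual diagnostics and the Ljung-Box test could be produced; the lag choice in the test is illustrative.

best_fit <- train |> model(auto = ARIMA(Temperature))
best_fit |> gg_tsresiduals()                    # time plot, ACF, and histogram of the residuals
augment(best_fit) |>
  features(.innov, ljung_box, lag = 10)         # lag choice is illustrative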
5.12 Forecasting
Now, if you are satisfied with your model, use the complete data to fit the model
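A sketch of refitting the chosen specification on the complete series and producing the twenty-month-ahead forecast:

final_fit <- spain |>
  model(final = ARIMA(Temperature ~ 1 + pdq(1,0,0) + PDQ(2,1,0)))  # ARIMA(1,0,0)(2,1,0)[12] with drift
final_fit |>
  forecast(h = "20 months") |>
  autoplot(spain)                                                  # forecast with 80% and 95% prediction intervals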
6 Results
For this analysis, MASE was the metric used to determine the most accurate model on the test set. Working through the benchmark, exponential smoothing, theta, ARIMA, and ensemble methods, the best model from each group was added to the ‘Forecast Accuracy Comparison Among the Best Models’ tibble. The Seasonal Naive model has fairly low forecast errors, with a MASE of about 1.1, but the table clearly contains models that fit the test set better and can be deemed more accurate choices. The most accurate choice is the auto ARIMA model, ARIMA(1,0,0)(2,1,0)[12] w/ drift, which performed best on the test set with a MASE of 0.67. These forecast errors are the lowest of the seven models compared (ETS, Holt-Winters Damped, Regression, Seasonal Naive, Theta Multiplicative, ARIMA(1,0,0)(2,1,0)[12] w/ drift, and the ensemble of auto ARIMA and Regression), so we conclude that this method provides the best forecast accuracy.
From the residual diagnostics of the ARIMA(1,0,0)(2,1,0)[12] w/ drift model, the residuals clearly represent white noise. The mean of the residuals appears to be very close to zero in the time plot, with the residual points centered around zero and a homoscedastic pattern. The residuals also appear to be uncorrelated, with all ACF spikes within the dotted bounds and no outliers present. Furthermore, the histogram appears to be approximately normally distributed around zero. Based on the Ljung-Box test, the p-value is 0.896, substantially larger than alpha (0.05), so we fail to reject the null hypothesis and conclude that the residuals are white noise. As a result, no useful information was left in the residuals when fitting this model, which supports the accuracy of our forecast.
7 Discussion and conclusion
Using the auto ARIMA model, ARIMA(1,0,0)(2,1,0)[12] w/ drift, a twenty-month-ahead forecast was created. The forecast displays strong seasonality with a slight decrease in temperatures for the upcoming summer seasons, shown with 80% and 95% prediction intervals. Because these intervals sit very close to the point forecast, the model appears to be a good fit to the data, with relatively little forecast uncertainty. In 2023, the average summer temperature is projected to be roughly 76°F. In 2024, the average low is around 46°F, with the average high around 75°F in the summer months.
This illustrates that although Spain is currently experiencing an abnormal heat wave, there was no underlying upward temperature trend over the past seven years. If monthly temperatures had been rising slightly over the years, the forecasts would most likely follow that trend. Instead, the forecast summer temperatures are lower than those of all but one of the observed summers, which suggests that an underlying climate-change trend is not behind this abnormal heat wave and that temperatures should in fact decrease in the upcoming summer months.
While the forecast shows a potential decrease in temperature for Spain’s next two summers, we do not have data on the temperatures Spain experienced this past summer of 2023. That spike is not present in the data, so the forecasts reflect only Spain’s past monthly average temperatures without this heat wave. As a result, this forecast may project temperatures somewhat lower than forecasts that include the recent data would. Even so, these forecasts give Spain’s officials a clear view of the temperatures expected based on the past seven years, and they can begin planning electricity demand, crop production strategies, and tourism activity around this slight decrease in average temperatures over the next twenty months.
8 Works Cited
Salas, Erick Burgueño. “Spain: Monthly Average Temperature 2015-2023.” Statista, 30 May 2023, www.statista.com/statistics/802774/monthly-mean-temperature-in-spain/.
Vitalone, Vivi. “‘The Heat Is Relentless’: Scorching Temperatures Bring Misery to Daily Life in Spain.” CNBC, 27 July 2023, www.cnbc.com/2023/07/27/extreme-heat-soaring-temperatures-bring-misery-to-daily-life-in-spain.html.