1. Overview

In the world of air travel, 2022 was a year defined by cancellations. While 2023 has seen a decrease in cancellations, FlightAware reports an increase in delay time over last year.

Per federal data, most flight delays in recent years have been caused by issues within the control of airlines.

Discovering and understanding which specific issues cause delays could inform airlines’ priorities and help mitigate delay time moving forward.

Of course, with something as complicated as air travel, it is not unreasonable to expect small delays. Instead of splitting hairs over delays of a few minutes, we will attempt to see which variables are most closely tied to a delay of 1 hour or more. While it is clear that ‘Air Carrier Delay’ accounts for a sizable chunk of delay time, that category is very vague. We will use a dataset of flight records to create a predictive logistic model, with a goal of seeing which variables have an effect on delays of an hour or more, and how strong that effect is.

We will create different models with multiple statistical methods, with the goals of determining which is best for prediction and of seeing which variables, if any, are shown to consistently impact whether or not a flight experiences a long delay.

1.a Description of Data

The flight delay data has 3593 observations of 11 variables. They are:

Carrier: The airline

Airport_Distance: Distance between two airports

Number_of_Flights: Total number of flights in the airport

Weather: Weather condition, measured on a scale from 0 (mild) to 10 (extreme)

Support_Crew_Available: Number of support crew

Baggage_loading_time: Time in minutes spent loading baggage

Late_Arrival_o: Time in minutes the plane arrived late

Cleaning_o: Time in minutes spent cleaning the aircraft

Fueling_o: Time in minutes spent fueling the aircraft

Security_o: Time in minutes spent in security checking

Arr_Delay: Flight delay in minutes. This is the dependent variable of the dataset.

For logistic regression, an additional variable, Hour_Delay, will be created to see if a plane is 1 hour late or more.
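As a quick sketch (assuming the data has been read into a dataframe called flight; the object names here are ours), this variable could be derived as follows:

# Read the data and derive the binary response
flight <- read.csv("https://pengdsci.github.io/datasets/FlightDelay/Flight_delay-data.csv")
flight$Hour_Delay <- ifelse(flight$Arr_Delay >= 60, 1, 0)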

A copy of this publicly available data is stored at https://pengdsci.github.io/datasets/FlightDelay/Flight_delay-data.csv.

1.b Dataset Overview

First, as the majority of the variables are numerical, an overview of summary statistics is warranted.

Amazingly, there are no missing values. Of note are the extreme variance within the Support_Crew_Available and Number_of_Flights variables and the minimal variance within the Weather variable.

We will also take a preliminary look at the Carrier variable with a frequency plot:

This shows extreme differences in the representation of each carrier, but there are enough observations from some (B6, UA, etc.) that we believe the Carrier variable could prove significant during analysis. As such, it will be left in.

1.c Looking for Outliers

While the summary of variables like Airport Distance does not appear to cause alarm, the high variance of some statistics suggests the possibility of outliers. While outliers are expected, it is worth identifying and individually reviewing them to check for possible input errors.

The test of outliers for number of flights returns 3 possible outliers: observations 1652, 3163, and 2729. The delay for each observation is 0. While these observations do not appear to be errors, they strongly suggest that number of flights could be a good predictor of delay time. Similarly, the baggage loading time also shows 2729 and 3163 as low outliers.
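One way to reproduce such a check is with base R’s boxplot statistics (a sketch; the exact outlier test used may differ):

# Flag observations outside the boxplot whiskers (one possible outlier test)
out_vals <- boxplot.stats(flight$Number_of_Flights)$out
which(flight$Number_of_Flights %in% out_vals)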

1.d Investigation for Collinearity

As there are multiple variables related to airplane prep (fueling, cleaning, baggage loading, and security), it may be worth checking for possible collinearity between two predictor variables. If one can be identified as redundant and removed, it will save us time and computational power.
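A quick way to screen for this is a correlation matrix over the numeric variables (a sketch):

# Pairwise correlations among numeric variables
num_vars <- sapply(flight, is.numeric)
round(cor(flight[, num_vars]), 2)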

As no variable appears to have a high correlation with another, there is no definitive preliminary evidence of multicollinearity, so all variables could potentially be included into the model.

1.e Discretizing Variables

Since two flights are outliers on both baggage loading time and number of flights, we will see if replacing those variables with bins of values has an impact on the model. We could not find any precedent for this online, so we will use the quartile ranges of each variable to split it into ‘low’, ‘middle’, and ‘high’.
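A sketch of this binning, using cut() on the quartiles of baggage loading time (the exact breakpoints and column names are our assumption):

# Split a numeric variable into 'low'/'middle'/'high' by quartile range
q <- quantile(flight$Baggage_loading_time, probs = c(0, .25, .75, 1))
flight$Baggage_bin <- cut(flight$Baggage_loading_time, breaks = q,
                          labels = c("low", "middle", "high"),
                          include.lowest = TRUE)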

While the residual plots clearly look different, it is not entirely clear at a glance which model has less error. On closer inspection, the discretized model appears to have increased our residuals, so we will not consider a discretized model as a possibility moving forward.

1.f Pairwise Associations

As many variables in our model are numeric, we will look at pairwise associations of all variables, focusing on correlation with the variable of interest, Arr_Delay.

This output shows some correlation between certain variables (specifically baggage loading, late arrival, and number of flights) and the variable of interest. These variables make a strong case for inclusion in our predictive model.

2. Building a Model

As stated in our introduction, building a logistic model will allow us to see which variables have a significant effect on the presence of a long delay and how strong that effect is.

2.a Logistic Regression

As we are attempting to predict the probability of a delay of one hour or more, our response variable can only take values between 0 and 1. The predictive model most appropriate for this will be a logistic regression, following a form of:

\(P(Y) = \frac{e^\delta}{1+e^\delta}\)

Where \(\delta\) is equal to:

\(\beta_{0} + \beta_{1}x_{1} + \beta_{2}x_{2} + ... + \beta_{k}x_{k}\)

And where each \(\beta\) represents the coefficient associated with each variable (\(x_{k}\)).
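In R, a model of this form is fit with glm() and the binomial family. A minimal sketch (Arr_Delay is excluded, since Hour_Delay is derived directly from it):

# Full logistic model on all predictors
full_model <- glm(Hour_Delay ~ . - Arr_Delay, family = binomial, data = flight)
summary(full_model)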

2.b Development of Candidate Models

2.b.1 Initial Model

The initial model simply uses every variable given in the initial dataframe. While very possibly accurate, this model is quite large and lacks transformations that could assist with predictive modeling.

This model will be good to have as a baseline comparison for other candidate models. Our initial model residuals appear to follow a cubic line, so the inclusion of a second- or third-order term may help.

2.b.2 Including a Second-order variable

This can help mitigate errors in the residuals, as seen above. Only numerical variables will be transformed, as mathematical transformations will not work with categorical variables such as the name of the airline!

And, because of the extremely obvious cubic shape of our initial model’s residual plot, we will also try some cubed variables.

The cubic and quadratic models both, somehow, reduce error slightly, despite each having a single residual that is off by more than 300.

2.c Selection of Optimal Model

To identify the optimal model, we will use the ‘step’ function in R to eliminate redundant variables in each of our model sets.

First, we will apply various selection methods to our initial dataframe.
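A sketch of this step, using the full model fit earlier (the direction argument can be set to "backward", "forward", or "both"):

# Stepwise selection by AIC
reduced_model <- step(full_model, direction = "both", trace = 0)
summary(reduced_model)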

Both selection processes narrowed the model down to Security_o, Airport_Distance, Number_of_Flights, Weather, Support_Crew_Available, Baggage_loading_time, and Late_Arrival_o. This suggests that in a candidate model with no transformations or additional terms, those variables should be included.

Next, we apply the same selection methods to our quadratic and cubic models.

The two selection processes settled on largely the same variables as our first-order model, but also included higher-order terms for some of those same variables. We do not want a final model to contain the same variable at multiple orders, as it could give undue weight to those variables. However, this may indicate a strong connection between the repeated terms and the variable of interest, so we will create a reduced model with them.

2.d Candidate Models

Based on the previous results, we have two candidate models, as well as our initial model:

We will look at the significance of each coefficient, of each model, as an initial comparison:

As expected, the full model shows many variables with insignificant p-values. The minimal model, while containing exclusively significant values, does not have as many strong predictors as the mid-sized model. In the mid-sized model, security time does not achieve statistical significance, but it is fairly close.

As expected, almost all variables show a positive relationship with probability of longer delays. This should not come as a surprise, as most variables are number of minutes spent at some stage before a flight takes off (security, baggage loading time, etc). The number of support staff is negatively correlated, which also makes sense, as more hands available should lead to faster performance of those (and other, possibly unrecorded) tasks.

2.e Prediction

To test the similarity of the candidates, we will create a sample dataset by choosing values for each variable. We will then use R to predict the delay for each sample flight under both the initial and candidate models, and compare the results.
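A sketch of the comparison (sample_flights is a hypothetical dataframe of hand-picked x values):

# Predicted probability of an hour-plus delay under each model
predict(full_model, newdata = sample_flights, type = "response")
predict(reduced_model, newdata = sample_flights, type = "response")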

As demonstrated in the table, the models have similar predictions despite very extreme x values. It is our belief that the candidate model is ideal, as it generates comparable predictions at a fraction of the memory. This advantage will only continue to grow as larger datasets are used with this same model.

2.f Model Building Conclusions

In this section we utilized different strategies to optimize our predictive model and arrived at a reduced model formula. We were able to test and rule out different transformations of variables, compare possible models, and select a ‘champion’ model to be compared against other model building strategies in subsequent sections.

A copy of this data is now publicly available at: https://raw.githubusercontent.com/AlexDragonetti/flightcandidate/main/can.csv

3. Training, Cross Validation, and Testing the Model

To train our model, we will use only a portion of the data to determine an optimal cut-off probability and then test it on the remaining portion.

3.a Separating and Training the Model

First, we need to create separate dataframes for testing and training purposes.

Next, we ‘train’ the model by finding the most accurate cutoff probability. We do this by splitting our training data into 5 ‘folds’. Next, we use different combinations of those folds to test 20 different cutoff points. Finally, we see which cutoff point had the highest average accuracy across the fold combinations. This is called cross-validation.
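A sketch of that loop, assuming the training dataframe is called train and the selected model object is reduced_model (both names are ours):

# 5-fold cross-validation over 20 candidate cutoffs
folds <- sample(rep(1:5, length.out = nrow(train)))
cutoffs <- seq(0.05, 1, by = 0.05)
acc <- matrix(NA, nrow = 5, ncol = length(cutoffs))
for (k in 1:5) {
  fit <- glm(formula(reduced_model), family = binomial, data = train[folds != k, ])
  p <- predict(fit, newdata = train[folds == k, ], type = "response")
  for (j in seq_along(cutoffs)) {
    acc[k, j] <- mean((p >= cutoffs[j]) == (train$Hour_Delay[folds == k] == 1))
  }
}
cutoffs[which.max(colMeans(acc))]  # cutoff with the best average accuracy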

The above shows the model’s optimal cutoff probability and accuracy.

3.b Testing our Cut-off Probability

We now will test this cutoff probability against the separate testing dataframe. This probability should be different from our previous probabilities, as the testing data has not been used in its calculation. Note that, because the training/testing data split is different every time the code is run (including to make this document), it is impossible to know what the optimal cutoff point would be without setting a seed (which then isn’t random). Because of this, we will simply test the cutoff probability of .52 for demonstration purposes.

Above is our model’s accuracy, when applied to the testing data subset.

3.c Specificity and Sensitivity of our Model

Specificity and Sensitivity are two important pieces of information about our predictive model. The Sensitivity is the percentage of actual positives that are predicted correctly: “Of the flights that really had a long delay, how many did we call yes?”

The Specificity answers the opposite. It is the percentage of actual negatives that are predicted correctly: “Of the flights that really did not have a long delay, how many did we call no?”

We will use the newdata dataframe generated in the previous section to split our guesses into four groups:

p0.a0 - the number of correct nos
p0.a1 - the number of incorrect nos
p1.a0 - the number of incorrect yeses
p1.a1 - the number of correct yeses
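A sketch of these counts and the resulting rates (assuming the newdata testing dataframe, a fitted model object fit, and the .52 cutoff from above):

# Confusion-matrix counts at the chosen cutoff
pred <- as.numeric(predict(fit, newdata = newdata, type = "response") >= 0.52)
p0.a0 <- sum(pred == 0 & newdata$Hour_Delay == 0)  # correct nos
p0.a1 <- sum(pred == 0 & newdata$Hour_Delay == 1)  # incorrect nos
p1.a0 <- sum(pred == 1 & newdata$Hour_Delay == 0)  # incorrect yeses
p1.a1 <- sum(pred == 1 & newdata$Hour_Delay == 1)  # correct yeses
sensitivity <- p1.a1 / (p1.a1 + p0.a1)
specificity <- p0.a0 / (p0.a0 + p1.a0)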

From this, we can see that our sensitivity is about 91%, while our specificity is only 81%. Our model is better at predicting the presence of an hour delay than the lack of an hour delay. If we wanted to adjust the cutoff point to increase the specificity, we could use an ROC curve to establish an ideal point while factoring in both sensitivity and specificity.

3.d Generating an ROC (Receiver Operating Characteristic) Curve

An ROC curve can help us make decisions about a cutoff point. While our cutoff point is optimal for overall accuracy, we may prefer a cutoff point with a higher sensitivity (correct designation of a positive, in this case the presence of an hour-plus delay) or specificity (correct prediction of a negative, a delay under an hour). An ROC curve shows the concurrent sensitivity and specificity, allowing one to pick and choose what an ideal balance would be. It also allows us to measure the Area Under the Curve (AUC), a measure of the overall performance of a model. A theoretical perfect AUC is 1, while a theoretical random-guess model’s AUC is .5 (imagine your model was flipping a coin and, if heads, saying ‘yes, the plane will be an hour late’; that is the .5 random-guess model).
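A sketch of how such a curve can be produced with the pROC package (one option among several ROC packages):

# ROC curve and AUC for the fitted model on the test data
library(pROC)
roc_obj <- roc(newdata$Hour_Delay, predict(fit, newdata = newdata, type = "response"))
plot(roc_obj)
auc(roc_obj)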

The above graph was generated with a series of cutoff thresholds and could be used to determine the ideal cutoff point, if specificity or sensitivity need to be accommodated.

3.e Testing and Training Conclusion

In this section we established a baseline understanding of ROC/AUC and optimal cutoff points, which we will use to compare our champion model against future model building strategies.

4. Neural Network Predictions

Another possible means to an ideal cutoff could be creating a Neural Network (NN) model. Code-wise, creating an NN model will be very similar to the cross-validation done in the previous section.

4.a Normalizing Variables

For good results using the neuralnet function, we will normalize our numerical variables. Our scaling formula for each numeric value is as follows:

\(\hat{x}_{ij} = \frac{x_{ij} - \min_{i}(x_{ij})}{\max_{i}(x_{ij}) - \min_{i}(x_{ij})}\)

Meaning that the scaled x values are arrived at by starting with the original x, subtracting the lowest value of that variable, then dividing the difference by the difference between the highest and lowest x values. We will only do this for numerical variables, not dummy variables, as it will help mitigate variance. As dummy variables only take on values of 0 or 1, scaling would not change them much, but it will be important for variables like airport distance (which has a range in the hundreds).
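A sketch of this scaling (the helper name is ours; flight.hour is the dataframe referenced in the next section):

# Min-max normalization of the numeric columns
normalize <- function(x) (x - min(x)) / (max(x) - min(x))
num_cols <- sapply(flight.hour, is.numeric)
flight.hour[num_cols] <- lapply(flight.hour[num_cols], normalize)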

4.b Preparing Dataset for NN

Now, we will organize our standardized, complete flight.hour dataframe in such a way that it can be read as a model formula. For convenience, we will make a new dataset and adhere to CamelCase naming conventions, where words in a phrase are separated by capital letters, allowing us to avoid spaces.

Many variable names are workable. We will keep all of the carrier names as is and rename the others.

# Rename the numeric columns (15-24) to CamelCase
colnames(flightmtx)[15:24] <- c("AirportDistance", "NumberOfFlights", "Weather",
                                "SupportCrewAvailable", "BaggageLoadingTime",
                                "LateArrival", "Cleaning", "Fueling",
                                "Security", "HourDelay")

Now that our variables are renamed and scaled, we will write the model as an object called modelFormula which we can call upon to put into code at any time. Note that we are dropping the intercept, as our NN model will generate its own.
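A sketch of that object, assuming flightmtx came from model.matrix() so that the intercept is a column we can drop:

# Build the formula from the column names, excluding the intercept column
preds <- setdiff(colnames(flightmtx), c("(Intercept)", "HourDelay"))
modelFormula <- as.formula(paste("HourDelay ~", paste(preds, collapse = " + ")))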

4.c Splitting Data into Training and Testing

Just like the previous section, we will split our dataset into training and testing data for the purpose of cross-validation. See the previous section for a more detailed explanation.

4.d Building the NN Model

Our NN model will use the modelFormula object that we coded earlier. Aside from the use of the modelFormula object, the neuralnet command begins similarly to our glm code in the previous section. It differs by adding arguments afterwards, such as the hidden layers and the use of the rprop+ algorithm, telling the NN to use resilient backpropagation. Backpropagation is a means to improve predictive accuracy by calculating the gradient of a loss function (what we would call the Mean Square Error in classical statistics) with respect to the weights within the NN and using it to update those weights.
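A sketch of the call (trainmtx, the training portion of flightmtx, and the number of hidden nodes are our assumptions):

# Fit the network with resilient backpropagation
library(neuralnet)
nn_model <- neuralnet(modelFormula, data = as.data.frame(trainmtx),
                      hidden = 1, algorithm = "rprop+",
                      linear.output = FALSE)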

Above is a table of each variable’s coefficient, below is a graphical model of them and how they are used to determine the predicted outcome.

Figure 9: Network Model Visualization

And below is our logistic regression model, derived by the NN.

Finally, we will have our NN model find an optimal cutoff probability, using the same cross-validation technique that we used in our previous section. We expect that it will be slightly different, partially because it is a different technique, but also because our code generates a completely different random split for training and testing.

Above is the optimal Cut-off Probability, as suggested by our NN model. The ROC curve below will show the same information as in the previous section.

Similar to the previous section, the ROC curve helps determine an ideal cutoff probability, based on the desired sensitivity and specificity of our model. While favoring a more sensitive or specific model could be important due to external factors, there is no apparent need for either in the realm of flight delay predictions.

4.e Neural Network Conclusion

The above NN model building shows how machine learning can be utilized to create a predictive model and gave an ROC/AUC and cutoff point to compare to our previous ‘champion’.

5. Decision Tree Analysis

A decision tree is a means of making predictions based on the answers to a series of ‘questions’, which generate ‘rules’ defined by the available data. Whether or not a condition is met is determined at a node. Beginning with a root node, representing the entire sample, the data branches into different nodes. If a node splits into sub-nodes, it is aptly called a decision node. If a node does not split, it is called a terminal node or - to fit the theme - a leaf. A simpler decision tree can be seen below:

Here, the root node splits the sample based on income bracket. From there, it shows conditions to either continue to further decision nodes or end at a leaf, which shows that the model predicts a ‘yes’ or a ‘no’. Our own decision tree will follow the same principles, but will likely be more complex (as it contains more variables).

A decision tree may allow us to simplify our model and see if any variables are only relevant when certain conditions are met, like how in the above example, Credit Rating (CR) is not used to make a prediction for the ‘high’ income bracket.

5.a Splitting the Dataframe for training and validation

First, we will need an unweighted dataframe.

Next, we will need to separate it into training and testing sets. This is the same code from previous sections and accomplishes the same split.

5.b Different options for Decision Tree

When designing our decision tree, we have two options to explore: 1) whether or not to weight false positives and false negatives differently, and 2) how to determine if and when to split.

5.b.1 Assigning weight to different errors

No predictive model is perfect. We will have false positives and false negatives in any model, but the rpart() function allows us to treat them differently. As a reminder, the definitions are:

False Positive - Our model predicts that there will be a delay of 1 hour or more, but in reality there is not.
False Negative - Our model predicts that there will not be a delay of 1 hour or more, but in reality there is.

We believe that a false positive is much more costly than a false negative. Put simply, if you incorrectly predict a short delay and act accordingly (get to the airport with enough time to go through security, check-in, etc.), the end result is waiting in an airport terminal (something to be expected anyway). On the other hand, if you incorrectly predict a long delay and act accordingly (take your time arriving at the airport, etc.), you will likely miss your flight.

Keeping this in mind, our trees will explore weighting false positives and negatives the same, but also giving false positives ten times the weight of false negatives.

5.b.2 Gini vs Entropy Splitting

Our Decision Trees will consider splitting into different nodes based on one of two possible criteria: Gini or Information Gain (referred to as Entropy, which will be explained shortly).

Not to be confused with the summary measure of income inequality, the Gini index measures the variation of subgroups split by a feature variable. In the example above, the Gini measure (D) of the Student variable would vary numerically based on how many answered ‘Yes’ vs how many answered ‘No’. If all of them answered one or the other, D would be equal to 0.

The Information Gain of a split measures the reduction in impurity achieved by splitting a decision node. It is calculated by subtracting the Entropy of the child nodes from the Entropy of the parent node. The split with the highest Information Gain (difference between parent and child node Entropy) is usually taken first.

5.c Making the Trees

As we will be making four trees (weighted and unweighted, Gini and Entropy), it will be faster (and more space efficient) to write an R function rather than copy the same code four times. The trees themselves are fit with the rpart() function from the rpart package.
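A sketch of such a helper (function and object names are ours; the weighted version charges false positives ten times the cost of false negatives through rpart’s loss matrix, and train$Hour_Delay is assumed to be a factor with levels 0 and 1):

# Build a tree with a given splitting criterion and optional FP weighting
library(rpart)
make_tree <- function(split = c("gini", "information"), weighted = FALSE) {
  # Rows of the loss matrix are actual classes (0, 1), columns are predictions;
  # loss[1, 2] is the cost of a false positive
  loss <- matrix(c(0, 1, 1, 0), nrow = 2)
  if (weighted) loss[1, 2] <- 10
  rpart(Hour_Delay ~ ., data = train, method = "class",
        parms = list(split = match.arg(split), loss = loss))
}
gini_weighted <- make_tree("gini", weighted = TRUE)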

We can now use this function to create decision trees with different splitting and weighing methods.

Now, we will use the rpart.plot() package to visualize the Decision Trees

5.d ROC Curve for different Decision Tree models

As the ROC curve requires both sensitivity and specificity, we will write a function to determine them.

Now, we can use this new function to define and plot the ROC and AUC for each of our models.

Above is the ROC curve for each model considered. We will now look at the optimal cutoff point for each decision tree model. As, again, we are doing this for four different models, it will be faster if we write a function.

Above is the optimal cutoff point for each decision tree model. They do not seem to be radically different, but the choice would depend on user preference for weighting false positives and for purity measure.

5.e Decision Tree Conclusion

While the decision trees use far fewer variables than any other method outlined so far - especially the Entropy + Weighted model, which managed an AUC of .914 using only 2 variables on one occasion - they demonstrate accuracy comparable to other models, as evidenced by the cut-off scores. They provide the advantage of versatility and simplicity, which would be a boon if this model were adapted to a larger dataset. For this dataset’s size, the number of variables is certainly appropriate, but in a sample 100 times larger, being able to cut a model down to just two predictor variables would significantly reduce the computational power needed.

6. Principal Component Analysis

Our dataframe contains many variables that could be related. For example, the number of support staff available could very easily impact the amount of time spent loading baggage. Principal Component Analysis (PCA) can be very useful for a dataset with multicollinearity.

6.a Preparing the Data for PCA

To prepare our data for PCA, we will remove the categorical variable and, in order to mitigate the effect of outliers, scale the rest. We cannot use log scaling, as there are variables with values of 0; log-scaling changes those values to -Inf, which would break the analysis.
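A sketch of this preparation using prcomp(), which centers and scales each column (object names are ours; both delay measures are excluded so the components only summarize the predictors):

# Keep numeric predictors, drop the responses, and run a scaled PCA
num_flight <- flight[, sapply(flight, is.numeric)]
num_flight$Arr_Delay <- NULL
num_flight$Hour_Delay <- NULL
pca <- prcomp(num_flight, center = TRUE, scale. = TRUE)
summary(pca)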

6.b PCA analysis and Output

Next, we can see each variable’s factor from the PCA. For an example of how this grid can be read, consider PC1 below. PC1 can be defined as:

\(PC_1 = x_{\text{Airport\_Distance}} \cdot PC1_{\text{Airport\_Distance}} + x_{\text{Number\_of\_Flights}} \cdot PC1_{\text{Number\_of\_Flights}} + \dots + x_{k} \cdot PC1_{k}\)

And now the importance - ‘proportion of variance’ - of each component.

This shows the amount of variance explained by each PC. As the first explains the most, by far, we will attempt to include it in a model.

6.c Testing The PC model

We will run the same analysis we have done on the first candidate model to see if it has statistical significance.

The output does not indicate that the PC variable is significant. We will not consider a PC variable in final analysis.

6.d PCA Conclusion

Despite not coming away with a new piece for prospective models, our PCA has shown that there is no significant multicollinearity between variables. It was certainly worth investigating, as one would expect an airport experiencing a staff shortage to experience longer security times, baggage loading times, etc.

7. Bootstrap Aggregation Decision Trees

Bootstrap Aggregation (Bagging, for short) is a method that may allow for improvement over our basic decision trees. This method first bootstraps the data, trains models individually (called parallel training), then aggregates the outputs to ‘vote’ on the classification, given a set of values.
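A minimal sketch of that procedure as an explicit bootstrap loop (names are ours; a packaged implementation could be substituted, and we reuse the 10:1 false-positive weighting from section 5):

# Bagging by hand: bootstrap, fit a tree per resample, then majority-vote
library(rpart)
B <- 25                                   # number of bootstrap samples
votes <- matrix(0, nrow = nrow(test), ncol = B)
for (b in 1:B) {
  boot <- train[sample(nrow(train), replace = TRUE), ]
  tree <- rpart(Hour_Delay ~ ., data = boot, method = "class",
                parms = list(loss = matrix(c(0, 1, 10, 0), nrow = 2)))
  votes[, b] <- as.numeric(as.character(predict(tree, newdata = test, type = "class")))
}
bagged_pred <- as.numeric(rowMeans(votes) >= 0.5)  # majority vote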

7.a Splitting our Data for Testing and Training

This is the same testing/training code from earlier in the report.

7.b Bagging our Data

The code is very similar to the Decision Tree modeling code. We will again weigh false positives more heavily than false negatives, for reasons outlined in section 5.

We will then plot the ROC curve and AUC to measure the efficacy of this model.

The above output shows the optimal cutoff probability and ROC curve for our Bootstrap Aggregation model. It should be compared against the weighted Gini model in the earlier Decision Tree section.

7.c Bagging Conclusions

Bagging presents a means of refining the previously discussed Decision Tree model. Through Bootstrap resampling, we can improve our Decision Trees, as seen in the improvement over section 5’s unweighted Gini model.

8. Conclusion and Discussion

8.a Conclusions

First, we would like to make clear: no one involved in this study is particularly familiar with research related to flights, travel, etc., so any recommendations should be run by an expert before being acted upon.

From the exploratory phase, it seemed clear that, in this dataset, Number of Flights, Late Arrival, and Baggage Loading Time have a strong relationship with a flight delay, while Carrier does not. Our reduced candidate model makes a case that Security time, Airport Distance, Support Crew, Late Arrival, and Weather all have a significant impact on the presence of a long delay. While this is all actionable information, these variables can be split into three categories:

Projection - Understanding the impact of Number of Flights, Airport Distance, and, to an extent, available staff and weather can help create a more realistic departure time, available to a passenger before they even reach the airport. While weather and staffing can be forecast with weather reports and staffing information, they may change, so they should not be relied on as heavily as the number of flights and airport distance.

Live Updates - While this information does not do a passenger as much good in advance, a live update of their flight situation could leave them less in the dark. If the recording of variables like security time, baggage loading time, and the late arrival of a plane could be automated, they could be factored into a model of one’s choosing to provide a live update of a flight’s projected departure.

Systemic Improvements - Having established above which variables most impact the presence of a long delay, an airport or carrier may want to focus on improving those above others. For example, baggage loading time was considered a significant variable by most models, and average baggage loading time was about the same across all carriers. With this in mind, if infrastructure improvement is being budgeted for with the explicit purpose of reducing delays, one could prioritize baggage loading over a variable that did not have as strong an impact, like fueling.

Each modeling technique outlined in an earlier section has its own merits, but depending on the intended purpose, a simpler model that does not lose too much information is ideal (or even computationally necessary). For this, we would like to highlight the Decision Tree models. During some testing sessions, trees were able to reach an AUC of above .9 while only using 3-4 variables. The ability to strengthen these trees with different techniques presents a variety of possibilities (especially as machine learning techniques related to decision trees are developed).

8.b Recommendations for Future Study

Should this be replicated, there are a few things that we would recommend: First, analyzing the impact that the airports involved may have. It is impossible to say based on this data alone, but there is plenty of potential analysis to be done with the airport a plane is arriving from, departing from, and heading towards. While this information will create more computational demand, we believe that it is worth investigating.

Second, further exploration of Decision Tree techniques. Due to our aim to broadly analyze the data, we cannot fit more into this report, but there are many, often more advanced, tree algorithms that could prove useful, such as boosting, gradient boosting, or the more recently developed XGBoost.

Finally, we chose an hour delay as the response variable somewhat arbitrarily. A cutoff point needed to be established to use some of these statistical techniques, but one could use these techniques with different delay times. Our research shows that a flight late by 15 minutes or more is considered ‘delayed’, but treating that as the cutoff point would have meant there were only 92 ‘on time’ flights out of a total of nearly 3600. One may want to consider expert opinions for a third-party analysis or, for internal analysis (done by a single carrier or airport), use carrier- or airport-specific data to find an ideal cutoff point.