Content-Based Computer Simulation of a Networking Course: An Assessment

A Study by Giti Javidi, Ph.D. and Ehsan Sheybani, Ph.D. from Virginia State University


Introduction

With the current advancements and prevalence of technology, schools and other organizations have provided alternative platforms for delivering academic courses and services to students. As a result, several studies in computer science and engineering have attempted to determine the effectiveness of learning through simulated laboratories in virtual experiments, although such studies remain relatively few, mainly because of the many challenges involved in designing rigorous experiments. One primary example is this assessment of the content-based computer simulation of a networking course by Javidi and Sheybani, which examines the use of the World Wide Web as a medium for delivering simulated virtual laboratory learning to students in the computer science and engineering field of study. According to them, “[t]he purpose of this study would be to investigate the effectiveness of simulated labs as virtual laboratory and present the results. Specifically, this study examines whether computer simulations are as effective as physical laboratory activities in teaching college-level computer science and engineering students about the concepts of signal transmission, modulation, and demodulation” (64).

In 2016, researchers identified that three-quarters of two- and four-year colleges offered distance-learning opportunities. With the recent pandemic, however, most colleges and universities worldwide were forced to conduct classes online as well. This makes the aforementioned study even more relevant and significant, as several specialized fields of education–mostly engineering- and medicine-related–remain far from ready to transition online. For instance, Javidi and Sheybani mention how laboratory sessions are indispensable for engineering programs; despite access to alternatives such as home kits or computer-simulated laboratories, there is still little to no evidence on the effectiveness of such methods (64). They also identify three reasons why there appear to be few studies derived from statistically significant data sets, despite efforts to enhance engineering and computer science education. First, computer science and engineering classes often lack the number of students needed to form control or experimental groups that would yield such results. Second, they believe that “few engineering professors are familiar with the complexities and ethical issues involved in human subject research.” Last, given the ongoing innovation in computer science and engineering, they note the difficulty of conducting control-group studies because the environment naturally grows and changes after the pre-planning phase.

Given these considerations, their study used both qualitative and quantitative methods to determine the effectiveness of an alternative to physical laboratory activities in a communication systems course. In addition, they examined the effects that computer simulations have on a) students’ knowledge retention after a period of time and b) students’ attitudes toward the use of the simulation as a substitute for physical activities. Below is a short description of their findings in relation to the purpose of the study:

“The findings revealed significant differences, in favor of the simulation group, between the two groups on both the conceptual post-test and the follow-up test. The findings also revealed a significant correlation between simulation groups’ attitude toward the simulation program and their post-test scores. Moreover, there was a significant difference between the two groups on their attitude toward their laboratory experience in favor of the simulation group. The qualitative research uncovered several issues not explored by the quantitative research. It was concluded that incorporating the recommendations acquired from the qualitative research, especially elements of incorporating hardware experience to avoid lack of hands-on skills, into the laboratory pedagogy should help improve students’ experience regardless of the environment in which the laboratory is conducted” (65).

In this report, we examine the quantitative methodology used in the research and why it was appropriate for this study, present the results of hypothesis tests on whether simulated virtual learning provides a better learning environment than a traditional laboratory, and discuss the results and possible reasons behind them.

Methodology

As previously mentioned, Javidi and Sheybani’s study combines quantitative and qualitative approaches in its research methods. However, this report will focus only on the quantitative data in order to generate results through hypothesis testing. The quantitative study examined the differences between the scores of the two groups on a post-test as well as on a follow-up measure, and it also examined the difference in laboratory completion time. The physical lab group performed communication systems laboratory exercises in a traditional hardware laboratory, while the simulation group used simulation software to perform the same exercises.

In the study, the independent variable is the method of instruction, with two levels: computer simulation and physical laboratory. The dependent variables, on the other hand, are the post-test scores, follow-up scores, attitude scores, and laboratory completion time. The post-test was made up of problem-oriented items along with a few multiple-choice questions. The study focused on signal modulation and demodulation, specifically the understanding of speech signal modulation and demodulation, to give the students a perspective on how communication systems–including radios and televisions–work. All sections met once a week, on two different days, for a period of four hours. First, all participants were given a brief lecture on the topic of FM and AM modulation and demodulation. The students were also incentivized with 5 extra credit points for participating in the study. Each student was then assigned to one of the two groups according to the last two digits of their student ID. The physical laboratory group gathered in a hardware laboratory, while the simulation group performed their experiments in the computer lab. Each group was then given a pre-lab and two laboratory experiments specifically designed for that group. At the completion of the experiments, the students were asked to take a one-hour Conceptual Test, and both groups were asked to complete the attitude survey questionnaire. Finally, the physical laboratory group was dismissed, while the simulation group remained longer to complete the qualitative survey questionnaire. A few days later, three students from each group were randomly selected to participate in a group interview.

To test knowledge retention, all 12 post-test questions were integrated into the midterm exam three weeks after the first treatment. In addition, the Conceptual Test was administered twice to each student in the sample: during the 5th week of the semester (after the experimental treatment) and during the 8th week (mid-term exam week). In the end, each student’s test was graded by two independent instructors: the course instructor and another instructor who had not taken part in the study or its methodology. To prevent bias, the researchers made sure that neither instructor was aware that the other was also grading. Finally, the alpha reliability of the scores was computed to examine the internal consistency of the grading.
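
As an illustration of the consistency check described above, below is a minimal R sketch of an alpha (Cronbach's alpha) reliability calculation for two graders. The scores and variable names are hypothetical and are not the study's data.

# Hypothetical scores assigned by the two independent graders to five tests
grader1 <- c(12, 15, 9, 14, 11)
grader2 <- c(13, 14, 10, 15, 11)

k <- 2                                        # number of graders (items)
sum_item_var <- var(grader1) + var(grader2)   # sum of per-grader variances
total_var <- var(grader1 + grader2)           # variance of the summed scores
alpha <- (k / (k - 1)) * (1 - sum_item_var / total_var)
alpha                                         # values near 1 indicate consistent grading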

Results

The results obtained by the researchers consist of the test scores of both the physical laboratory students and the simulated laboratory students. The descriptive statistics include the lowest score, highest score, mean, standard deviation, skewness, and kurtosis of each group; given the nature of the hypothesis tests below, only the mean and standard deviation of both samples will be used.

Table X.Y: Descriptive Statistics for the Post-Test and Follow-up Test (mean and standard deviation; n = 40 per group)

Group                    Post-test mean (SD)    Follow-up mean (SD)
Physical laboratory      13.78 (1.14)           13.35 (1.15)
Simulation               31.65 (2.68)           25.50 (7.57)

The results of the experiment seem obvious from the two distributions. The initial test yielded a large difference in average scores, with the lowest scores of the simulation group higher than the highest scores of the physical group; this is evident from the two curves not intersecting. To determine whether there truly is a significant difference, the null and alternative hypotheses must be established: \[H_o: \mu_a = \mu_b\] \[H_a: \mu_a < \mu_b\] where \(\mu_a\) is the mean score of the physical laboratory group and \(\mu_b\) is the mean score of the simulation group.

The null hypothesis states that there is no significant difference between the two groups’ scores, and the alternative hypothesis states that the physical lab students scored lower than the simulated lab students. With this, a two-sample z-test was conducted to compare the two mean scores at a significance level of α = 0.01, a stringent threshold for rejecting the null hypothesis. The test statistic is shown below: \[z=\frac{\bar{x}_a-\bar{x}_b}{\sqrt{\frac{s_a^2}{n_a}+\frac{s_b^2}{n_b}}}\]

Inputting the values, the resulting equation reads as follows: \[z=\frac{13.78-31.65}{\sqrt{\frac{1.14^2}{40}+\frac{2.68^2}{40}}}\approx-38.81\] Plugging this value into the pnorm function of R, the resulting probability is seen below:

pnorm(-38.81)
## [1] 0

This z-value is extremely low, and the resulting probability is effectively 0, far below the significance level of α = 0.01. The null hypothesis is therefore rejected in favor of the alternative hypothesis. This provides ample evidence that students who conducted their laboratory through a simulation performed better than students who took the regular physical laboratory classes.
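
The post-test comparison above can be reproduced in R from the summary statistics alone. The sketch below uses a small helper function of our own (z_from_summary); it is not code from the original study.

# Two-sample z statistic from summary statistics (our own helper)
z_from_summary <- function(xbar_a, s_a, n_a, xbar_b, s_b, n_b) {
  (xbar_a - xbar_b) / sqrt(s_a^2 / n_a + s_b^2 / n_b)
}

# Post-test: physical laboratory (a) vs. simulation (b)
z_post <- z_from_summary(13.78, 1.14, 40, 31.65, 2.68, 40)
z_post        # roughly -38.8
pnorm(z_post) # left-tailed p-value; underflows to 0, far below alpha = 0.01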

The follow-up test, however, yielded different results than the initial test. The students who took the simulated laboratory classes ended up with lower scores than on their initial test, while the students who took the physical laboratory classes were more consistent with their scores. Comparing the two, there is still a large difference between the scores of the simulated lab class and the physical lab class. Plugging in the values obtained from the follow-up test and using the same hypotheses, the resulting equation reads as follows: \[z=\frac{13.35-25.50}{\sqrt{\frac{1.15^2}{40}+\frac{7.57^2}{40}}}=-10.04\]

Applying the pnorm function once again, the probability is seen below:

pnorm(-10.04)
## [1] 5.083708e-24

The test yielded a smaller difference between the two means, but the resulting probability is still much lower than the significance level of α = 0.01. The null hypothesis is therefore rejected once again, and the alternative hypothesis \(\mu_a < \mu_b\) is accepted. This shows that there is still a significant difference between the simulated lab class and the physical lab class.
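
Using the same helper sketched above, the follow-up comparison reads:

# Follow-up test: physical laboratory (a) vs. simulation (b)
z_follow <- z_from_summary(13.35, 1.15, 40, 25.50, 7.57, 40)
z_follow        # roughly -10.04
pnorm(z_follow) # about 5.1e-24, still far below alpha = 0.01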

The most concerning aspect of the simulated lab class is its retention rate, as there is an obvious decrease in the simulated lab class’s test scores. To test retention, hypothesis testing will be used once again, this time comparing each group’s initial scores with its follow-up scores.

Upon initial inspection of the graphs, the physical lab class seems very consistent, while the simulated lab class varies quite a bit. To test retention, the two tests will be compared with each other using hypothesis testing: \[H_o: \mu_a = \mu_b\] \[H_a: \mu_a > \mu_b\]

Here, \(H_o\) states that there is no significant difference, meaning that retention is high, while \(H_a\) states that there is a decrease in score, indicating lower retention. Using the same equation as before, the values for the physical laboratory group are inputted, where \(\mu_a\) is the mean of the initial (post) test and \(\mu_b\) is the mean of the follow-up test.

\[z=\frac{13.78-13.35}{\sqrt{\frac{1.15^2}{40}+\frac{1.14^2}{40}}}=1.68\]

Then inputting this z-value to the pnorm function:

pnorm(1.68, lower.tail=FALSE)
## [1] 0.04647866

The resulting probability is 0.046. This value is greater than the significance level of α = 0.01, since \(0.046 > 0.01\). The test therefore fails to reject the null hypothesis, meaning that there is no significant difference between the initial test and the follow-up test for the physical lab students. Little to no difference means that these students’ retention is very high.
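
The same helper expresses the retention test for the physical laboratory group; because the alternative here is \(\mu_a > \mu_b\), the upper tail of the normal distribution is used.

# Retention, physical group: initial post-test (a) vs. follow-up (b)
z_phys <- z_from_summary(13.78, 1.14, 40, 13.35, 1.15, 40)
z_phys                            # roughly 1.68
pnorm(z_phys, lower.tail = FALSE) # about 0.046 > 0.01, fail to reject H_o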


The next test compares the initial and follow-up tests of the simulated lab students. Using the same equation and hypotheses, the collected values are inputted, where \(\mu_a\) is the mean of the initial (post) test and \(\mu_b\) is the mean of the follow-up test.

\[z=\frac{31.65-25.50}{\sqrt{\frac{2.68^2}{40}+\frac{7.57^2}{40}}}=4.84\]

Using the pnorm function once again:

pnorm(4.84, lower.tail=FALSE)
## [1] 6.491956e-07

The resulting value is much lower than the significance level of α = 0.01, so the null hypothesis is rejected and the alternative hypothesis is accepted: there is a significant difference between the initial and follow-up tests. The follow-up scores are significantly lower than the initial scores, which suggests that the simulated lab class may perform better on tests at first but ultimately retains less of the material.
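
The corresponding calculation for the simulation group, again using the helper sketched earlier:

# Retention, simulation group: initial post-test (a) vs. follow-up (b)
z_sim <- z_from_summary(31.65, 2.68, 40, 25.50, 7.57, 40)
z_sim                            # roughly 4.84
pnorm(z_sim, lower.tail = FALSE) # about 6.5e-07 < 0.01, reject H_o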


Discussion

Given the current state of the world, the research reviewed here may be very useful for schools operating online, as it would encourage the use of a simulated laboratory rather than a physical one. From the data and the statistical tests, it can be seen that there is initially a significant advantage to using the simulation compared to the traditional physical setup. However, as shown above, when testing for retention there is a dip in the scores of the students who took the simulated laboratory. Although their scores remain higher than those of the students who performed the physical exercises, the downward trend suggests that retention could eventually become enough of a problem that the simulated group’s scores fall below those of the physical group. This is also a limitation of the study: after the first follow-up there is no data on how the students’ scores continue to change.

This poses a significant hurdle for colleges considering the simulated method today, because students are being trained in college for a profession they will practice in the future, and the knowledge gained in college must be retained for many years rather than dwindle. While the simulation group gives good results at first, there is reason for concern about the time to come, whereas the physical group gave lower results but a very steady retention rate. The main question for institutions that would operate using this method is therefore whether the high test scores are worth the sacrifice in retention. It does not have to be either-or, however: a hybrid of both techniques could be applied and researched to see whether students can obtain the advantages of both methods. As of now, there is no data or evidence to say which method is better, high scores with low retention or lower scores with good retention.


Conclusion

From the study and the comparison of the two setups, it can be concluded that simulations are effective at teaching students the necessary skills in a networking course. At the college level, these skills are especially important because they are typically what students will apply in their careers. It is clear that the two groups, using physical and simulated setups, scored differently from each other on both tests. The students using the simulated setup showed higher proficiency than the other group; although they did not retain the material as well, their mean scores and range show a better grasp of the topic. Given the charts and computations, this method can be recommended to students and faculty for the online setting.


References:

Javidi, G., & Sheybani, E. (2008). Content-based computer simulation of a networking course: An assessment. Journal of Computers, 3(3). https://doi.org/10.4304/jcp.3.3.64-72

Montgomery, D. C., & Runger, G. C. (2019). Statistical inference for two samples. In Applied Statistics and Probability for Engineers (7th ed., pp. 262–279). Wiley.