Final Project: Exploring Students’ Perceptions of AI in Education: Analyzing the Attitudes and Perspectives of Cybernetics Students
ETR537
Author
Andy Kmiecik
Published
November 29, 2023
Project Introduction
The arrival of Artificial Intelligence (AI) in educational settings has sparked a significant shift in the landscape of learning and teaching methodologies. This research project aims to delve into the attitudes and perceptions of undergraduate students in their 2nd and 3rd years at the Faculty of Cybernetics, Statistics, and Economic Informatics (Petrascu, 2023). The focus is on understanding their views regarding the role of AI in education, a domain that is rapidly evolving and increasingly influencing various aspects of learning and teaching processes.
AI’s integration into educational systems has been a subject of extensive research and debate. Scholars such as Chen et al. (2023) have explored the potential of AI-driven student assistants in classrooms, emphasizing the need for designing chatbots to support student success. This aligns with the broader narrative of AI as a facilitator of personalized learning experiences and an enhancer of educational outcomes (Lin et al., 2023). However, the perception of AI’s role in education varies among students, influenced by factors such as their familiarity with technology, the perceived effectiveness of AI tools, and the overall impact on their learning experience (Rahim et al., 2022).
The Faculty of Cybernetics, Statistics, and Economic Informatics, with its focus on cutting-edge technologies and data-driven approaches, provides an ideal setting to explore these perceptions. The students here are likely to have a unique perspective on AI in education, given their academic background and potential exposure to AI tools and methodologies. This research seeks to understand whether these students view AI as a beneficial tool in enhancing their educational experience or if there are reservations about its implications and effectiveness.
This study aims to explore how these perceptions align with the broader trends in AI adoption in education. As Kooli (2023) notes, the ethical implications and solutions surrounding chatbots in education and research are critical areas of concern. This aspect is particularly relevant in the context of a faculty that deals with cybernetics and informatics, where ethical considerations are paramount.
This research project is positioned to contribute valuable insights into the evolving role of AI in education, particularly from the perspective of students who are at the forefront of experiencing these changes. The findings are expected to not only reflect the current state of AI in educational settings but also provide direction for future research and implementation strategies in this rapidly advancing field.
Prepare
As part of this project, I am preparing to analyze the collected data to gain insights into students’ perceptions of AI in education. The dataset used for this research project consists of responses obtained from a survey conducted among students in the Faculty of Cybernetics, Statistics, and Economic Informatics. The survey was designed to capture various aspects of students’ opinions, beliefs, and expectations concerning AI’s impact on the educational landscape.
Variables
The dataset includes responses to a range of questions, each designed to provide insights into specific facets of students’ perceptions of AI in education. Here is an overview of the key variables included in the dataset:
Knowledge Level (Question 1): This variable represents students’ self-assessment of their knowledge about AI on a scale from 1 to 10, where 1 indicates being “not informed at all,” and 10 denotes being “extremely informed.”
Sources of AI Learning (Question 2): This variable encompasses the sources that students utilize to learn about AI, including the internet, books/scientific papers, social media, discussions with family/friends, or opting not to inform themselves about AI.
Agreement with Statements (Questions 3 and 4): These variables capture students’ agreement or disagreement with a series of statements related to AI, such as AI encouraging dehumanization, AI leading to job losses, and the perceived impact of AI on global economic growth.
Emotional Response (Question 5): This variable reflects the emotional response of students when thinking about AI, which can be curiosity, fear, indifference, or trust.
Perceived Impact Areas (Question 6): Students are asked to identify areas where they believe AI would have a significant impact, such as education, medicine, agriculture, and more.
Usefulness in Education (Question 7): This variable quantifies students’ perceptions of the usefulness of AI in the educational process on a scale from 1 to 10, where 1 represents “not useful at all,” and 10 signifies “extremely useful.”
Advantages in Teaching, Learning, and Evaluation (Questions 8, 9, and 10): These variables capture students’ opinions on the main advantages of AI in the teaching, learning, and evaluation processes.
Disadvantages in Education (Question 11): This variable identifies the main disadvantages students associate with AI in the educational process.
Demographic Information (Questions 12 to 16): These variables encompass students’ gender, year of study, major, exam performance, and GPA for their last year of study.
Data Exploration
Next steps will involve exploring this dataset, conducting data preprocessing, and preparing it for in-depth analysis. This will include tasks such as data cleaning, handling missing values, and potentially creating additional derived variables to address specific research objectives. We will also perform exploratory data analysis to uncover initial insights into students’ perceptions of AI in education.
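A quick per-column count of missing values is a typical first step for the data-cleaning task mentioned above. The snippet below is a sketch only, assuming the survey file has already been read into a data frame named data, as it is in the Wrangle section:
# Count missing values in each column before deciding how to handle them
colSums(is.na(data))

# The same check with dplyr, returned as a one-row summary
data %>% summarise(across(everything(), ~ sum(is.na(.x))))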
Research Question
How do undergraduate students in their 2nd and 3rd years at the Faculty of Cybernetics, Statistics, and Economic Informatics perceive the role of AI in education?
Load Packages and Data
To perform the analysis, I will use the following R packages:
library(tidyverse)
── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
✔ dplyr 1.1.3 ✔ readr 2.1.4
✔ forcats 1.0.0 ✔ stringr 1.5.0
✔ ggplot2 3.4.3 ✔ tibble 3.2.1
✔ lubridate 1.9.2 ✔ tidyr 1.3.0
✔ purrr 1.0.2
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter() masks stats::filter()
✖ dplyr::lag() masks stats::lag()
ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
library(ggraph)
Warning: package 'ggraph' was built under R version 4.3.2
library(dplyr)
These libraries will provide the necessary tools and functions for data manipulation, exploratory analysis, and, potentially, predictive modeling.
Wrangle
Data preparation and transformation were essential steps in preparing our dataset for comprehensive analysis in this research project focused on Students’ Perceptions of AI in Education. These steps aimed to ensure that the dataset was clean, organized, and structured appropriately to facilitate meaningful insights into students’ views on AI in education.
During the data preprocessing phase, several operations were carried out. Variable names were standardized to enhance consistency and ease of interpretation. Additionally, a careful filtering process was applied to retain only the most relevant variables, eliminating any non-essential or redundant data. This meticulous filtering ensured that our analysis would focus exclusively on the variables directly pertinent to our research questions.
In the data transformation phase, specific actions were taken to align the dataset with our research objectives. Numeric responses to questions, such as students’ self-assessment of their knowledge about AI (Question 1) and the perceived usefulness of AI in education (Question 7), were categorized to simplify interpretation and analysis. For instance, numeric scales were converted into categorical variables, allowing us to explore perceptions in broader terms.
To address certain research inquiries effectively, new variables were introduced. For example, a binary variable was created to indicate whether a student had passed all their exams (Question 15), differentiating between pass (0) and fail (1).
Responses to Question 2, which queried the sources students used to learn about AI, were merged and categorized to identify prevalent sources of AI-related information among students. While our dataset primarily consists of numeric responses, future data collection efforts may allow for sentiment analysis to be applied to non-numeric responses, such as open-ended comments or textual feedback.
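As a sketch only: if open-ended comments were collected in a future survey round (a hypothetical open_comments column that does not exist in the current dataset), a simple lexicon-based scoring with the tidytext package could look like this:
# Hypothetical example: open_comments is an assumed future column
library(tidytext)

comment_sentiment <- data %>%
  mutate(respondent = row_number()) %>%
  select(respondent, open_comments) %>%
  unnest_tokens(word, open_comments) %>%                # one word per row
  inner_join(get_sentiments("bing"), by = "word") %>%   # tag words as positive/negative
  count(respondent, sentiment)

# Overall balance of positive vs. negative words across all comments
count(comment_sentiment, sentiment, wt = n)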
Each preprocessing and transformation step was undertaken with a clear purpose, ensuring the dataset was ready for analysis. The result is a dataset that is clean, organized, and aligned with the research question, facilitating insightful exploration of students’ perceptions of AI in education.
Rows: 91 Columns: 35
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (2): Q2.AI_sources, Q6.Domains
dbl (33): ID, Q1.AI_knowledge, Q2#1.Internet, Q2#2.Books/Papers, Q2#3.Social...
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
View(Survey_AI)

# Load required libraries
library(dplyr)

# Import the dataset
data <- read.csv("Survey_AI.csv")
Variable Renaming
First, we standardize variable names to ensure consistency and ease of interpretation, renaming the raw survey columns to more descriptive and concise names.
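A minimal sketch of this renaming, assuming the raw column names shown in the column specification above (only a subset of the full mapping is shown here):
# Rename a few raw survey columns to descriptive names;
# the remaining questions follow the same pattern
data <- data %>%
  rename(
    knowledge_level        = Q1.AI_knowledge,
    sources_of_learning    = Q2.AI_sources,
    perceived_impact_areas = Q6.Domains
  )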
Categorizing Numeric Responses
Some variables are recorded on numeric scales. We categorize them to simplify analysis and interpretation. For example, the “knowledge_level” variable is binned as follows:
data <- data %>%
  mutate(
    knowledge_level_category = case_when(
      knowledge_level <= 3 ~ "Not Informed",
      knowledge_level <= 6 ~ "Somewhat Informed",
      knowledge_level <= 8 ~ "Moderately Informed",
      TRUE ~ "Very Informed"
    )
  )
Merging and Categorizing Sources of Learning
Combine and categorize responses for the “sources_of_learning” variable to identify prevalent sources of AI-related information:
data <- data %>%
  mutate(
    sources_of_learning = ifelse(
      sources_of_learning == "I don't inform myself about AI",
      "No Information",
      sources_of_learning
    ),
    sources_of_learning = factor(sources_of_learning)
  )
Creating a Binary Variable for Exam Success
Create a binary variable “exam_success” to differentiate between students who passed (0) and those who didn’t (1):
library(dplyr)

data <- data %>%
  mutate(exam_success = ifelse(exam_success == "Yes", 0, 1))
Data Filtering
Filter the dataset to exclude any non-essential or redundant variables that may not directly contribute to answering the research question:
data <- data %>%
  select(
    knowledge_level_category,
    sources_of_learning,
    emotional_response,
    perceived_impact_areas,
    usefulness_in_education,
    advantages_teaching,
    advantages_learning,
    advantages_evaluation,
    disadvantages_education,
    gender,
    year_of_study,
    major,
    exam_success,
    gpa_last_year
  )
Analyze
Descriptive Statistics
Descriptive statistics help summarize and understand the basic characteristics of the dataset.
# Summary statistics for numeric variables
summary(data)
knowledge_level_category
Length:91
Class :character
Mode :character
sources_of_learning
Internet :25
Internet;Social media :14
Internet;Books/Scientific papers (physical/online format) :10
Internet;Books/Scientific papers (physical/online format);Social media: 9
Internet;Social media;Discussions with family/friends : 7
No Information : 6
(Other) :20
emotional_response perceived_impact_areas usefulness_in_education
Min. :1.000 Length:91 Min. : 2.00
1st Qu.:1.000 Class :character 1st Qu.: 6.00
Median :1.000 Mode :character Median : 8.00
Mean :1.582 Mean : 7.44
3rd Qu.:2.000 3rd Qu.: 9.00
Max. :4.000 Max. :10.00
advantages_teaching advantages_learning advantages_evaluation
Min. :1.000 Min. :1.000 Min. :1.000
1st Qu.:1.000 1st Qu.:1.000 1st Qu.:2.000
Median :2.000 Median :2.000 Median :2.000
Mean :1.923 Mean :1.879 Mean :2.253
3rd Qu.:3.000 3rd Qu.:2.000 3rd Qu.:3.000
Max. :3.000 Max. :3.000 Max. :3.000
disadvantages_education gender year_of_study major
Min. :1.000 Min. :1.000 Min. :1.000 Min. :1.000
1st Qu.:1.000 1st Qu.:1.000 1st Qu.:1.000 1st Qu.:1.000
Median :2.000 Median :1.000 Median :2.000 Median :2.000
Mean :2.099 Mean :1.352 Mean :1.626 Mean :1.923
3rd Qu.:3.000 3rd Qu.:2.000 3rd Qu.:2.000 3rd Qu.:2.500
Max. :4.000 Max. :2.000 Max. :2.000 Max. :3.000
exam_success gpa_last_year
Min. :1 Min. :5.200
1st Qu.:1 1st Qu.:7.200
Median :1 Median :7.700
Mean :1 Mean :7.799
3rd Qu.:1 3rd Qu.:8.700
Max. :1 Max. :9.700
# Proportion of students who passed all exams
pass_proportion <- mean(data$exam_success == 0)
The descriptive statistics provide an initial picture of the dataset. Because students’ self-assessed knowledge about artificial intelligence (AI) was recoded into the categorical variable knowledge_level_category, the numeric summary reports it only as a character column; the category counts examined in the next section show that most students place themselves in the “Somewhat Informed” or “Moderately Informed” range, with only a small group considering themselves very well informed about AI.
Regarding the sources used to learn about AI, the internet emerges as the most prevalent source, followed by combinations of internet with books, social media, and discussions. Some students indicated that they do not actively seek information about AI.
When it comes to students’ emotional response to AI (coded 1 = curiosity, 2 = fear, 3 = indifference, 4 = trust), the mean is approximately 1.582 and the median is 1. This indicates that curiosity is by far the most common reaction when students think about AI.
The perceived impact areas of AI are recorded as a multi-select text variable, so the summary provides no numeric score; students identify domains such as education, medicine, and agriculture as areas where they expect AI to have a significant impact.
Students’ ratings of the usefulness of AI in education are high, with a mean of approximately 7.44 and a median of 8 on the 1-to-10 scale. This suggests that most students see substantial utility in AI for their education.
The responses about the main advantages of AI in the teaching, learning, and evaluation processes (coded 1–3) have means of approximately 1.923, 1.879, and 2.253, respectively. This indicates that students recognize some advantages, though their responses differ across the three processes.
Students also acknowledge some disadvantages of AI in education, as indicated by a mean of approximately 2.099. However, these disadvantages do not appear to be highly prominent in their perceptions.
The dataset also includes information about students’ gender, year of study, major, exam success, and GPA. The majority of students did not pass all their exams, and on average, students report a relatively high GPA of approximately 7.799 for their last year of study.
These descriptive statistics offer an initial understanding of the dataset and set the stage for further analysis to explore relationships and patterns in students’ perceptions of AI in education.
Exploratory Data Analysis (EDA)
EDA involves visually examining the data to uncover underlying patterns, spot anomalies, and test hypotheses.
library(ggplot2)

# Bar plot for knowledge_level_category
ggplot(data, aes(x = knowledge_level_category)) +
  geom_bar(fill = "blue", color = "black") +
  labs(title = "Knowledge Level Category", x = "Category", y = "Count")

# Bar plot for emotional_response
ggplot(data, aes(x = as.factor(emotional_response))) +
  geom_bar(fill = "orange", color = "black") +
  labs(title = "Emotional Response to AI", x = "Emotional Response", y = "Count")

# Bar plot for gender
ggplot(data, aes(x = as.factor(gender))) +
  geom_bar(fill = "purple", color = "black") +
  scale_x_discrete(labels = c("Male", "Female")) +
  labs(title = "Gender Distribution", x = "Gender", y = "Count")

# Box plot for usefulness_in_education
ggplot(data, aes(y = usefulness_in_education, x = 1)) +
  geom_boxplot(fill = "lightblue") +
  labs(title = "Usefulness in Education", x = "", y = "Rating")
Interpretation
In this study, the bar plots reveal insightful trends. The largest group of students rate themselves as “Somewhat Informed” (45 students) about AI, indicating a basic or average understanding of the subject, while the “Moderately Informed” students (30) show a fair level of familiarity without deep expertise. In contrast, only a small group considers themselves “Very Informed” (7 students), and a similarly small number are “Not Informed” (9 students). This distribution highlights a general awareness of AI among students, with varying degrees of knowledge depth.
Regarding their emotional response to AI, the majority of students (63) exhibit curiosity (Response 1) as their primary emotion. This suggests a strong, collective interest and eagerness to learn more about AI among the students. On the other hand, a smaller number of students experience fear (12 students, Response 2), indicating concerns or apprehensions about AI. Indifference (Response 3) is noted among 10 students, reflecting a neutral or dispassionate stance toward AI. Trust (Response 4) is the least common response, with only 6 students identifying with it, indicating that while some students are confident in the benefits and reliability of AI, this sentiment is not as widespread as curiosity. This distribution of emotional responses highlights a dominant feeling of curiosity, coupled with varied levels of apprehension, neutrality, and trust towards AI among the students.
The gender distribution in the dataset shows a higher number of male (59) compared to female (32) respondents, indicating a gender disparity that could reflect broader trends in the field of technology and AI education.
The box plot for the “Usefulness in Education” variable provides additional insight. The median value, indicated by the black line in the box, is at 8, suggesting that the central tendency of students’ ratings towards the usefulness of AI in education is high. The box, representing the interquartile range, extends from 6 to 9, indicating that most students’ ratings fall within this range. This high median rating and the interquartile range imply that a majority of the students perceive AI as quite useful in educational settings, although there’s some variation in this perception.
These findings portray a student body that is generally aware and somewhat knowledgeable about AI, with curiosity as the predominant emotional response towards the technology, a discernible gender skew, and a positive perception of AI’s usefulness in education. This provides a foundation for further exploration into how AI education can be tailored to address varying levels of knowledge, emotional responses, and gender imbalances within this field.
Correlation Analysis
library(corrplot)
Warning: package 'corrplot' was built under R version 4.3.2
corrplot 0.92 loaded
# Convert ordinal variables to numeric
data$usefulness_in_education <- as.numeric(as.character(data$usefulness_in_education))
data$emotional_response <- as.numeric(as.character(data$emotional_response))

# Select relevant variables for correlation analysis
selected_data <- data %>%
  select(usefulness_in_education, emotional_response, gpa_last_year)

# Compute the correlation matrix
correlation_matrix <- cor(selected_data, use = "complete.obs")

# Display the correlation matrix
print(correlation_matrix)
# Visualize the correlation matrix
corrplot(correlation_matrix, method = "circle")
Interpretation
The correlation matrix provides insights into the relationships between students’ perceptions of the usefulness of AI in education, their emotional responses towards AI, and their academic performance, as indicated by their GPA. The analysis reveals a very weak negative correlation (-0.06) between the perceived usefulness of AI in education and emotional responses. This indicates that students’ views on how useful AI is for their education are not significantly influenced by their emotional reactions towards AI, such as curiosity, fear, indifference, or trust.
There is a slightly stronger, but still weak, positive correlation (0.277) between the perceived usefulness of AI in education and students’ GPAs. This suggests that students with higher GPAs tend to see AI as slightly more beneficial in their education, but the correlation is not robust enough to assert a strong relationship.
The negative correlation (-0.127) between emotional response and GPA is also weak, hinting at a slight tendency for students with lower GPAs to have more pronounced emotional responses towards AI, whether these are positive or negative.
Overall, these correlations indicate that there are some relationships between how students perceive AI’s usefulness in education, their emotional responses to AI, and their academic performance. However, these relationships are relatively weak, suggesting that perceptions of AI’s utility in education and emotional reactions towards AI are largely independent of academic success as measured by GPA. It’s important to remember that these correlations do not imply causation, and other factors not measured in the study might influence these relationships.
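As an optional follow-up not reported in the original analysis, base R’s cor.test() could be used to check whether, for instance, the usefulness–GPA correlation differs significantly from zero given the sample of 91 students:
# Significance test for the Pearson correlation between perceived usefulness
# of AI in education and last-year GPA
cor.test(selected_data$usefulness_in_education,
         selected_data$gpa_last_year,
         method = "pearson")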
Communication
In exploring how second- and third-year students at the Faculty of Cybernetics, Statistics, and Economic Informatics perceive the role of AI in education, the study reveals several insights. A majority of these students, predominantly male, possess a basic to moderate understanding of AI, as indicated by 45 students rating themselves as “Somewhat Informed” and 30 as “Moderately Informed.” This suggests a general awareness of AI, with varying depths of knowledge.
Emotionally, a strong sense of curiosity (63 students) emerges as the dominant response towards AI, overshadowing feelings of fear, indifference, and trust, which are less prevalent. This curiosity reflects a collective eagerness to engage with AI, potentially shaping their learning experiences and expectations. Regarding the perceived usefulness of AI in their education, students generally regard it favorably, with a high median rating of 8 out of 10. This positive perception, with some variation, underscores AI’s perceived benefits in enhancing educational experiences.
However, correlation analyses reveal that these perceptions and emotional responses are only weakly related to academic performance, as measured by GPA. This independence suggests that students’ attitudes towards AI and its role in education are shaped by factors beyond academic achievement. Overall, the study highlights that students at the Faculty recognize the importance and potential of AI in education, demonstrated by their curiosity and their high usefulness ratings, despite their varied levels of knowledge and the gender imbalance in the sample. These insights pave the way for tailored educational strategies and initiatives to deepen AI understanding and address the diverse emotional and knowledge-based needs of students.
References
Chen, Y., Jensen, S., Albert, L. J., Gupta, S., & Lee, T. (2023). Artificial intelligence (AI) student assistants in the classroom: Designing chatbots to support student success. Information Systems Frontiers, 25(1), 161-182. https://link.springer.com/article/10.1007/s10796-022-10291-4
Kooli, C. (2023). Chatbots in education and research: A critical examination of ethical implications and solutions. Sustainability, 15(7), 5614. https://doi.org/10.3390/su15075614
Lin, C. C., Huang, A. Y., & Yang, S. J. (2023). A review of AI-driven conversational chatbots implementation methodologies and challenges (1999–2022). Sustainability, 15(5), 4012. https://www.mdpi.com/2071-1050/15/5/4012
Rahim, N. I. M., Iahad, N. A., Yusof, A. F., & Al-Sharafi, M. A. (2022). AI-based chatbots adoption model for higher-education institutions: A hybrid PLS-SEM-neural network modelling approach. Sustainability, 14(19), 12726.