
This study explores the use of the generative artificial intelligence (GenAI) tool ChatGPT in higher education. Amidst the potential benefits and the risk of misuse, this research investigates the tool's role as a classroom aid and its impact on learning outcomes and experiences. Three case studies involving undergraduate and postgraduate ICT students were conducted. Findings revealed a positive perception of ChatGPT as a useful and enjoyable learning resource. Most students indicated a willingness to use such AI tools in the future. Additionally, the study suggested improved performance in functionality, user flow, and content comprehension among students using ChatGPT, compared to those relying solely on traditional search engines.
Citation: Mahmoud Elkhodr, Ergun Gide, Robert Wu, Omar Darwish. ICT students' perceptions towards ChatGPT: An experimental reflective lab analysis[J]. STEM Education, 2023, 3(2): 70-88. doi: 10.3934/steme.2023006
The rapid development of artificial intelligence (AI) technologies has created numerous opportunities and challenges in different sectors, including ICT education. ChatGPT, a powerful generative AI (GenAI) tool, has attracted considerable attention. Since ChatGPT's emergence, many other AI-based tools have proliferated. Some of these tools can generate clear and well-structured answers, while others can summarise information, generate images, and solve complex questions and scenarios, offering a range of possibilities in various domains. Despite their advantages, the use of GenAI tools like ChatGPT in educational settings has sparked debate among educators, researchers, and policymakers regarding their potential benefits and drawbacks [14]. This is mainly because, as some educators have observed, these GenAI tools can be exploited by students who intend to cheat.
Other educators have also raised concerns that AI tools could misinform students by providing them with incorrect information, which has led to calls for banning or restricting their use in schools [6]. On the other hand, supporters of ChatGPT argue that it presents an opportunity to teach students the responsible use of AI tools in an ethical and effective manner, as well as to train them to engage with these tools in a creative and critical way [6]. This contrast reflects the wider debate about the integration of AI tools in the education sector, as stakeholders struggle to balance the possible benefits and risks. Existing research on GenAI like ChatGPT has demonstrated its capacity to support learning outcomes similar to those achieved with human tutors, providing personalised feedback and instruction [1]. Teachers have been found to use ChatGPT more frequently than students for various purposes, such as generating lesson plans, tests, and sample solutions [6]. A survey reported in [8] revealed that 22% of the surveyed students used ChatGPT for coursework assistance on a weekly basis, and that 73% of the surveyed teachers believed ChatGPT had improved students' performance. However, other studies, such as the one reported in [17], which surveyed more than 900 instructors, found mixed views on whether ChatGPT is a threat or an opportunity in educational settings. Many of the surveyed instructors indicated that they had not yet developed AI guidelines for their classrooms.
Concerns regarding the potential misuse of ChatGPT were not unfounded. For instance, in the survey conducted by Study.com [17], 89% of students admitted to using ChatGPT to assist them with their homework, and 48% used it to assist them with their at-home tests or quizzes. Interestingly, 74% of the surveyed students supported banning ChatGPT in schools. This suggests that while students recognise the potential benefits of using ChatGPT, they also acknowledge the potential for abuse or dependency. Given that a significant percentage of students believed that ChatGPT should be banned, educators need to take the initiative in offering guidance and guidelines on the effective and ethical use of AI in educational settings. In addition to its potential role in the classroom, ChatGPT has been studied for its applications in academic research. For example, the study reported in [12] demonstrated that ChatGPT can code open-text responses from the British Election Study with a 92% accuracy rate compared to a human coder. Other studies, such as [11], found that ChatGPT could revolutionise academic research by streamlining data analysis, enabling more open-text questions, and transforming scholarly writing and communication.
Nonetheless, it is noted that in most of the studies reported in the literature, students were not asked whether they were using ChatGPT as a tutor or as an assistive technology. Therefore, given the diverse perspectives and research findings on the use of ChatGPT in education, it is crucial to conduct further empirical investigations to better understand the potential impact of this technology on teaching and learning.
To this end, this study aimed to examine the effectiveness of ChatGPT as an assistive technology at both undergraduate (UG) and postgraduate (PG) ICT levels. A controlled experiment was conducted with three case studies. The first and second case studies involved UG students studying Human-Computer Interaction (HCI) as part of their bachelor's degree. The third case study involved PG students studying a similar design unit at the master's level. ICT students' backgrounds naturally align with the context of this study: these groups of users have previous experience with digital tools, and possibly other assistive technologies, making them suitable candidates for offering firsthand reflections and insights into the effectiveness of ChatGPT in the classroom. Because UG students are earlier in their learning journey, the study aimed to gauge their adaptations and reactions while they were still forming their self-learning skills. PG students, by contrast, are generally more expert users of technology than UG students. They are therefore more likely to have previous experience and familiarity with AI concepts and to be better at researching answers to tutorial questions, offering valuable insights into the effectiveness of ChatGPT from an advanced learning perspective.
In each case study, students were divided into two groups and assigned tutorial exercises. The first group was allowed to use ChatGPT to assist them with the tutorial exercises. The other group was prohibited from using ChatGPT or any other similar AI tools and was instructed to rely only on traditional search engines and lecture materials. The groups then swapped. The tutorial also included a reflective exercise where students provided feedback and reflections on their experiences. The instructors in all case studies observed the students and assessed their performance using qualitative observations. They also collected quantitative data by marking the students' submissions for both tasks (with and without the aid of ChatGPT) using a rubric designed specifically for the tutorial. The aim was to measure the impact of ChatGPT on students' learning outcomes, performance, and efficiency, as well as their satisfaction and perceptions.
This study aims to contribute to the ongoing research on the integration of AI technologies like ChatGPT in educational settings. The findings may provide some valuable insights for educators, researchers, and policymakers who seek to leverage the benefits of AI for enhancing teaching and learning experiences while addressing potential risks and challenges.
The rest of this paper is structured as follows: Section 2 provides a summary of the related literature. The methodology of the research is provided in Section 3. Section 4 discusses the results and provides a summary of the findings and their implications. Section 5 discusses the limitations of this work and future directions, while Section 6 provides the concluding remarks.
GenAI tools in general, including ChatGPT, have been the subject of numerous studies that explore their potential and challenges in different contexts [8,9,10], including education [1]. Several studies have examined the use of AI in a broader sense, highlighting the advantages, challenges, and ethical considerations involved in adopting GenAI technologies in different settings [15,19]. For instance, a case study on traffic safety evaluated ChatGPT's ability to prepare manuscripts for publication [10]. The study found a significant disparity between human-generated and ChatGPT-generated introductions. The findings imply that the use of GenAI in scientific writing requires careful consideration and understanding. In other areas, such as healthcare, ChatGPT's efficiency was also put to the test. For instance, Jeblick et al. [7] reported a study in which ChatGPT was used to simplify radiology reports for non-experts. While most radiologists agreed that the simplified reports were correct and not potentially harmful to patients, the study found instances where ChatGPT made incorrect statements and missed key medical findings. The study concluded that further research is needed to ensure the safe and responsible use of ChatGPT in critical settings such as healthcare.
In the context of education, the opportunities offered by GenAI in generating assessment feedback, particularly for non-traditional students, were explored in [19]. The results also emphasised the need to address the ethical issues pertaining to the use of GenAI. The research raised several other practical issues that require attention, such as data protection, transparency, accountability, accessibility, and inclusivity, and argued that these issues are essential to consider before adopting GenAI as a tool for writing assessment feedback. ChatGPT was also found to support personalised and interactive learning and formative assessment practices in [3]. The study in [4] highlighted the potential challenges associated with the use of GenAI tools like ChatGPT, including incorrect information, biases, and privacy issues. To address these issues, the study in [1] called for collaboration among educators, policymakers, and researchers to ensure the safe and effective use of ChatGPT in educational settings.
Additionally, a study examining the use of ChatGPT in teaching and learning English as a foreign language (EFL) found that ChatGPT offered major opportunities for teachers and education institutes to improve second/foreign language teaching, while also providing researchers with an array of research opportunities, especially towards a more personalised learning experience [5]. Another study analysed ChatGPT's potential for enhancing individual communication and business writing skills [2] and suggested strategies to address the risks associated with using ChatGPT, such as plagiarism and unlearning. In [13], ChatGPT-generated and human tutor-generated algebra hints were compared and evaluated. The results demonstrated that although both produced positive learning gains, only the gains from the human tutor hints were statistically significant. Other studies examined the benefits of employing ChatGPT as a tutor in the classroom rather than seeing it as a competitor to humans. For instance, the study reported in [9] suggests that ChatGPT can create personalised learning experiences for students by tailoring the content and pace of the material based on their needs and abilities. It can also assist students with homework or studying for tests by providing instant responses and resources to help them understand the material [18]. This is valuable, as generating interactive quizzes and polls has been found to improve students' engagement and participation [16].
In line with these efforts, this study attempts to add to the growing literature on the use of ChatGPT in educational settings. Instead of viewing GenAI as a threat to humans, this research seeks to evaluate the effectiveness of employing ChatGPT as an assistive technology tool in the classroom through three case studies conducted on ICT students.
A practical tutorial experiment was conducted to investigate the impact of using ChatGPT as an assistive technology in the classroom compared to using search engines like Google. The experiment was carried out across three case studies. The first case study involved students studying HCI on campus (face-to-face teaching) at Central Queensland University (CQUniversity Australia) at the UG level. The second case study was conducted on a similar cohort of students enrolled in the same unit as distance (online) students. The third case study involved students studying a similar design unit at the PG level, also at CQUniversity. The tutorial experiment used the same setup, including the same exercises, across the three case studies.
The goal was to determine if ChatGPT could enhance UG and PG ICT students' learning outcomes and improve their efficiency. The experiment also attempted to examine whether there were variances in the perceptions and performance gains across UG and PG students. The study employed both qualitative and quantitative research approaches. The participants of this study were drawn from three different case studies:
● Case Study 1: 15 first-year UG ICT students from the Bachelor of IT (Information Technology) program at CQUniversity Australia.
● Case Study 2: 18 first-year UG ICT students (distance/online students) from the Bachelor of IT program at CQUniversity Australia.
● Case Study 3: 19 PG ICT students enrolled in the Master of IT program at CQUniversity Australia, with prior bachelor's degrees in IT or a related field.
In each case study, students were divided into two groups and assigned tutorial exercises. The first group was allowed to use ChatGPT to assist them with the exercises. The second group was prohibited from using ChatGPT or any other AI tools and was instructed to rely only on traditional search engines and lecture materials. Students were given two tutorial tasks to complete, which required them to analyse a given case study. They were then asked to develop personas and draw wireframes (low-fidelity prototypes) for the given scenario. The groups then swapped, so that those who were previously not allowed to use ChatGPT were now permitted to do so, while the group that had access to ChatGPT was asked to cease using it. The decision to swap groups, alternating between using ChatGPT and not using it, was a crucial element of our experimental design. This approach enabled us to observe the performance of the same group of students under both conditions, thereby providing a more robust comparison of learning outcomes. By assessing the same group's performance with and without the use of ChatGPT, we were able to better isolate the impact of the AI tool on students' learning and tutorial task performance. After completing all the tutorial technical exercises, students were asked to complete a reflection activity. The reflection activity consisted of the following prompts:
● Describe your experience using ChatGPT during the tutorial tasks. Did you find it enjoyable and helpful?
● How did using ChatGPT compare to using search engines for completing the tasks?
● To what extent did you rely on ChatGPT to generate answers for you or to enhance your own understanding?
During the experiment, the instructors ensured that students were following the guidelines provided to them. In all three case studies, the instructors acted as facilitators, providing guidance to the students on how to appropriately use ChatGPT as an assistive tool. Prior to the experiment, students were introduced to ChatGPT and the benefits of using GenAI tools to aid comprehension of materials and to exemplify concepts. They were also briefed on ChatGPT's capacity to foster idea generation and were discouraged from using the tool merely as an answer generator. The instructors informed the students about the purpose of the experiment and made a substantial effort to educate them about the importance of using AI as a self-promoting and self-directed learning tool rather than as a tool to generate answers. During the experiment, the instructors took notes about the students' interactions and engagement with ChatGPT. The ICT students' responses to the reflection prompts were gathered and analysed qualitatively to understand their perceptions of ChatGPT's helpfulness and enjoyability, and their reliance on the tool. In addition, students' answers to the tutorial tasks were assessed using a quick rubric.
The methodology used in this study, as illustrated in Figure 1, employed a mixed-method approach incorporating both qualitative and quantitative data-gathering techniques. The analysis of the students' performances in Tasks 1 and 2 involved the examination of their responses to the reflective exercise, which provided valuable qualitative data. Additionally, the instructors' notes were used to gather qualitative insights into the students' performances. To complement the qualitative analysis, quantitative data were obtained through the rubric scoring process. At the completion of each task, the instructors assessed the students' work based on user flow and hierarchy criteria and assigned scores out of 5 accordingly. These rubric scores formed the foundation of the quantitative data analysis. By combining both qualitative and quantitative data-gathering methods, this methodology enabled a comprehensive evaluation of students' performance in the study. The qualitative analysis provided rich insights into student experiences and perspectives, while the quantitative analysis offered a more objective performance measurement based on predefined criteria.
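The paper does not publish its scoring pipeline; the following minimal Python sketch shows one plausible way the per-criterion rubric averages could be computed. The criterion names, the five-student sample, and the individual scores are assumptions for illustration only, chosen so the output happens to match Case Study 1's Task 1 averages (1.6/5 and 2.4/5).

```python
from statistics import mean

# Hypothetical per-student rubric scores (out of 5) for one task.
# The two criteria mirror the paper's rubric dimensions; the values are
# illustrative, picked so the averages match Case Study 1, Task 1.
task1_scores = {
    "functionality_user_flow": [2, 1, 2, 1, 2],
    "content_info_hierarchy": [3, 2, 2, 3, 2],
}

def average_scores(scores: dict[str, list[int]]) -> dict[str, float]:
    """Average each rubric criterion across students."""
    return {criterion: round(mean(vals), 2) for criterion, vals in scores.items()}

print(average_scores(task1_scores))
# {'functionality_user_flow': 1.6, 'content_info_hierarchy': 2.4}
```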
The responses from the ICT students' reflective exercises were analysed using thematic analysis, which led to the categorisation of their perceptions into four categories: helpfulness, enjoyability, perceived benefits and drawbacks, and their reliance on the tool. Rubric scores from each case study also provided an objective and quantitative measure of students' performances in the tutorial tasks when using ChatGPT compared to traditional search engines. These scores revealed differences in the quality of work produced with and without ChatGPT and helped to assess its effectiveness as a digital tutor in the classroom.
ChatGPT Helpfulness: Most ICT students across the three case studies found ChatGPT helpful in quickly generating relevant information and providing ideas. They also appreciated the user-friendly interface and the speed at which they received responses from the tool.
Enjoyment and Engagement: ICT students in Case Studies 1 and 2 generally reported enjoying the experience of using ChatGPT, citing the novelty and interactive nature of the tool. However, students in Case Study 3 found ChatGPT to be less enjoyable and engaging, possibly due to their advanced technology skills, compared with the UG students involved in the first two case studies.
Reliance on ChatGPT: ICT students from Case Studies 1 and 2 reported moderate reliance on ChatGPT, appreciating its ability to provide quick answers but still relying on search engines for additional information. In contrast, students in Case Study 3 reported lower reliance on ChatGPT. There were also significant variances between the experiences reported by first-time users and those reported by ChatGPT-experienced users.
Perceived Benefits and Drawbacks: ICT students across the three case studies identified several benefits of using ChatGPT, including time-saving, increased efficiency, and better organisation of information. Some drawbacks mentioned were the occasional provision of irrelevant or incorrect information, over-reliance on the tool, and concerns about the potential impact on critical thinking and problem-solving skills.
Rubric Scores: The rubric scores for the tutorial tasks were analysed descriptively to assess the differences in the quality of work produced with and without ChatGPT. Regarding functionality and user flow, across all three case studies, ICT students who used ChatGPT generally produced better quality work than those who relied solely on search engines. This suggests that ChatGPT might have supported students in understanding and applying HCI concepts more effectively. In terms of content and information hierarchy, ICT students who used ChatGPT demonstrated slightly better content and information hierarchy in their wireframes than those who used search engines alone. While the improvement in information hierarchy was not as pronounced as the improvement in user flow, it still indicated that ChatGPT might have provided more structured and relevant information and helped students organise their ideas and make informed decisions during the design process.
The results from the analysis of the reflection exercises and rubric scores suggest that ChatGPT has the potential to be a helpful and enjoyable tool for ICT students in HCI units. However, students' reliance on the tool and perceived benefits and drawbacks varied across the case studies, emphasising the importance of considering individual differences and contextual factors when evaluating the effectiveness of ChatGPT in educational settings. The findings of this experimental research also parallel those of Sandu and Gide [16]. A descriptive summary of the experimental research results is presented in Table 1. Table 2 presents a summary comparison of UG and PG ICT students' performances across the three case studies, based on the instructors' notes, the impact on learning outcomes, students' reflections on their likelihood of future AI use, and the perceived benefits and drawbacks. In Case Study 3 (PG), the percentage of first-time AI users is lower than in Case Studies 1 and 2. Interestingly, the PG ICT students demonstrated a lower reliance on AI, suggesting that their experience or level of education may have influenced their approach to using AI tools like ChatGPT. The lower reliance on AI among PG ICT students could also indicate that they might be more focused on learning the subject matter rather than just generating answers.
Table 1. Descriptive summary of the experimental results across the three case studies.

| Parameter/Metric | Case Study 1: UG | Case Study 2: UG | Case Study 3: PG |
|---|---|---|---|
| Helpfulness (reflections) | Generally found helpful | Generally found helpful | Somewhat helpful |
| Enjoyability (reflections) | Mostly enjoyable experience | Moderate enjoyment | Moderate enjoyment |
| Reliance on ChatGPT (observation + reflection) | Moderate | Moderate | Low |
| Benefits (observation + reflection) | Faster and more direct answers | Faster and more direct answers | Comparable to UG students |
| Functionality and User Flow (rubric) | Better performance | Better performance | Comparable to UG students |
| Content and Information Hierarchy (rubric) | Better performance | Better performance | Comparable to UG students |
Table 2. Comparison of UG and PG ICT students' performances.

| Parameter/Metric | UG Students (Case Studies 1 & 2) | PG Students (Case Study 3) |
|---|---|---|
| Impact on Learning Outcomes | Generally positive | Mixed results |
| Likelihood of Future AI Use | High | Moderate |
| Perceived benefits | Timesaving, focused answers | Timesaving, convenience |
| Perceived drawbacks | Reliability, accuracy concerns | Reliability, accuracy concerns |
Table 3 shows the relationship between reliance on AI, the level of study (UG/PG), and the proportion of first-time AI users in each case study. It helps to show how reliance on AI varies with the level of study and the percentage of first-time AI users. The instructors collected the first-time user data during the experiment by noting the students who did not have a ChatGPT account and were using the tool for the first time.
Table 3. Reliance on AI by level of study and proportion of first-time AI users.

| Case Study | Reliance on AI | First-time AI Users (%) |
|---|---|---|
| Case Study 1 (UG) | Moderate | 7 out of 15 (46.7%) |
| Case Study 2 (UG) | Moderate | 12 out of 18 (66.7%) |
| Case Study 3 (PG) | Low | 8 out of 19 (42.1%) |
Table 4 provides a summary of the students' perceptions regarding the helpfulness of ChatGPT and how enjoyable they found using the tool. Helpfulness is categorised into four levels (Very Helpful, Helpful, Somewhat Helpful, and Not Helpful) and enjoyability into three (Very Enjoyable, Enjoyable, and Not Enjoyable). The data included in the table were derived from the reflection exercise.
Table 4. Students' perceptions of ChatGPT's helpfulness and enjoyability.

| Metrics | Case Study 1 (UG, 15) | Case Study 2 (UG, 18) | Case Study 3 (PG, 19) |
|---|---|---|---|
| Helpfulness | | | |
| Very Helpful | 8/15 (53.3%) | 5/18 (27.8%) | 2/19 (10.5%) |
| Helpful | 4/15 (26.7%) | 7/18 (38.9%) | 5/19 (26.3%) |
| Somewhat Helpful | 3/15 (20.0%) | 6/18 (33.3%) | 10/19 (52.6%) |
| Not Helpful | 0/15 (0.0%) | 0/18 (0.0%) | 2/19 (10.5%) |
| Enjoyability | | | |
| Very Enjoyable | 10/15 (66.7%) | 8/18 (44.4%) | 4/19 (21.1%) |
| Enjoyable | 3/15 (20.0%) | 7/18 (38.9%) | 6/19 (31.6%) |
| Not Enjoyable | 2/15 (13.3%) | 3/18 (16.7%) | 9/19 (47.4%) |
| First-time AI Users | 7/15 (46.7%) | 12/18 (66.7%) | 8/19 (42.1%) |
| Likelihood of Use | | | |
| Likely | 6/15 (40.0%) | 8/18 (44.4%) | 5/19 (26.3%) |
| Neutral | 4/15 (26.7%) | 5/18 (27.8%) | 7/19 (36.8%) |
| Unlikely | 5/15 (33.3%) | 5/18 (27.8%) | 7/19 (36.8%) |
Descriptive statistics were employed to summarise and compare the answers provided by the students in the reflection exercise. The ratings were quantified and categorised into themes to highlight the variations in their experiences with ChatGPT across the case studies. This statistical approach allowed for a systematic examination of the data, providing valuable insights into the students' perceptions and enabling meaningful comparisons between the different aspects evaluated. The analysis of the data in Table 4 revealed several key findings. Across all case studies, most students rated ChatGPT as at least somewhat helpful in assisting them with the tutorial tasks. Specifically, in Case Study 1, 53.3% of the students rated it as Very Helpful, while 26.7% found it Helpful. In Case Study 2, 27.8% of the students found it Very Helpful, and 38.9% found it Helpful. In Case Study 3, 10.5% rated it as Very Helpful, and 26.3% found it Helpful.
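As an illustration of this descriptive tallying, the short Python sketch below reproduces the Case Study 1 helpfulness column of Table 4. The coded response list is an assumption, constructed to match the reported counts rather than taken from the study's data.

```python
from collections import Counter

# Hypothetical coded helpfulness ratings for Case Study 1 (n = 15),
# constructed to match the counts reported in Table 4.
responses = ["Very Helpful"] * 8 + ["Helpful"] * 4 + ["Somewhat Helpful"] * 3

counts = Counter(responses)
n = len(responses)
for rating in ("Very Helpful", "Helpful", "Somewhat Helpful", "Not Helpful"):
    k = counts.get(rating, 0)
    print(f"{rating}: {k}/{n} ({k / n:.1%})")
# Very Helpful: 8/15 (53.3%), Helpful: 4/15 (26.7%), ...
```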
Furthermore, when considering the reliance on AI, students in Case Study 1 and Case Study 2 exhibited a moderate level of reliance on ChatGPT, mainly using it to generate answers without extensive self-learning. Conversely, students in Case Study 3 showed a lower reliance on ChatGPT and focused more on utilising it to enhance their understanding and learn independently. Regarding the likelihood of using ChatGPT or similar AI tools as a tutor in their studies, the results varied across the case studies: students in Case Studies 1 and 2 expressed a moderate likelihood of utilising AI tools, while a lower likelihood was observed in Case Study 3.
To further understand the performance improvement with the use of ChatGPT, we compared the metrics between Task 1 (ChatGPT prohibited) and Task 2 (ChatGPT allowed) for each case study. Table 5 presents the comparison of performance metrics in Task 1 and Task 2 for Case Study 1 (UG, 15 students). Similarly, Table 6 presents the comparison of performance metrics in Task 1 and Task 2 for Case Study 2 (UG, 18 students), and Table 7 for Case Study 3 (PG, 19 students).
Table 5. Case Study 1 (UG, 15 students): performance metrics in Task 1 and Task 2.

| Performance Metric | Task 1 (ChatGPT Prohibited) | Task 2 (ChatGPT Allowed) |
|---|---|---|
| Functionality and User Flow | | |
| Incomplete or confusing user flow, missing or poorly placed key elements | 8/15 (53.3%) | 0/15 (0.0%) |
| Basic user flow, but some elements or interactions could be improved | 7/15 (46.7%) | 5/15 (33.3%) |
| Intuitive user flow, all key elements and interactions are well-planned and easy to understand | 0/15 (0.0%) | 10/15 (66.7%) |
| Average Score | 1.6/5 | 3.0/5 |
| Average Percentage Change | 87.5% | |
| Content and Information Hierarchy | | |
| Missing important aspects such as navigation, layout, important content | 6/15 (40.0%) | 0/15 (0.0%) |
| Some important content was missing | 6/15 (40.0%) | 6/15 (40.0%) |
| Well-developed wireframe | 3/15 (20.0%) | 9/15 (60.0%) |
| Average Score | 2.4/5 | 3.0/5 |
| Average Percentage Change | 25.0% | |
| Overall Average Percentage Change | 56.3% | |
Table 6. Case Study 2 (UG, 18 students): performance metrics in Task 1 and Task 2.

| Performance Metric | Task 1 (ChatGPT Prohibited) | Task 2 (ChatGPT Allowed) |
|---|---|---|
| Functionality and User Flow | | |
| Incomplete or confusing user flow, missing or poorly placed key elements | 6/18 (33.3%) | 0/18 (0.0%) |
| Basic user flow, but some elements or interactions could be improved | 7/18 (38.9%) | 5/18 (27.8%) |
| Intuitive user flow, all key elements and interactions are well-planned and easy to understand | 0/18 (0.0%) | 10/18 (55.6%) |
| Average Score | 1.4/5 | 3.0/5 |
| Average Percentage Change | 85.6% | |
| Content and Information Hierarchy | | |
| Missing important aspects such as navigation, layout, important content | 6/18 (33.3%) | 0/18 (0.0%) |
| Some important content was missing | 7/18 (38.9%) | 6/18 (33.3%) |
| Well-developed wireframe | 0/18 (0.0%) | 10/18 (55.6%) |
| Average Score | 1.8/5 | 3.0/5 |
| Average Percentage Change | 28.0% | |
| Overall Average Percentage Change | 56.8% | |
Table 7. Case Study 3 (PG, 19 students): performance metrics in Task 1 and Task 2.

| Performance Metric | Task 1 (ChatGPT Prohibited) | Task 2 (ChatGPT Allowed) |
|---|---|---|
| Functionality and User Flow | | |
| Incomplete or confusing user flow, missing or poorly placed key elements | 8/19 (42.1%) | 0/19 (0.0%) |
| Basic user flow, but some elements or interactions could be improved | 6/19 (31.6%) | 7/19 (36.8%) |
| Intuitive user flow, all key elements and interactions are well-planned and easy to understand | 5/19 (26.3%) | 12/19 (63.2%) |
| Average Score | 1.8/5 | 3.2/5 |
| Average Percentage Change | 77.8% | |
| Content and Information Hierarchy | | |
| Missing important aspects such as navigation, layout, important content | 7/19 (36.8%) | 0/19 (0.0%) |
| Some important content was missing | 6/19 (31.6%) | 4/19 (21.1%) |
| Well-developed wireframe | 6/19 (31.6%) | 15/19 (78.9%) |
| Average Score | 2.35/5 | 3.15/5 |
| Average Percentage Change | 34.0% | |
| Overall Average Percentage Change | 55.9% | |
Additionally, Table 8 provides a comprehensive comparison of the performance metrics across all three case studies. Comparing the results presented in these tables, it is evident that ChatGPT's introduction in Task 2 positively impacted the performance metrics across all case studies. The average scores for functionality and user flow, as well as content and information hierarchy, increased in Task 2 compared to Task 1 in each case study. The average percentage change values indicate the extent of improvement achieved by the students. Other factors could also have influenced the results and warrant further research. For instance, across all three case studies, more than 40% of students were first-time users of ChatGPT with no prior exposure to the tool. Because the quality of ChatGPT's output depends heavily on the quality of the prompt, some of these students may revise their perceptions once they gain more experience with GenAI tools like ChatGPT. Consequently, further research is needed to validate this proposition.
Table 8. Comparison of performance metrics across all three case studies.

| Performance Metric | Case Study 1 | Case Study 2 | Case Study 3 |
|---|---|---|---|
| Functionality and User Flow | | | |
| Average Score (Task 1, ChatGPT Prohibited) | 1.6/5 | 1.4/5 | 1.8/5 |
| Average Score (Task 2, ChatGPT Allowed) | 3.0/5 | 3.0/5 | 3.2/5 |
| Average Percentage Change | 87.5% | 85.6% | 77.8% |
| Content and Information Hierarchy | | | |
| Average Score (Task 1, ChatGPT Prohibited) | 2.4/5 | 1.8/5 | 2.35/5 |
| Average Score (Task 2, ChatGPT Allowed) | 3.0/5 | 3.0/5 | 3.15/5 |
| Average Percentage Change | 25.0% | 28.0% | 34.0% |
| Overall Average Percentage Change | 56.3% | 56.8% | 55.9% |
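The paper does not state the exact formula behind the "Average Percentage Change" rows. A minimal Python sketch of the natural reading, the relative change between the Task 1 and Task 2 group averages, is shown below; it reproduces the Case Study 1 and Case Study 3 figures exactly, while the Case Study 2 values differ, suggesting those may have been averaged over individual students' changes instead. The function name and layout are assumptions for illustration.

```python
def pct_change(before: float, after: float) -> float:
    """Relative improvement from Task 1 to Task 2, in percent."""
    return (after - before) / before * 100

# Case Study 1 group averages (scores out of 5) from Table 8.
flow = pct_change(1.6, 3.0)       # 87.5
hierarchy = pct_change(2.4, 3.0)  # 25.0
overall = (flow + hierarchy) / 2  # 56.25, which Table 8 reports as 56.3%

print(f"flow: {flow:.1f}%, hierarchy: {hierarchy:.1f}%, overall: {overall:.2f}%")
```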
This research found that ICT students at both UG and PG educational levels, in our experiment, generally perceived ChatGPT as a helpful and enjoyable learning tool, with most students indicating a willingness to use AI tools such as ChatGPT in their future studies. The results also suggested that students using ChatGPT performed better in terms of functionality and user flow, as well as content and information hierarchy, compared to those using search engines.
In addition to the primary findings of this research study, it was observed that ICT students' level of reliance on ChatGPT for generating answers was moderate, indicating a balanced approach to using the AI tool for learning. Furthermore, first-time users of AI tools like ChatGPT represented a significant portion of the participants across all three case studies. Further research examining whether their perceptions change once they have gained more experience using GenAI tools would be valuable.
An interesting observation from the study was that PG ICT students, compared to UG ICT students, found ChatGPT to be somewhat less helpful and enjoyable. This might be due to a higher level of prior knowledge or technical expertise among PG students, leading to a perceived reduced need for AI assistance. It could also be attributed to differences in expectations and learning goals between UG and PG students. Another noteworthy finding was the variation in performance and satisfaction with ChatGPT across student cohorts. UG ICT students from Case Study 1 had slightly higher mean scores for helpfulness and enjoyment compared to the UG ICT students from Case Study 2. Acknowledging that the sample size was relatively small and that any generalisation would be weakly supported, it might be appropriate to suggest that learning environment factors, such as differences in study mode or students' educational and cultural backgrounds, may have influenced some of the outcomes of the study.
Notably, across all three case studies, it is evident that students performed significantly better when using ChatGPT in Task 2, particularly in terms of functionality and user flow. This improvement can be attributed to the nature of the task, which required students to generate interaction scenarios and identify personas and key features for their application designs. This aspect of the tutorial involved creativity and critical thinking, and ChatGPT proved to be a valuable tool in assisting students in generating user flows and ideas that align with HCI design principles.
In contrast, for the second criterion, which focuses on content and hierarchy, the improvement facilitated by ChatGPT was not as pronounced as in the first exercise. This can be attributed to the fact that ChatGPT is a text-generating model, while the task required visualisation skills for designing paper wireframes. Students may have faced challenges in accurately translating the output from ChatGPT into their wireframes or implementing the suggested changes effectively. Additionally, other factors such as a lack of previous experience with wireframe design could have played a role in the outcomes.
This claim is further supported by the results indicating that more experienced users, such as PG students, performed better than UG students in both Task 1 and Task 2. This observation suggests that prior knowledge and experience in wireframe design could positively influence students' abilities to leverage ChatGPT effectively.
Overall, in the context of this study, the findings highlight that ChatGPT can be a valuable tool in enhancing critical thinking and problem-solving skills, particularly when utilised to support tasks that involve creativity and user-centric design principles.
The findings of this experimental lab study have several implications for the use of AI tools such as ChatGPT in ICT educational settings, as shown in Figure 2. The positive perceptions of ChatGPT's helpfulness and enjoyability reported by the ICT students in our experiment suggest that incorporating GenAI tools in the classroom could enhance students' learning experience. However, educators need to provide tailored guidance and equip students with guidelines on how to use these tools to support their learning journey in and outside of the classroom. Integrating ChatGPT into the lesson or the curriculum may enable students to learn complex concepts efficiently and better develop problem-solving skills.
Secondly, the experimental study results showed that ICT students used ChatGPT for learning purposes rather than just depending on it for answers. This indicates that GenAI tools can improve students' learning experiences when they are applied appropriately in a suitable and supervised setting established by the instructors. One way to achieve this is perhaps by incorporating GenAI tools into the learning activities as they currently do with other tools such as interactive content.
The extended findings from this experimental study are illustrated in Figure 2. They reveal further insights into the potential use of GenAI tools like ChatGPT in ICT education and the implications for relevant stakeholders, such as teachers, ICT students, and policymakers. The extended findings, while not entirely or directly resulting from the analysis of the students' data, were formulated based on the observations and experiences of the instructors and researchers involved in this study. They are summarised into the following groups:
Pedagogical Applications: The findings of this experimental study support the integration of AI tools like ChatGPT into ICT teaching and learning practices. Educators can leverage ChatGPT to complement traditional teaching methods, design engaging learning experiences, and facilitate the development of higher-order critical and innovative thinking skills. For instance, teachers can use ChatGPT to design interactive assignments, encourage collaborative and inclusive problem-solving, as well as support personalised learning.
ICT Student Autonomy and Metacognition: This research shows that ChatGPT can foster ICT student autonomy and metacognition as it has the ability to promote moderate reliance on the tool for generating reasonable answers. By using AI tools to supplement their learning, ICT students can become more self-directed and reflective, enhancing their ability to evaluate their understanding, identify gaps in knowledge, and apply appropriate learning strategies.
Digital Literacy and AI Ethics: These experimental research findings suggest that the increased adoption of AI tools in ICT education underscores the importance of developing digital literacy and fostering ethical awareness among students. As part of their educational experience, ICT students should be exposed to the potential risks, limitations, and ethical considerations associated with AI technologies. This includes understanding the potential for inaccuracies, bias, privacy concerns, and responsible use of AI tools like ChatGPT.
Institutional Support and Training: The instructors' experiences and experimental findings of this study suggest that there is a need for adequate institutional support and training for effective AI integration. Educational institutions should invest in the professional development of teachers, provide the necessary technological infrastructure, and establish guidelines and best practices for AI adoption in teaching and learning.
Customisation and Contextualisation: The instructors' experiences and findings from this experimental study indicate that ICT students' experiences with ChatGPT may be influenced by factors such as their educational level (UG and PG) and prior knowledge. To maximise the benefits of AI tools in ICT education, it is essential to customise and contextualise their application based on the specific needs, goals, and contexts of individual learners, units, and courses. This could involve tailoring AI tools to accommodate diverse learning styles, addressing specific learning objectives, or integrating them into relevant ICT disciplinary contexts, all of which call for further research.
Collaboration and Community Building: The findings highlight the potential role of AI tools like ChatGPT in facilitating collaboration and community building among ICT students. Accordingly, the researchers of this experimental study suggest that by promoting cooperative learning, AI tools can help ICT students develop interpersonal skills, foster a sense of belonging, and enhance their capacity to work effectively in diverse ICT teams.
Despite the promising findings, this experimental study has several limitations that call for further research. Firstly, the sample size was relatively small, which limits the generalisability of the findings. Secondly, other factors that were not controlled in the study, such as individual differences in learning styles, motivation levels, and prior knowledge of the topic, could also influence the outcomes and confound the results. Thirdly, the study relied on self-reported reflective feedback from UG and PG ICT students to assess their enjoyment, ease of learning, and overall satisfaction with ChatGPT or search engines. These subjective measures may be subject to response bias, social desirability bias, or other factors that could influence students' responses.
Furthermore, the study was conducted in a supervised setting, which might have influenced ICT students' behaviour and motivation to use ChatGPT for learning rather than cheating. Future research should investigate how students interact with AI tools like ChatGPT in unsupervised environments and explore strategies to ensure that such tools are used responsibly and effectively to enhance learning rather than promote academic dishonesty. A significant proportion of the participants were found to be first-time users of ChatGPT. This may have influenced their experiences and perceptions of the technology as well.
Consequently, future research should account for these variables and use more rigorous methods to isolate the effects of ChatGPT on students' learning. This would help in providing a more comprehensive understanding of the potential benefits and challenges associated with using AI tools like ChatGPT in educational settings.
This experimental lab study aimed to determine whether ChatGPT could enhance UG and PG ICT students' learning outcomes and improve their efficiency. Three case studies were conducted in this research. The first two case studies focused on UG ICT students, whereas the third case study focused on PG ICT students. A mixed-method approach was employed, combining qualitative and quantitative data gathering and analysis methods. Qualitative data were obtained through the analysis of student responses from the reflective exercise and the instructors' notes. These qualitative insights provided a deep understanding of students' experiences and perspectives. In addition, quantitative data were gathered through the rubric scoring process, which objectively measured students' performances based on predefined criteria. The rubric scores served as a quantitative measure of student achievement in terms of functionality and user flow, as well as content and information hierarchy. The research outcomes highlight that ICT students across both educational levels (UG and PG) generally perceived ChatGPT as a helpful and enjoyable learning tool, with most students indicating a willingness to use AI tools such as ChatGPT in their future studies. The results also suggest that students using ChatGPT performed better in terms of functionality and user flow, as well as content and information hierarchy, compared to those using search engines. This experimental lab study also provides several implications and suggestions for the use of ChatGPT in educational settings, including pedagogical applications and digital literacy.
The authors declare that they have not used Artificial Intelligence (AI) tools in the writing of this article. However, ChatGPT 4 was employed for specific formatting tasks, such as converting tables into LaTeX and enhancing the readability of the tables. In addition, ChatGPT 4 was utilised for copyediting and proofreading certain sections, including the abstract and the conclusion. The authors took precautions to ensure that ChatGPT 4 did not introduce any text that was not authored by the original writers. Throughout the process, the authors reviewed and revised the suggestions provided by ChatGPT 4 to maintain the accuracy and integrity of the final manuscript.
We would like to thank the reviewers for their constructive feedback.
Dr. Ergun Gide is the Guest Editor of special issue "The Impact of Generative Artificial Intelligence (GAI) Tools on Higher Education: ChatGPT and others" for STEM Education. Dr. Ergun Gide was not involved in the editorial review and the decision to publish this article.
[1] Adiguzel, T., Kaya, M.H. and Cansu, F.K., Revolutionizing education with AI: Exploring the transformative potential of ChatGPT. Contemporary Educational Technology, 2023, 15(3): 429. https://doi.org/10.30935/cedtech/13152
[2] Atlas, S., ChatGPT for higher education and professional development: A guide to conversational AI. 2023.
[3] Eysenbach, G., The role of ChatGPT, generative language models, and artificial intelligence in medical education: A conversation with ChatGPT and a call for papers. JMIR Medical Education, 2023, 9: e46885. https://doi.org/10.2196/46885
[4] Hassani, H. and Silva, E.S., The role of ChatGPT in data science: How AI-assisted conversational interfaces are revolutionizing the field. Big Data and Cognitive Computing, 2023, 7(2): 62. https://doi.org/10.3390/bdcc7020062
[5] Hong, W.C.H., The impact of ChatGPT on foreign language teaching and learning: Opportunities in education and research. Journal of Educational Technology and Innovation, 2023, 5.
[6] Hostetter, A., Call, N., Frazier, G., James, T., Linnertz, C., Nestle, E., et al., Student and faculty perceptions of artificial intelligence in student writing. 2023. https://doi.org/10.31234/osf.io/7dnk9
[7] Jeblick, K., Schachtner, B., Dexl, J., Mittermeier, A., Stüber, A.T., Topalis, J., et al., ChatGPT makes medicine easy to swallow: An exploratory case study on simplified radiology reports. 2022.
[8] Jimenez, K., ChatGPT in the classroom: Here's what teachers and students are saying. 2023.
[9] Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., et al., ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 2023, 103: 102274. https://doi.org/10.1016/j.lindif.2023.102274
[10] Kutela, B., Msechu, K., Das, S. and Kidando, E., ChatGPT's scientific writings: A case study on traffic safety. 2023. https://doi.org/10.2139/ssrn.4329120
[11] Lund, B.D., Wang, T., Mannuru, N.R., Nie, B., Shimray, S. and Wang, Z., ChatGPT and a new academic reality: Artificial intelligence-written research papers and the ethics of the large language models in scholarly publishing. Journal of the Association for Information Science and Technology, 2023. https://doi.org/10.2139/ssrn.4389887
[12] Mellon, J., Bailey, J., Scott, R., Breckwoldt, J. and Miori, M., Does GPT-3 know what the most important issue is? Using large language models to code open-text social survey responses at scale. 2022. https://doi.org/10.2139/ssrn.4310154
[13] Pardos, Z.A. and Bhandari, S., Learning gain differences between ChatGPT and human tutor generated algebra hints. 2023.
[14] Rudolph, J., Tan, S. and Tan, S., ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching, 2023, 6(1). https://doi.org/10.37074/jalt.2023.6.1.9
[15] Sallam, M., ChatGPT utility in healthcare education, research, and practice: Systematic review on the promising perspectives and valid concerns. Healthcare, 2023, 11: 887. https://doi.org/10.3390/healthcare11060887
[16] Sandu, N. and Gide, E., Adoption of AI-chatbots to enhance student learning experience in higher education in India. 2019 18th International Conference on Information Technology Based Higher Education and Training (ITHET), 2019, 1–5.
[17] Study.com, Productive teaching tool or innovative cheating.
[18] Sun, G.H. and Hoelscher, S.H., The ChatGPT storm and what faculty can do. Nurse Educator, 2023, 48(3): 119–124.
[19] Tlili, A., Shehata, B., Adarkwah, M.A., Bozkurt, A., Hickey, D.T., Huang, R., et al., What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learning Environments, 2023, 10: 15. https://doi.org/10.1186/s40561-023-00237-x
35. | Marina Belkina, Scott Daniel, Sasha Nikolic, Rezwanul Haque, Sarah Lyden, Peter Neal, Sarah Grundy, Ghulam M. Hassan, Implementing Generative AI (GenAI) in Higher Education: A Systematic Review of Case Studies, 2025, 2666920X, 100407, 10.1016/j.caeai.2025.100407 | |
36. | Manuela Farinosi, Claudio Melchior, ‘I Use ChatGPT, but Should I?’ A Multi‐Method Analysis of Students' Practices and Attitudes Towards AI in Higher Education, 2025, 60, 0141-8211, 10.1111/ejed.70094 |
Summary of findings across the three case studies.
Parameter/Metric | Case Study 1: UG | Case Study 2: UG | Case Study 3: PG
Helpfulness (Reflections) | Generally found helpful | Generally found helpful | Somewhat helpful
Enjoyability (Reflections) | Mostly enjoyable experience | Moderate enjoyment | Moderate enjoyment
Reliance on ChatGPT (Observation + reflection) | Moderate | Moderate | Low
Benefits (Observation + reflection) | Faster and more direct answers | Faster and more direct answers | Comparable to UG students
Functionality and User Flow (Rubric) | Better performance | Better performance | Comparable to UG students
Content and Information Hierarchy (Rubric) | Better performance | Better performance | Comparable to UG students
Comparison of undergraduate (UG) and postgraduate (PG) cohorts.
Parameter/Metric | UG Students (Case Studies 1 & 2) | PG Students (Case Study 3)
Impact on Learning Outcomes | Generally positive | Mixed results
Likelihood of Future AI Use | High | Moderate
Perceived Benefits | Timesaving, focused answers | Timesaving, convenience
Perceived Drawbacks | Reliability and accuracy concerns | Reliability and accuracy concerns
Reliance on AI and first-time AI users by case study.
Case Study | Reliance on AI | First-time AI Users (%)
Case Study 1 (UG) | Moderate | 7 out of 15 (46.7%)
Case Study 2 (UG) | Moderate | 12 out of 18 (66.7%)
Case Study 3 (PG) | Low | 8 out of 19 (42.1%)
Reflection survey results by case study (counts and percentages of each cohort).
Metrics | Case Study 1 (UG, n = 15) | Case Study 2 (UG, n = 18) | Case Study 3 (PG, n = 19)
Helpfulness | |||
Very Helpful | 8/15 (53.3%) | 5/18 (27.8%) | 2/19 (10.5%) |
Helpful | 4/15 (26.7%) | 7/18 (38.9%) | 5/19 (26.3%) |
Somewhat Helpful | 3/15 (20.0%) | 6/18 (33.3%) | 10/19 (52.6%) |
Not Helpful | 0/15 (0.0%) | 0/18 (0.0%) | 2/19 (10.5%) |
Enjoyability | |||
Very Enjoyable | 10/15 (66.7%) | 8/18 (44.4%) | 4/19 (21.1%) |
Enjoyable | 3/15 (20.0%) | 7/18 (38.9%) | 6/19 (31.6%) |
Not Enjoyable | 2/15 (13.3%) | 3/18 (16.7%) | 9/19 (47.4%) |
First-time AI Users | 7/15 (46.7%) | 12/18 (66.7%) | 8/19 (42.1%)
Likelihood of Use | |||
Likely | 6/15 (40.0%) | 8/18 (44.4%) | 5/19 (26.3%) |
Neutral | 4/15 (26.7%) | 5/18 (27.8%) | 7/19 (36.8%) |
Unlikely | 5/15 (33.3%) | 5/18 (27.8%) | 7/19 (36.8%) |
Case Study 1 (UG, n = 15): rubric-based performance with and without ChatGPT. A worked check of the percentage-change figures follows the table.
Functionality and User Flow
Performance Metric | Task 1 (ChatGPT prohibited) | Task 2 (ChatGPT allowed)
Incomplete or confusing user flow, missing or poorly placed key elements | 8/15 (53.3%) | 0/15 (0.0%)
Basic user flow, but some elements or interactions could be improved | 7/15 (46.7%) | 5/15 (33.3%)
Intuitive user flow, all key elements and interactions are well-planned and easy to understand | 0/15 (0.0%) | 10/15 (66.7%)
Average Score | 1.6/5 | 3.0/5
Average Percentage Change | 87.5%
Content and Information Hierarchy
Missing important aspects such as navigation, layout, or important content | 6/15 (40.0%) | 0/15 (0.0%)
Some important content was missing | 6/15 (40.0%) | 6/15 (40.0%)
Well-developed wireframe | 3/15 (20.0%) | 9/15 (60.0%)
Average Score | 2.4/5 | 3.0/5
Average Percentage Change | 25.0%
Overall Average Percentage Change: 56.3%
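The percentage-change figures in this and the following rubric tables are consistent with a simple relative change computed over each cohort's average scores; this reading is our inference, as the computation is not spelled out alongside the tables. Assuming that formula:

$$\Delta = \frac{\bar{S}_{\text{Task 2}} - \bar{S}_{\text{Task 1}}}{\bar{S}_{\text{Task 1}}} \times 100\%$$

For Case Study 1, functionality and user flow gives $\Delta = (3.0 - 1.6)/1.6 \times 100\% = 87.5\%$, content and information hierarchy gives $\Delta = (3.0 - 2.4)/2.4 \times 100\% = 25.0\%$, and the overall figure is their mean, $(87.5\% + 25.0\%)/2 = 56.25\% \approx 56.3\%$.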
Case Study 2 (UG, n = 18): rubric-based performance with and without ChatGPT.
Functionality and User Flow
Performance Metric | Task 1 (ChatGPT prohibited) | Task 2 (ChatGPT allowed)
Incomplete or confusing user flow, missing or poorly placed key elements | 6/18 (33.3%) | 0/18 (0.0%)
Basic user flow, but some elements or interactions could be improved | 7/18 (38.9%) | 5/18 (27.8%)
Intuitive user flow, all key elements and interactions are well-planned and easy to understand | 0/18 (0.0%) | 10/18 (55.6%)
Average Score | 1.4/5 | 3.0/5
Average Percentage Change | 85.6%
Content and Information Hierarchy
Missing important aspects such as navigation, layout, or important content | 6/18 (33.3%) | 0/18 (0.0%)
Some important content was missing | 7/18 (38.9%) | 6/18 (33.3%)
Well-developed wireframe | 0/18 (0.0%) | 10/18 (55.6%)
Average Score | 1.8/5 | 3.0/5
Average Percentage Change | 28.0%
Overall Average Percentage Change: 56.8%
Case Study 3 (PG, n = 19): rubric-based performance with and without ChatGPT.
Functionality and User Flow
Performance Metric | Task 1 (ChatGPT prohibited) | Task 2 (ChatGPT allowed)
Incomplete or confusing user flow, missing or poorly placed key elements | 8/19 (42.1%) | 0/19 (0.0%)
Basic user flow, but some elements or interactions could be improved | 6/19 (31.6%) | 7/19 (36.8%)
Intuitive user flow, all key elements and interactions are well-planned and easy to understand | 5/19 (26.3%) | 12/19 (63.2%)
Average Score | 1.8/5 | 3.2/5
Average Percentage Change | 77.8%
Content and Information Hierarchy
Missing important aspects such as navigation, layout, or important content | 7/19 (36.8%) | 0/19 (0.0%)
Some important content was missing | 6/19 (31.6%) | 4/19 (21.1%)
Well-developed wireframe | 6/19 (31.6%) | 15/19 (78.9%)
Average Score | 2.35/5 | 3.15/5
Average Percentage Change | 34.0%
Overall Average Percentage Change: 55.9%
Summary of rubric scores and percentage changes across the three case studies.
Functionality and User Flow
Performance Metric | Case Study 1 | Case Study 2 | Case Study 3
Average Score (Task 1, ChatGPT prohibited) | 1.6/5 | 1.4/5 | 1.8/5
Average Score (Task 2, ChatGPT allowed) | 3.0/5 | 3.0/5 | 3.2/5
Average Percentage Change | 87.5% | 85.6% | 77.8%
Content and Information Hierarchy
Performance Metric | Case Study 1 | Case Study 2 | Case Study 3
Average Score (Task 1, ChatGPT prohibited) | 2.4/5 | 1.8/5 | 2.35/5
Average Score (Task 2, ChatGPT allowed) | 3.0/5 | 3.0/5 | 3.15/5
Average Percentage Change | 25.0% | 28.0% | 34.0%
Overall Average Percentage Change | 56.3% | 56.8% | 55.9%
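For readers who wish to verify the summary figures, the following is a minimal sketch (our own illustration in Python, not the authors' code) that recomputes the cohort-level changes from the average scores above, under the relative-change assumption stated earlier. Case Studies 1 and 3 match the published values up to rounding; the Case Study 2 figures (85.6% and 28.0%) do not follow from the cohort averages alone, which suggests a different aggregation, such as averaging per-student changes.

```python
# Sketch: recompute the cohort-level percentage changes from the
# average rubric scores reported in the summary table.
# Assumption (ours, not stated in the paper):
#   change = (task2 - task1) / task1 * 100, computed on cohort means.

scores = {
    # case study: {rubric dimension: (Task 1 average, Task 2 average)}
    "Case Study 1": {"functionality": (1.6, 3.0), "content": (2.4, 3.0)},
    "Case Study 2": {"functionality": (1.4, 3.0), "content": (1.8, 3.0)},
    "Case Study 3": {"functionality": (1.8, 3.2), "content": (2.35, 3.15)},
}

def pct_change(before: float, after: float) -> float:
    """Relative change of the cohort average score, in percent."""
    return (after - before) / before * 100

for case, dims in scores.items():
    changes = {dim: pct_change(*pair) for dim, pair in dims.items()}
    overall = sum(changes.values()) / len(changes)  # mean of the two dimensions
    detail = ", ".join(f"{dim}: {c:.1f}%" for dim, c in changes.items())
    print(f"{case}: {detail}, overall: {overall:.1f}%")
```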