
Graphs are highly prevalent as a form of quantitative data across science, technology, engineering and mathematics fields. Graphical literacy is therefore essential for understanding today's world and for being scientifically literate. However, students often face difficulties in graph interpretation and differ substantially in their graphical literacy. While many teachers are aware of students' difficulties in answering graph items, little is known about how students actually go about attempting them. In this exploratory study, we investigated the eye-gaze patterns of experts and novices as they interpreted five inference-based multiple-choice science graph items that required no prior content knowledge to solve. The experts were university science faculty members currently teaching science content courses to undergraduates; the novices were university undergraduates majoring in a science subject. Participants' eye-gaze movements were recorded using the Dikablis eye-tracker, and their eye-gaze patterns and total glance times (s) were analyzed using the software D-Lab 3.0. Experts focused more on the question stem, whereas novices focused more on the graph. Additionally, experts tended to focus on contextual and graph data features first before moving to cues such as the options, whereas novices demonstrated more sporadic search patterns. The findings contribute to the literature comparing how experts and novices problem-solve inference-based graph items, and they suggest a future study on the relationship between eye-gaze patterns and answer accuracy. The study also provides a set of heuristics for the teaching and learning of graph interpretation. The findings have implications for how teachers scaffold students' approaches to answering graph items, and students can employ the heuristics to answer such items more effectively.
Citation: Tang Wee Teo, Zi Qi Peh. An exploratory study on eye-gaze patterns of experts and novices of science inference graph items[J]. STEM Education, 2023, 3(3): 205-229. doi: 10.3934/steme.2023013
Graphical displays of data are ubiquitous in today's society and can be found widely in newspapers, television, science, engineering and education [5,18]. Graphs are an alternative form of data representation and may aid visualization through the formation of a mental image of the data, resulting in a better understanding and comparison of data [45]. Graphs are typically used to represent mathematical functions, present data from the sciences, and describe scientific theories [40]. In science, graphs summarize complex information or relationships and are therefore common in formal scientific communication, for instance in research articles. Further, graphical representation is highly prevalent in the scientific field as it eases the understanding of quantitative information and can be useful in depicting a quantitative or scientific concept [40]. The organization of data into graphs and tables is an integral method of data representation for deducing relationships between variables [18]. It also aids in understanding data and can often reveal what remains unseen in other representations, such as verbal descriptions [8] or scientific textbooks [18]. As such, reading and interpreting graphical data is pivotal in science, technology, engineering, and mathematics learning, where graphical literacy is required of students in their research and coursework [2].
The use of graphs as a form of data representation is also highly prevalent in daily life; hence, graphical literacy is a central skill in today's information age [18]. Graphical literacy refers to the ability to construct, produce, present, read and interpret charts, maps, graphs, and other visual presentations and graphical inscriptions [37]. According to dual-coding theory, information is retained and retrieved with greater ease when coded in both verbal and visual forms [32]. Given the prevalence of graphs as a form of quantitative data [38,39], graphical literacy is especially important for understanding today's world and being scientifically literate.
In a previous study [48], Teo and Goh reported on five graph items in a set of science inference tests designed to investigate middle school students' inference abilities. These items assess students' science inference abilities without requiring prior science knowledge. One interesting finding inferred from the Rasch analysis of the students' responses is that students experienced difficulty with graph items that contain more than one set of data (e.g., multiple lines or curves) or that require them to project beyond the given information (e.g., beyond the graph axes) to make estimations. However, the authors could not conduct a follow-up with the school students during the pandemic to find out how they problem-solved the items and what difficulties they faced.
Nonetheless, the intriguing results inspired this follow-up exploratory study with university faculty members responsible for teaching undergraduates majoring in science, and with undergraduates who were on track to become school science teachers upon graduation from their four-year degree program. We recruited both groups of participants, forming the "expert" and "novice" groups. It should be underscored that the experts' content knowledge is irrelevant to answering the questions, as these are designed to measure science inference skills. The university faculty members are considered experts owing to their experience in dealing with the graphical data required in their research work.
To gain more insight into the processes and outcomes of problem-solving, we harnessed the affordances of eye-tracking technology to conduct an exploratory study that investigated how the experts and novices went about this process based on their eye movement patterns. In this paper, we discuss the findings on the eye-gaze patterns of experts and novices in interpreting all five multiple-choice items with graphs. The eye-gaze patterns and total glance time for each participant are examined. In studying how experts and novices attempt the graphical items, this study aims to identify a set of heuristics for answering them, which teachers could adopt when coaching students to provide better scaffolding. The findings have implications for science teachers in designing graph items and in teaching students how to approach problem-solving graph items.
In what follows, we discuss the literature on the difficulties faced by students when interpreting graphs, which helps to justify this study. Extensive reviews of eye-tracking studies have already been conducted [1,38,42,44]; hence, we briefly explain the technology but focus more on its applications in understanding how experts and novices decipher graphs differently, so that we may distil useful strategies for teaching and learning.
Previous studies have reported on students' difficulty in problem-solving graph items [18,30]. Students experience difficulties in graph reading and interpretation, graph construction, and graph evaluation [30]. Glazer [18] found that college students often struggle to understand and use graphical data. Interpreting graphical data as a picture is one of the most common cognitive errors in graph interpretation; it occurs when graphs are viewed as literal depictions of situations rather than as abstract quantitative data [18]. Other difficulties in graph interpretation include confusing the slope and the height [22], confusing an interval with a point, perceiving the graph as a collection of discrete points [25], focusing on x-y trends only [41], and difficulties due to the amount of information presented in the graph, its format, inappropriate visual features, or teachers' expertise [6]. Teachers' expertise might pose a barrier to implementing meaningful practice in graphing competence [18].
To help students in graph interpretation, it is crucial that teachers are well equipped with the relevant skills and strategies [21,46]. However, studies have reported that teachers lack graph interpretation competencies [6,11,33]. Bowen and Roth [6] investigated the responses of preservice elementary and secondary science teachers to data and graph interpretation tasks and suggested that preservice teachers are unprepared to teach data collection and analysis, and that more experience in data and graph interpretation practices is required. Similarly, Patahuddin and Lowrie [33] illustrated that most middle school teachers have difficulty answering questions requiring reading beyond the data; such questions are more cognitively demanding and require a competent conceptual understanding. This concurs with Çil and Kar [11], who revealed that while preservice science teachers can read values and trends in graphs, they are not proficient at the higher levels of graph interpretation. As aforementioned, teachers' expertise poses one of the barriers to students' graph interpretation competencies, yet teachers may not themselves have the skills required to teach graph interpretation. Hence, to aid students in graph interpretation, it is crucial that teachers acquire the relevant skills and strategies.
Studies comparing problem-solving by experts and novices have shown differences in how the two groups solve problems [7,10,19]. In a study on information problem solving by experts and novices, experts tended to spend more time on the task and activated their prior knowledge more often than novices [7]. Further, Harsh et al. [19] revealed that the level of science expertise directly impacts how individuals direct their attention when completing graph-based tasks. More specifically, experts tend to focus on contextual and graph data features initially before moving to cues such as prompts and provided answers, whereas novices demonstrate more sporadic search patterns that oscillate between task-based cues and other image elements. Additionally, experts tend to focus more on contextual elements such as the title and axes that may inform their understanding of the image and graph data, whereas novices tend to focus on and rely on cues such as the provided answer options and question prompts. Notably, no interviews were conducted with the participants about their thought processes, limiting the conclusions that can be drawn about what they think and see. In our study, the research participants were interviewed about their thought processes after answering the graphical multiple-choice items.
Gegenfurtner et al. [17] employed eye-tracking technology to examine expertise differences in the comprehension of visualizations and revealed that experts, compared with non-experts, tend to have shorter fixation durations, more fixations on task-relevant areas, and fewer fixations on task-redundant areas. Tsai et al. [50] employed eye-tracking technology to examine students' visual attention when solving a multiple-choice science problem, highlighting that when solving an image-based multiple-choice science problem, students generally focus more on chosen options than on rejected alternatives and spend more time analyzing relevant factors than irrelevant ones. Further, those who answered the question correctly focused more on relevant factors, whereas students who answered incorrectly faced difficulties in decoding the problem, recognizing the relevant factors, and self-regulating their concentration. Both studies reveal that task-relevant areas are the most important aspect of comprehending visualizations, and both underscore how eye tracking can be a method for examining how learners comprehend visuals such as graphs. Thus, in the current study, eye-tracking technology was employed to examine how experts and novices attempted five graphical science inference-based multiple-choice items.
This study is premised on the assumption that experts and novices interpret graphical items differently [38]. This assumption is also backed by several other eye-tracking studies comparing experts and novices in aviation [53], concept mapping [14] and surgical procedures [28], where making observations (e.g., noticing changes in weather conditions outside the aircraft, identifying connections between different concepts, and observing changes to different parts of an organ) and focusing on relevant information (e.g., weather changes, related concepts, and critical veins connected to the organs) are enacted in interpreting and problem-solving graphs. Additionally, Yen et al.'s [52] study has shown differences in how science and non-science students solve scientific graph problems. A recent review [38] of 32 articles comparing the gaze behaviors of experts and non-experts reveals consistent arguments for the importance of unpacking the differences between experts and novices, so that effective strategies undertaken by experts can be identified and shared to hone the practices of novice learners.
While there is extensive literature comparing experts and novices in graph problem-solving, our work contributes to the current literature in two ways. First, it extends the work of Tan et al. [47], who unpacked the characteristics of science inference items, including the five graph items examined here, that were relatively harder or easier for students to answer. However, the earlier study [47] did not shed light on how students solve the science inference graph items, nor did it inform readers of ways to support students in problem-solving such items. Thus, the findings and the heuristics discussed at the end of this paper have practical significance for teachers, especially in honing their pedagogical practices. This is supported by studies showing how eye trackers, when applied to medical training, improve the learning of surgeons and radiologists [3,31]. Second, this study contributes to the literature on eye-tracking studies of graph items by including science inference-based graph items in the analysis. While several studies have examined graphs, it is often not explicitly reported whether the items evoke prior content knowledge during problem-solving. In this paper, the five graph items analyzed are inference-based items for which prior content knowledge is not required. As such, prior knowledge is unlikely to interfere with the eye-gaze patterns studied, as the participants answer based on the data given.
Eye-tracking technology has been used widely in various fields, including education studies, cognitive psychology and educational psychology. The application of this technology rests on the eye-mind hypothesis, which posits that the fixation of the human gaze is closely related to what the mind processes [23]. However, we acknowledge that this hypothesis is not subscribed to by all researchers doing eye-tracking research [43]. The use of eye-tracking technology in research reveals underlying cognitive mechanisms, thus contributing to the understanding of information processing [36]. Further, the data obtained from eye tracking provide information on the allocation of visual attention during the execution of the primary task [4].
Eye-tracking technology is less intrusive than data collection methods such as thinking-aloud interviews. Although thinking aloud while attempting the graphical items may be an effective way of providing a rich source of data on thought processes, it may alter the thinking process itself, since cognitive resources are redirected from the execution of the primary task [51]. In contrast, eye-tracking technology does not disrupt the execution of the primary task, which in this research is attempting the graphical items.
However, as aforementioned, eye-tracking technology rests on the premise of the eye-mind hypothesis [23] and assumes that eye movements indicate internal cognitive processes. In addition, eye-tracking data may be limited in that they do not reveal the success or failure of comprehending a particular piece of information [20]; the time spent attending to a particular feature does not equate to adequate comprehension of the underlying principle it denotes [13]. Hence, the data derived from eye-tracking technology need to be complemented with other forms of data, such as the interviews about participants' thought processes employed in this study.
This study aims to address the following research question: What are the differences (if any) in the eye-gaze patterns of experts and novices when they problem-solve graph items? The findings will provide insights into how expertise levels affect the strategies employed in graph interpretation. Given how experts and novices differ in the way they solve problems, it is postulated that different expertise levels will affect the strategies employed in graph interpretation. The differences in the eye-gaze patterns of experts and novices, together with the thought processes gathered, can aid in understanding how novices and experts tackle graphical items, from which a set of heuristics can be derived. Heuristics are approaches to problem-solving that provide strategies for decision-making [12]. With a set of heuristics for graph interpretation, students can adopt them as general guidelines to tackle graphical items more effectively.
The research participants comprised three university science faculty members ("experts") and three science undergraduates ("novices") from a university in Singapore. All three science faculty members had attained a PhD and had over 15 years of teaching experience at the time of the study. In the context of this study, the science faculty members were regarded as content experts, whereas the science undergraduates were regarded as novices, based on their qualifications and years of experience in the field of science and science education. The participants were recruited through personal contacts. All of them participated voluntarily and gave their consent prior to the data collection.
The Dikablis eye-tracker was used to record the scene and gaze patterns of the participants, and the same eye-tracker was used with all six participants. To ensure the quality of the data, the eye-tracker was calibrated for each participant before data collection, and participants were explicitly requested not to move their heads after calibration so as to preserve its accuracy. Each participant was tasked to determine the answers to the graphical multiple-choice items projected onto a monitor, seated approximately 50 centimeters in front of it. There were five graphical multiple-choice items to be answered, administered on a computer with no time limit imposed. Following this, participants were interviewed about their thought processes when answering the items; the interviews were audio-recorded and subsequently transcribed. Each participation lasted one hour, and data collection was conducted individually in a room (Figure 1). Research ethics approval and participants' consent were obtained prior to the start of the study.
The five graphical multiple-choice items were obtained from the science inference test instrument constructed by Tan et al. [47], who constructed five graphical multiple-choice items out of 35 items to measure science inference skills. This meant that limited or no prior content knowledge was required to answer the questions correctly. Three of the graphical multiple-choice items were located above the item mean difficulty, whereas two were located below it. Items located below the item mean difficulty were found easier by test-takers, whereas items located above it were found more challenging. In this study, we aimed to obtain a deeper understanding of how the participants went about answering the graphical multiple-choice items before deciding on an answer.
Most studies involving graph literacy have focused on the cognitive and perceptual processes involved in extracting specific information from graphs [35]. According to empirical studies that drew upon task-analytic theories, simple graphs such as bar and line graphs are typically processed in the following stages: (a) pattern recognition, (b) determination of conceptual relations, and (c) encoding of visual features such as the referents of the graphs [27,29]. To elaborate, an individual who attempts to problem-solve a graph will read parts of the question multiple times and then search for specific information on the graph, during which they will shift between the axes and the main parts of the graph. Once the information is found, multiple shifts (saccades) will occur between the main part of the graph and the legend, and this information will be stored in memory before the question is answered [9,34,49]. The literature suggests that an important starting point is to identify key areas in each graph item for analysis. We drew upon this information to identify the areas of interest (AOIs) in our analysis below.
Each graph item was divided into AOIs, namely Title, Y-axis, X-axis, Question Stem, Options and Graph (Figure 1). In Figure 1, the title AOI includes the title; the y-axis and x-axis AOIs include the y-axis and its label, and the x-axis and its label, respectively; the graph AOI includes all the data points and lines; the question stem AOI includes the question; and the options AOI includes all the options (A, B, C and D) from which the research participants could choose the best answer. The AOIs were defined in the same way for all five graphical multiple-choice items. The eye-gaze patterns and total glance time (s) were analyzed using the software D-Lab 3.0. Total glance time (s) is defined as the accumulated glance duration in the direction of an AOI over the selected time interval. For instance, a total glance time of 40 seconds in the direction of the y-axis AOI implies that the participant spent 40 seconds focusing their attention on the y-axis AOI.
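Although the glance-time aggregation in this study was performed in D-Lab 3.0, the following minimal sketch illustrates the underlying computation for readers who wish to reproduce it from an exported fixation log. The column names (participant, question, aoi, duration_ms) and the file name are assumptions about the export format, not the actual D-Lab schema.

```python
# Minimal sketch (assumed export format, not the D-Lab 3.0 schema):
# compute total glance time (s) per AOI for each participant and question
# from a fixation log with columns: participant, question, aoi, duration_ms.
import csv
from collections import defaultdict

def total_glance_time(csv_path: str) -> dict:
    """Return {(participant, question, aoi): accumulated glance time in seconds}."""
    totals = defaultdict(float)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["participant"], row["question"], row["aoi"])
            totals[key] += float(row["duration_ms"]) / 1000.0  # ms -> s
    return dict(totals)

if __name__ == "__main__":
    # "fixations.csv" is a hypothetical file name for the exported log.
    for key, seconds in sorted(total_glance_time("fixations.csv").items()):
        print(key, round(seconds, 1))
```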
To compare the task performance of experts and novices, the number of participants providing the correct answer for each question was compared. Table 1 shows that the numbers of correct answers for experts and novices were similar. Question 5 had the fewest correct answers, with one of the three experts and two of the three novices answering it correctly. Question 5 required the participants to recognize a pattern change and extend beyond the given information, which could have posed a challenge to both experts and novices. This concurs with previous studies highlighting that graphical items requiring reading beyond the data are more cognitively demanding [11,16,33]; thus, a higher number of incorrect answers was expected for Question 5. Additionally, when Question 5 was administered to 1,397 Grade 7 students in Singapore, it was located above the item mean difficulty, meaning that most of the students found it difficult [47]. This points to a possible future study on the relationship between the eye-gaze patterns of experts and novices and the accuracy of their problem-solving.
Table 1. Number of correct answers for each question.

Question | Experts (N = 3) | Novices (N = 3)
1 | 3 | 3
2 | 3 | 3
3 | 3 | 3
4 | 2 | 3
5 | 1 | 2
Interestingly, when the research participants were asked which question was the most difficult to answer, most of them highlighted that Questions 3 and 4 were the most difficult, as these questions involved the interpretations of more than one curve. Below are excerpts from their interviews explaining their reasoning: "Generally, graphs with more than one line are more challenging because you have to tease out the different information." (Excerpt 1) and "Question 3 and 4 would be the more difficult ones because there are more lines… there are more trends to observe." (Excerpt 2). Both expert and novice participants similarly pointed out that graphs with more than one curve were more challenging. Graphs with more than one curve were more challenging as participants had to manage more information [6,47].
However, from Table 1, it could be seen that all participants answered Question 3 correctly and only one participant answered Question 4 incorrectly. This could be due to effective strategies employed when tackling such questions, which helped ensure the accuracy of their answers. When participants were asked what strategies they used when tackling these questions, it appeared that most of them placed great emphasis on both the question stem and the graph itself (Excerpts 3 and 4):
Excerpt 3. Response by expert participant A
"You have to read the question first, and then you look at the graph and then you try to answer the question. It takes a while to interpret the graph because there are many lines."
Excerpt 4. Response by expert participant C
"You have to start with understanding the question, what the question is specifically asking for because sometimes the phrasing of the question is not clear, so you need to read several times before you understand what the question is asking for. The second strategy would be looking at the graph and trying to understand the graph, which entails the understanding of the axis… The main thing is to understand what the question is about."
Both Excerpts 3 and 4 highlighted that, to tackle questions involving the interpretation of more than one curve, the two most important aspects were the question stem and the graph itself. Understanding the question stem set the context and allowed participants to distil the important information to focus on; following this, participants had a better idea of how to interpret the graph. Our finding concurred with a previous study on the visual attention of students when solving graphical items [26], which revealed that students with low confidence in answering the question correctly spent a longer time on the question stem than high-confidence students. This highlighted the importance of understanding the question stem when solving more challenging questions. This key finding about the strategies employed in answering more challenging graphical items aided the derivation of the heuristics presented later in this paper.
The mean time taken by experts and novices to answer each question was analyzed, and our results revealed that, in general, experts spent more time on each question than novices (Table 2). This concurred with previous findings [7] highlighting that experts tend to spend more time on a task than novices. A possible reason for the longer time spent on each question could be that experts spent considerable time on the question stem, as seen in Figures 2, 4, 7, 8 and 10 (below), which showed that experts generally focused most on the question stem AOI. As seen in Excerpts 3 and 4, experts placed great emphasis on the question stem, as understanding it set the context of the graphical item and allowed them to distil the important aspects of the graph to attend to.
Table 2. Mean time taken (s) to answer each question.

Question | Experts (N = 3) | Novices (N = 3)
1 | 40.2 | 33.0
2 | 39.7 | 29.3
3 | 113 | 74.5
4 | 82.2 | 54.9
5 | 78.6 | 45.2
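As an illustration only, the sketch below shows how such group means could be computed from per-participant completion times. The nested-dictionary layout and the numbers in the usage example are placeholders, not the study's raw data.

```python
# Sketch: mean time taken (s) per question for each group, assuming the
# per-participant completion times are organized as {group: {question: [seconds, ...]}}.
from statistics import mean

def mean_times(times_by_group: dict) -> dict:
    """Return {group: {question: mean completion time in seconds, rounded to 0.1 s}}."""
    return {
        group: {q: round(mean(times), 1) for q, times in by_question.items()}
        for group, by_question in times_by_group.items()
    }

# Hypothetical usage with placeholder values (not the study's raw data):
example = {
    "experts": {1: [38.0, 41.5, 41.1]},
    "novices": {1: [30.2, 34.1, 34.7]},
}
print(mean_times(example))  # {'experts': {1: 40.2}, 'novices': {1: 33.0}}
```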
The mean total glance times of experts and novices were compared for each question, both for all responses and excluding incorrect answers. The mean total glance time reveals the amount of time participants spent on each AOI, indicating which portion of the item participants spent the most time looking at.
The summary results for the mean total glance time for each AOI of the five graphical multiple-choice items for both experts and novices are shown in the following figures. Figure 2 showed that for Question 1, experts focused more on the question stem AOI whereas novices focused more on the graph AOI.
As compared to experts, novices spent less time on the question stem AOI for Question 1. Further, as compared to novices, experts spent less time on the graph AOI for Question 1. This could be due to the nature of Question 1. Question 1 (Figure 3) was a relatively easy question with only one line in the graph. Additionally, the answers could be obtained by a simple subtraction of two respective values [47]. Thus, experts did not require much time to interpret the less complex graph. Instead, more time was spent on analyzing the question stem. As aforementioned, experts viewed the question stem as one of the most important aspects of a graphical item, as it set the context and allowed them to distil the important aspects of the graph to focus on.
Figure 4 showed that for Question 2, both experts and novices focused most on the question stem AOI. Generally, the time spent on the different AOIs was similar for experts and novices for Question 2. Like Question 1, Question 2 (Figure 5) was relatively easy, as it only had one line in the graph and the answer could be obtained by identifying one value in the graph [47]. Specifically, Question 2 involved the interpretation of a pH curve, with which both the science experts and science novices were familiar. This could explain why both experts and novices spent less time on the graph AOI. As aforementioned, the question stem AOI was one of the most important aspects of a graphical item, which explained why the most time was spent on it for this question.
Our results revealed that for simple graphs of which participants possessed prior knowledge, more time was spent on the question stem and less on the graph. This suggested a hierarchy of importance among the different parts of a graph item, depending on the difficulty of comprehending the graph and on whether prior knowledge was present. Rather than analyzing the graph directly, perhaps greater emphasis should be placed on the question stem, which was key to problem-solving.
In comparison to Questions 1 and 2, Question 3 (Figure 6) was more difficult, as it involved the interpretation of more than one curve [47]. Interestingly, our results highlighted that the experts and novices still focused most on the question stem AOI (Figure 7). As mentioned in Excerpts 3 and 4, more emphasis was placed on the question stem AOI for more difficult questions, which explained why the mean total glance time for the question stem AOI was the highest for both experts and novices.
For Question 4, experts focused more on the options AOI whereas novices focused more on the graph AOI (Figure 8). Interestingly, both experts and novices spent considerably less time on the question stem AOI for Questions 4 and 5 than for Questions 1, 2 and 3. This could be due to the increased complexity of the graphs. Question 4 (Figure 9) involved a graph with more than one curve and with differing trends. Additionally, the time spent on the graph AOI was much less for experts than for novices despite the difficulty of Question 4, highlighting the difference in expertise levels. It could be deduced that experts found it less difficult to interpret the graph of Question 4 despite it having four different curves.
Similar to our results for Question 4, it was observed that experts focused more on options, whereas novices focused more on the graph AOI when answering Question 5 (Figures 10, 11). Further, it was interesting to note that the total glance time for both experts and novices for the graph AOI was almost the same (Figure 10).
The AOI that the experts and novices focused on most for each question, based on the mean total glance time (s), is summarized in Table 3. From Table 3, it could be concluded that, in general, experts focused more on the question stem AOI whereas novices focused more on the graph AOI. This was in contrast to a previous study [19], which reported that experts tend to focus more on contextual elements such as the title and axes that may inform their understanding of the image and graph data, whereas novices tend to focus on and rely on cues such as the options and question stem. Our result could be explained by the theory of long-term working memory [15], which posits that advanced learners can encode information in long-term memory faster and access it efficiently for later task operations. Additionally, this agreed with the problem-solving literature [7], which highlighted that experts can activate their prior knowledge more often than novices. As such, experts, who work frequently with graphical data, may have spent less time understanding the data presented in the graphical item; instead, more time was spent analyzing the question stem in light of the data they had already understood. This was especially so for Questions 1 and 2, which were relatively easier: Question 1 involved a simple calculation whereas Question 2 involved identifying a specific number on the graph. In comparison, novices might have required more time to process the data presented in the graphical item, and thus spent the most time on the graph AOI.
Table 3. AOI with the highest mean total glance time for each question.

Question | Experts (N = 3) | Novices (N = 3)
1 | Question stem | Graph
2 | Question stem | Question stem
3 | Question stem | Question stem
4 | Options | Graph
5 | Options | Graph
From Table 1, it could be seen that there was one incorrect answer for Question 4 out of the three experts, two incorrect answers for Question 5 out of the three experts, and one incorrect answer for Question 5 out of the three novices. The summary results for the mean total glance time (s) for each AOI of Questions 4 and 5 by participants who answered correctly were shown in Figures 12 and 13.
Figure 12 showed that experts who answered Question 4 correctly spent a significantly large amount of time on the options AOI, similar to the data shown in Figure 8. Figure 13 showed that experts who answered Question 5 correctly spent a significantly large amount of time on the y-axis AOI, in contrast to Figure 10, where experts focused most on the options AOI for Question 5. Further, Figure 13 showed that novices who answered Question 5 correctly spent a large amount of time on the options AOI, in contrast to Figure 10, which showed that novices focused most on the graph AOI for Question 5.
The differences observed could be due to the nature of Question 5. As aforementioned, Question 5 required participants to recognize a pattern change and to extend beyond the given information. Since the question involved predicting the temperature in the year 2011, it was likely that greater emphasis was placed on the y-axis, which also represented the year. This could explain why experts who answered Question 5 correctly spent a large amount of time on the y-axis AOI.
On the other hand, novices who answered Question 5 correctly spent a significantly large amount of time on the options AOI. Since Question 5 required participants to extend beyond the given information, novices likely spent more time choosing between the available options and determining which provided the best possible answer. While the graph AOI was important as it provided the general trend, difficulty in interpreting the graph could lead to an incorrect answer. The mean total glance time novices spent on the graph AOI for Question 5 was the highest compared with the other AOIs (Figure 10), which could also reflect the difficulties novices faced in interpreting the trend for Question 5. Given this, relying on cues such as the options might instead have aided novices in choosing the best answer.
After examining the eye-gaze patterns of each participant for the five graphical multiple-choice items, a general pattern was found for the experts (Figure 14). In Figure 14, the line represents the eye movement, the numbers indicate the order, and the dot size represents the time spent fixated at each point. The first AOI that the experts noticed was the title AOI. This was followed by the y-axis, x-axis, question stem and graph AOIs, in no particular order. In general, the experts took notice of the options AOI last. For novices (Figure 15), it was noted that, in general, they also noticed the title AOI first. In Figure 15, the line likewise represents the eye movement, the numbers indicate the order, and the dot size represents the time spent fixated at each point. Following this, there was no clear trend in how novices continued to analyze the graphical item.
This finding corresponded with previous studies [19], which showed that experts initially focused on contextual and graph data features, before moving to cues such as prompts and provided answers. On the other hand, novices demonstrated more sporadic search patterns that oscillated between task-based cues and other image elements.
As Question 5 had the greatest number of incorrect answers out of the five questions, the eye-gaze patterns for participants who answered Question 5 incorrectly were examined. From the eye-gaze patterns, there was no clear trend in the way the three participants analyzed Question 5.
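The visit orders described above were identified from the eye-gaze visualizations (Figures 14 and 15). As a rough sketch of how such orders could also be derived programmatically, the functions below collapse consecutive fixations on the same AOI into single visits and report the order in which AOIs were first attended; the input format, a time-ordered list of AOI labels per trial, is an assumption rather than the D-Lab output.

```python
# Sketch: derive AOI visit order from a time-ordered sequence of AOI labels
# (one label per fixation) for a single participant and item.
from itertools import groupby

def aoi_visits(fixation_aois: list) -> list:
    """Collapse consecutive fixations on the same AOI into single visits."""
    return [aoi for aoi, _ in groupby(fixation_aois)]

def first_attended_order(fixation_aois: list) -> list:
    """Return each AOI in the order in which it was first fixated."""
    seen, order = set(), []
    for aoi in fixation_aois:
        if aoi not in seen:
            seen.add(aoi)
            order.append(aoi)
    return order

# Hypothetical expert-like sequence (illustrative only):
seq = ["title", "y-axis", "x-axis", "question_stem", "graph",
       "graph", "question_stem", "options"]
print(aoi_visits(seq))            # visits with repeats collapsed
print(first_attended_order(seq))  # title first, options last
```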
Examining how experts and novices interpreted graphical items has deepened our understanding of the skills required to answer such items correctly. The findings of our study could be used to identify a set of heuristics for answering graphical items effectively. The practical implication of this study is that it affords such a set of heuristics, which teachers can use to scaffold students' problem-solving of graph questions and students can use adaptively to answer graph questions in a more structured manner.
This set of heuristics was derived from the eye-gaze patterns of both experts and novices. Based on our results, the heuristics are structured according to: (1) the order of priority for the different components of a graph and (2) problem solving. Table 4 presents an overview of the set of heuristics for answering graphical items effectively. The order of priority for the different components of a graph is encompassed in steps 1 and 2, whereas the heuristics for problem solving are encompassed in steps 3, 4 and 5.
Table 4. Heuristics for answering graphical items effectively.

1. Direct the attention towards graphical information, which includes the title, y-axis and x-axis.
2. Analyze the graph and identify any visible trend.
3. Focus on the question stem. This should be the main focus when answering graphical items.
4. When there is a possible answer in mind, look at the options available and choose the best possible option.
5. If there is difficulty in interpreting the graph, the emphasis should be on the options available.
The first step in answering graphical items is to direct attention towards graph information such as the title, y-axis and x-axis. This reflects the eye-gaze patterns of the experts and novices who answered the graphical multiple-choice items correctly: both experts (Figure 14) and novices (Figure 15) noticed the title AOI first, and looking at the title AOI first sets the context for the graphical item. The second step involves analyzing the graph and identifying any visible trend. After this, the focus should be on the question stem. This should be the main focus when answering graphical multiple-choice items, as it presents the issue the question is asking about and helps to narrow down the focus on the graph. This reflects the strategies used by experts and novices when answering the more difficult questions: Excerpts 3 and 4 highlight that emphasis on the question stem is an effective strategy for tackling more difficult questions, and Table 3 shows that experts generally focused on the question stem most. Once the graph analysis is done and a possible answer is in mind, the fourth step is to look at the available options. This reflects the experts' eye-gaze patterns (Figure 14), which show that experts moved on to cues such as the options last, after analyzing the contextual and graph data features. If there is difficulty in interpreting the graph, the emphasis should be on the options, as multiple-choice items are about choosing the best possible answer [24]. This is reflective of how novices tackled more challenging questions such as Question 5 (Figure 13): novices focused more on the available options when faced with difficulties in graph interpretation.
This paper has reported on a study that adopted eye-tracking technology to shed deeper insight into how participants with different levels of content knowledge and teaching experience attempt graph items. The findings build on a previous study by Teo and Goh [48], who identified a set of graph items and their levels of difficulty as determined by more than 1,000 Grade 7 students. While the participants of this study were not Grade 7 students, because of data collection constraints during the COVID-19 pandemic, the findings have helped us to infer the different approaches undertaken by experts and novices when solving these science inference items, which require limited prior content knowledge. The difference between the expert and novice strategies has illuminated the steps that students could undertake to problem-solve graph items in a more systematic and effective manner. The heuristics derived from the findings could be used to scaffold the bridging of the expert-novice gap.
While the study has generated a set of useful heuristics, we are mindful that this exploratory study was conducted with a small sample size. As such, our findings may be influenced by sampling error, which could explain some of the disagreements with previous studies noted above. Future studies could therefore collect data from a larger sample for more comprehensive research. While we earlier cited studies from medical and aerospace education showing how expert knowledge and practices have been successfully taught to students and interns, the heuristics here are suggested as simple guides for teachers and can only be validated if put into use and researched.
Additionally, this study examined only the eye-gaze patterns of science professors (experts) and science undergraduates (novices). Science undergraduates, however, still possess a certain level of expertise compared with secondary school students. As such, future studies (if the situation permits) could involve collecting data from secondary school students and teachers as well, allowing for more comprehensive and relevant research.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
There is no conflict of interest.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NIE. NTU IRB approval was obtained prior to the data collection (IRB-2019-09-005).
[1] Alemdag, E. and Cagiltay, K., A systematic review of eye tracking research on multimedia learning. Computers & Education, 2018, 125: 413–428. https://doi.org/10.1016/j.compedu.2018.06.023
[2] Angra, A. and Gardner, S.M., Reflecting on graphs: Attributes of graph choice and construction practices in biology. CBE Life Sciences Education, 2017, 16(3). https://doi.org/10.1187/cbe.16-08-0245
[3] Ashraf, H., Sodergren, M.H., Merali, N., Mylonas, G., Singh, H. and Darzi, A., Eye-tracking technology in medical education: A systematic review. Medical Teacher, 2018, 40(1): 62–69. https://doi.org/10.1080/0142159X.2017.1391373
[4] Ariasi, N. and Mason, L., Uncovering the effect of text structure in learning from a science text: An eye-tracking study. Instructional Science, 2011, 39: 581–601. https://doi.org/10.1007/s11251-010-9142-5
[5] Blackwell, A.F., Introduction: Thinking with diagrams, in Thinking with Diagrams, A.F. Blackwell, Ed. 2001, 1–3. Springer Netherlands. https://doi.org/10.1007/978-94-017-3524-7_1
[6] Bowen, G.M. and Roth, W.-M., Data and graph interpretation practices among preservice science teachers. Journal of Research in Science Teaching, 2005, 42(10): 1063–1088. https://doi.org/10.1002/tea.20086
[7] Brand-Gruwel, S., Wopereis, I. and Vermetten, Y., Information problem solving by experts and novices: Analysis of a complex cognitive skill. Computers in Human Behavior, 2005, 21(3): 487–508. https://doi.org/10.1016/j.chb.2004.10.005
[8] Burke, M.C., A mathematician's proposal. Carnegie Perspectives. Carnegie Foundation for the Advancement of Teaching, 2007.
[9] Carpenter, P.A. and Shah, P., A model of the perceptual and conceptual processes in graph comprehension. Journal of Experimental Psychology: Applied, 1998, 4(2): 75–100. https://doi.org/10.1037/1076-898X.4.2.75
[10] Chi, M.T.H., Feltovich, P.J. and Glaser, R., Categorization and representation of physics problems by experts and novices. Cognitive Science, 1981, 5(2): 121–152. https://doi.org/10.1207/s15516709cog0502_2
[11] Çil, E. and Kar, H., Pre-service science teachers' interpretations of graphs: A cross-sectional study. Medical Science Educator, 2015, 24(1): 36–44.
[12] Dale, S., Heuristics and biases: The science of decision-making. Business Information Review, 2015, 32(2): 93–99. https://doi.org/10.1177/0266382115592536
[13] de Koning, B., Tabbers, H., Rikers, R. and Paas, F., Attention guidance in learning from a complex animation: Seeing is understanding? Learning and Instruction, 2010, 20(2): 111–122. https://doi.org/10.1016/j.learninstruc.2009.02.010
[14] Dougusoy-Taylan, B. and Cagiltay, K., Cognitive analysis of experts' and novices' concept mapping processes: An eye tracking study. Computers in Human Behavior, 2014, 36: 82–93. https://doi.org/10.1016/j.chb.2014.03.036
[15] Ericsson, K.A. and Kintsch, W., Long-term working memory. Psychological Review, 1995, 102(2): 211–245. https://doi.org/10.1037/0033-295X.102.2.211
[16] Franconeri, S.L., Alvarez, G.A. and Cavanagh, P., Flexible cognitive resources: Competitive content maps for attention and memory. Trends in Cognitive Sciences, 2013, 17(3): 134–141. https://doi.org/10.1016/j.tics.2013.01.010
[17] Gegenfurtner, A., Lehtinen, E. and Säljö, R., Expertise differences in the comprehension of visualizations: A meta-analysis of eye-tracking research in professional domains. Educational Psychology Review, 2011, 23: 523–552. https://doi.org/10.1007/s10648-011-9174-7
[18] Glazer, N., Challenges with graph interpretation: A review of the literature. Studies in Science Education, 2011, 47(2): 183–210. https://doi.org/10.1080/03057267.2011.605307
[19] Harsh, J.A., Campillo, M., Murray, C., Myers, C., Nguyen, J. and Maltese, A.V., "Seeing" data like an expert: An eye-tracking study using graphical data representations. CBE-Life Sciences Education, 2019, 18(3). https://doi.org/10.1187/cbe.18-06-0102
[20] Hyönä, J., The use of eye movements in the study of multimedia learning. Learning and Instruction, 2010, 20(2): 172–176. https://doi.org/10.1016/j.learninstruc.2009.02.013
[21] Jacobbe, T. and Horton, R.M., Elementary school teachers' comprehension of data displays. Statistics Education Research Journal, 2010, 9(1): 27–45. https://doi.org/10.52041/serj.v9i1.386
[22] Janvier, C., The notion of chronicle as an epistemological obstacle to the concept of function. The Journal of Mathematical Behavior, 1998, 17(1): 79–103. https://doi.org/10.1016/S0732-3123(99)80062-5
[23] Just, M.A. and Carpenter, P.A., A theory of reading: From eye fixations to comprehension. Psychological Review, 1980, 87(4): 329–354. https://doi.org/10.1037/0033-295X.87.4.329
[24] Kehoe, J., Writing multiple-choice test items. Practical Assessment, Research, and Evaluation, 1994, 4. https://doi.org/10.7275/s3cc-7y76
[25] Kerslake, D., Graphs, in Children's Understanding of Mathematics, K.M. Hart, Ed. 1981, 120–136. London: John Murray.
[26] Klein, P., Lichtenberger, A., Küchemann, S., Becker, S., Kekule, M., Viiri, J., et al., Visual attention while solving the test of understanding graphs in kinematics: An eye-tracking analysis. European Journal of Physics, 2020, 41(2): 025701. https://doi.org/10.1088/1361-6404/ab5f51
[27] Kosslyn, S., Understanding charts and graphs. Applied Cognitive Psychology, 1989, 3: 185–226. https://doi.org/10.1002/acp.2350030302
[28] Li, S., Duffy, M.C., Lajoie, S.P., Zheng, J. and Lachapelle, K., Using eye tracking to examine expert-novice differences during simulated surgical training: A case study. Computers in Human Behavior, 2023, 144. https://doi.org/10.1016/j.chb.2023.107720
[29] Lohse, G.L., A cognitive model for understanding graphical perception. Human Computer Interaction, 1993, 8(4): 353–388. https://doi.org/10.1207/s15327051hci0804_3
[30] Meletiou-Mavrotheris, M. and Lee, C., Investigating college-level introductory statistics students' prior knowledge of graphing. Canadian Journal of Science, Mathematics and Technology Education, 2010, 10(4): 339–355. https://doi.org/10.1080/14926156.2010.524964
[31] Merali, N., Veeramootoo, D. and Singh, S., Eye-tracking technology in surgical training. Journal of Investigative Surgery, 2019, 32(7): 587–593. https://doi.org/10.1080/08941939.2017.1404663
[32] Paivio, A., Dual coding theory: Retrospect and current status. Canadian Journal of Psychology / Revue canadienne de psychologie, 1991, 45(3): 255–287. https://doi.org/10.1037/h0084295
[33] Patahuddin, S.M. and Lowrie, T., Examining teachers' knowledge of line graph task: A case of travel task. International Journal of Science and Mathematics Education, 2019, 17(4): 781–800. https://doi.org/10.1007/s10763-018-9893-z
[34] Peebles, D. and Cheng, P.C.H., Modeling the effect of task and graphical representation on response latency in a graph reading task. Human Factors, 2003, 45(1): 28–46. https://doi.org/10.1518/hfes.45.1.28.27225
[35] Ratwani, R.M., Trafton, J.G. and Boehm-Davis, D.A., Thinking graphically: Connecting vision and cognition during graph comprehension. Journal of Experimental Psychology: Applied, 2008, 14(1): 36–49. https://doi.org/10.1037/1076-898X.14.1.36
[36] Rayner, K. and Slattery, T.J., Eye movements and moment-to-moment comprehension processes in reading, in Beyond Decoding: The Behavioral and Biological Foundations of Reading Comprehension, R.K. Wagner, C. Schatschneider and C. Phythian-Sence, Eds. 2009, 27–45. Guilford Press.
[37] Readence, J., Bean, T. and Baldwin, S., Content Area Literacy: An Integrated Approach, 2004. Kendall/Hunt.
[38] Ruf, V., Horrer, A., Berndt, M., Hofer, S.I., Fischer, F., Fischer, M.R., et al., A literature review comparing experts' and non-experts' visual processes of graphs during problem-solving and learning. Education Sciences, 2023, 13(2): 216. https://doi.org/10.3390/educsci13020216
[39] Schmid, R., Review of The Visual Display of Quantitative Information by E.R. Tufte. Taxon, 1989, 38(3): 451. https://doi.org/10.2307/1222290
[40] Shah, P. and Hoeffner, J., Review of graph comprehension research: Implications for instruction. Educational Psychology Review, 2002, 14(1): 47–69. https://doi.org/10.1023/A:1013180410169
[41] Shah, P., Mayer, R.E. and Hegarty, M., Graphs as aids to knowledge construction: Signaling techniques for guiding the process of graph comprehension. Journal of Educational Psychology, 1999, 91(4): 690–702. https://doi.org/10.1037/0022-0663.91.4.690
[42] Sharafi, Z., Soh, Z. and Guéhéneuc, Y.G., A systematic literature review on the usage of eye-tracking in software engineering. Information and Software Technology, 2015, 67: 79–107. https://doi.org/10.1016/j.infsof.2015.06.008
[43] Shvarts, A. and Abrahamson, D., Coordination dynamics of semiotic mediation: A functional dynamic systems perspective on mathematics teaching/learning. Constructivist Foundations, 2023, 18(2): 220–234.
[44] Spering, M. and Montagnini, A., Do we track what we see? Common versus independent processing for motion perception and smooth pursuit eye movements: A review. Vision Research, 2011, 51(8): 836–852. https://doi.org/10.1016/j.visres.2010.10.017
[45] Susac, A., Bubić, A., Martinjak, P., Planinic, M. and Palmovic, M., Graphical representations of data improve student understanding of measurement and uncertainty: An eye-tracking study. Physical Review Physics Education Research, 2017, 13(2): 020125. https://doi.org/10.1103/PhysRevPhysEducRes.13.020125
[46] Szyjka, S., Mumba, F. and Wise, K., Confirmatory factor analysis of the questionnaire of attitude toward statistical graphs for use in science education. Journal of Baltic Science Education, 2011, 10(4): 261–276.
[47] Tan, A.L., Teo, T.W., Choy, B.H. and Ong, Y.S., The S-T-E-M quartet. Innovation and Education, 2019, 1(3): 1–14. https://doi.org/10.1186/s42862-019-0005-x
[48] Teo, T.W. and Goh, W.P.G., Assessing lower track students' learning in science inference skills in Singapore. Asia-Pacific Science Education, 2019, 5(5): 1–19. https://doi.org/10.1186/s41029-019-0033-z
[49] Trafton, J.G., Marshall, S., Mintz, F. and Trickett, S.B., Extracting explicit and implicit information from complex visualizations, in Diagrammatic Representation and Inference, M. Hegarty, B. Meyer and H. Narayanan, Eds. 2002, 206–220. Berlin: Springer-Verlag.
[50] Tsai, M.J., Hou, H.T., Lai, M.L., Liu, W.Y. and Yang, F.Y., Visual attention for solving multiple-choice science problem: An eye-tracking analysis. Computers & Education, 2012, 58(1): 375–385. https://doi.org/10.1016/j.compedu.2011.07.012
![]() |
[51] |
Veenman, M.V.J., Van Hout-Wolters, B.H.A.M. and Afflerbach, P., Metacognition and learning: Conceptual and methodological considerations. Metacognition and Learning, 2006, 1: 3–14. https://doi.org/10.1007/s11409-006-6893-0 doi: 10.1007/s11409-006-6893-0
![]() |
[52] | Yen, M.H., Lee, C.N., Yang, Y.C., Eye movement patterns in solving scientific graph problems. Diagrammatic Representation and Inference: 7th International Conference, Diagrams 2012, Canterbury, UK, July 2-6, 2012. Proceedings 7, 2012,343–345. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-31223-6_46 |
[53] |
Ziv, G., Gaze behavior and visual attention: A review of eye tracking studies in aviation. The International Journal of Aviation Psychology, 2016, 26 (3-4): 75–104. https://doi.org/10.1080/10508414.2017.1313096 doi: 10.1080/10508414.2017.1313096
![]() |
Number of correct answers

| Question | Experts (N = 3) | Novices (N = 3) |
|---|---|---|
| 1 | 3 | 3 |
| 2 | 3 | 3 |
| 3 | 3 | 3 |
| 4 | 2 | 3 |
| 5 | 1 | 2 |
Mean time taken (s)

| Question | Experts (N = 3) | Novices (N = 3) |
|---|---|---|
| 1 | 40.2 | 33.0 |
| 2 | 39.7 | 29.3 |
| 3 | 113 | 74.5 |
| 4 | 82.2 | 54.9 |
| 5 | 78.6 | 45.2 |
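For readers who wish to reproduce summaries like the two tables above from their own recordings, the following is a minimal sketch of how per-question correct-answer counts and mean response times could be tabulated from per-participant records. The CSV layout, column names (group, question, correct, time_s) and the file name responses.csv are assumptions for illustration only, not the data pipeline used in this study.

```python
# Minimal sketch: tabulate correct answers and mean times per (group, question).
# Assumes a hypothetical CSV with columns: group ("expert"/"novice"),
# question (1-5), correct (0/1), time_s (response time in seconds).
import csv
from collections import defaultdict

def summarize(path: str):
    correct = defaultdict(int)    # (group, question) -> number of correct answers
    times = defaultdict(list)     # (group, question) -> list of response times (s)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["group"], int(row["question"]))
            correct[key] += int(row["correct"])
            times[key].append(float(row["time_s"]))
    mean_time = {k: sum(v) / len(v) for k, v in times.items()}
    return correct, mean_time

if __name__ == "__main__":
    correct, mean_time = summarize("responses.csv")  # hypothetical file name
    for q in range(1, 6):
        print(q,
              correct[("expert", q)], correct[("novice", q)],
              round(mean_time.get(("expert", q), 0.0), 1),
              round(mean_time.get(("novice", q), 0.0), 1))
```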
| Question | Experts (N = 3) | Novices (N = 3) |
|---|---|---|
| 1 | Question stem | Graph |
| 2 | Question stem | Question stem |
| 3 | Question stem | Question stem |
| 4 | Options | Graph |
| 5 | Options | Graph |
Heuristics for answering graphical items effectively

1. Direct attention towards the graphical information, including the title, y-axis and x-axis.
2. Analyze the graph and identify any visible trend.
3. Focus on the question stem; this should be the main focus when answering graphical items.
4. When a possible answer is in mind, look at the available options and choose the best one.
5. If the graph is difficult to interpret, place the emphasis on the available options.