Research article

Challenges in representing information with augmented reality to support manual procedural tasks

  • Received: 31 October 2018 | Accepted: 27 January 2019 | Published: 25 February 2019
  • Support of manual procedural tasks such as repair or maintenance is one of the most promising areas for the application of augmented reality in industry. However, it is not yet fully understood how information such as work instructions or CAD models must be represented so that users are optimally supported in accomplishing this kind of task. As an approach to this research challenge, a conceptual framework for modelling information representation in augmented reality is presented here. It introduces the idea of information objects: physical and virtual objects that provide relevant information for completing the work steps of a task. These can be divided into multiple classes based on the levels of spatial connection that information in augmented reality can have. The classes are then used to identify possible sources of sensory or cognitive effort for the user that are caused by the way information objects are included in an augmented reality system, rather than by the complexity of the task to be performed. Based on these sources, information representation challenges are formulated that must be addressed when creating augmented-reality-based support systems for procedural tasks. The five identified challenges are clarity, consistency, visibility, orientation and information linking. For each of them, a detailed explanation is given and literature is collected that indicates what can be done to overcome them.

    Citation: Tobias Müller. Challenges in representing information with augmented reality to support manual procedural tasks[J]. AIMS Electronics and Electrical Engineering, 2019, 3(1): 71-97. doi: 10.3934/ElectrEng.2019.1.71
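The framework's central notion — information objects grouped by their level of spatial connection — can be sketched as a small data model. The class names and connection levels below are illustrative assumptions for this sketch, not the paper's exact taxonomy:

```python
from dataclasses import dataclass
from enum import Enum, auto

class SpatialConnection(Enum):
    """Hypothetical levels of spatial connection (illustrative only)."""
    NONE = auto()      # e.g. instruction text shown anywhere on screen
    INDIRECT = auto()  # e.g. an arrow or label pointing toward a part
    DIRECT = auto()    # e.g. a CAD overlay registered on the part itself

@dataclass
class InformationObject:
    """A physical or virtual object carrying task-relevant information."""
    name: str
    virtual: bool
    connection: SpatialConnection

# Information objects involved in one hypothetical work step
step_objects = [
    InformationObject("work instruction text", True, SpatialConnection.NONE),
    InformationObject("highlight on bolt", True, SpatialConnection.DIRECT),
    InformationObject("the bolt itself", False, SpatialConnection.DIRECT),
]

# Grouping by connection level is the kind of analysis the framework
# uses to locate sources of sensory or cognitive effort for the user.
direct = [o.name for o in step_objects
          if o.connection is SpatialConnection.DIRECT]
print(direct)
```

Such a grouping makes it possible to ask, per work step, which objects the user must link mentally (unconnected text vs. registered overlays) rather than how complex the task itself is.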






  • © 2019 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)

Figures and Tables

Figures(6)  /  Tables(5)
