Teaching American Sign Language in Mixed Reality (Article)

Shao, Q., Sniffen, A., Blanchet, J., et al. (2020). Teaching American Sign Language in Mixed Reality. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 4(4). 10.1145/3432211

cited authors

  • Shao, Q; Sniffen, A; Blanchet, J; Hillis, ME; Shi, X; Haris, TK; Liu, J; Lamberton, J; Malzkuhn, M; Quandt, LC; Mahoney, J; Kraemer, DJM; Zhou, X; Balkcom, D

abstract

  • This paper presents a holistic system to scale up the teaching and learning of vocabulary words of American Sign Language (ASL). The system leverages the most recent mixed-reality technology to allow the user to perceive her own hands in an immersive learning environment, with first- and third-person views for motion demonstration and practice. Precise motion sensing is used to record and evaluate motion, providing real-time feedback tailored to the specific learner. As part of this evaluation, learner motions are matched to features derived from the Hamburg Notation System (HNS), developed by sign-language linguists. We develop a prototype to evaluate the efficacy of mixed-reality-based interactive motion teaching. Results with 60 participants show a statistically significant improvement in learning ASL signs when using our system, in comparison to traditional desktop-based, non-interactive learning. We expect this approach to ultimately allow teaching and guided practice of thousands of signs.
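
The abstract describes matching a learner's recorded motion against reference features and returning real-time feedback, but does not detail the evaluation. The sketch below is a rough illustration only: it compares two motion feature sequences with dynamic time warping and thresholds the alignment cost. The function names, feature representation, and threshold are assumptions for illustration, not the paper's HNS-feature-based evaluation.

```python
import numpy as np

def dtw_distance(learner: np.ndarray, reference: np.ndarray) -> float:
    """Dynamic-time-warping distance between two motion feature sequences.

    Each input is a (frames, features) array, e.g. per-frame hand joint
    positions captured by a headset's hand tracker (hypothetical input).
    """
    n, m = len(learner), len(reference)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(learner[i - 1] - reference[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # skip a learner frame
                                 cost[i, j - 1],      # skip a reference frame
                                 cost[i - 1, j - 1])  # match both frames
    return cost[n, m] / (n + m)  # length-normalised alignment cost

def score_attempt(learner, reference, threshold=0.15):
    """Map alignment cost to an accept/retry signal for learner feedback.

    The threshold is an arbitrary placeholder, not a value from the paper.
    """
    cost = dtw_distance(np.asarray(learner, dtype=float),
                        np.asarray(reference, dtype=float))
    return {"cost": cost, "accepted": cost < threshold}
```

In practice such a score could drive the kind of tailored, per-sign feedback the abstract mentions, but the paper's actual method evaluates motions against linguistically derived HNS features rather than raw trajectory alignment.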

publication date

  • December 17, 2020

Digital Object Identifier (DOI)

  • 10.1145/3432211

volume

  • 4

issue

  • 4