7647 Views (as of 05/2023)
Presenters:
  • Lorna Quandt, Assistant Professor, Gallaudet University (http://www.tinyurl.com/actionbrainlab)
  • Melissa Malzkuhn, Creative Director, Motion Light Lab, Gallaudet University (http://www.motionlightlab.com)
  • Athena Willis, Graduate Student, Gallaudet University
Facilitators’ Choice

Signing Avatars & Immersive Learning (SAIL): Development and Testing of a Nov...

NSF Awards: 1839379

2019 (see original presentation & discussion)

Adult learners

Improved resources for learning American Sign Language (ASL) are in high demand. The aim of this Cyberlearning project is to investigate the feasibility of a system in which signing avatars (computer-animated virtual humans built from motion capture recordings) teach users ASL in an immersive virtual environment. The system is called Signing Avatars & Immersive Learning (SAIL). The project focuses on developing and testing this entirely novel ASL learning tool, fostering the inclusion of underrepresented minorities in STEM. 

This project leverages the cognitive neuroscience of embodied learning to test the SAIL system. Signing avatars are created from motion capture recordings of native deaf signers signing in ASL. The avatars are placed in a virtual reality landscape accessed via head-mounted goggles. Users enter the virtual reality environment, and their own movements are captured via a gesture-tracking system. A "teacher" avatar guides users through an interactive ASL lesson involving both the observation and production of signs. Users learn ASL signs from both the first-person and third-person perspectives. The inclusion of the first-person perspective may enhance the potential for embodied learning processes. Following the development of SAIL, the project involves conducting an electroencephalography (EEG) experiment to examine how the sensorimotor systems of the brain are engaged by the embodied experiences SAIL provides. The project team pioneers the integration of multiple technologies: avatars, motion capture systems, virtual reality, gesture tracking, and EEG, with the goal of making progress toward an improved tool for sign language learning.
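As a concrete illustration of the planned EEG measure: sensorimotor engagement is commonly indexed by suppression (desynchronization) of the mu rhythm, roughly 8-13 Hz over central electrodes, during action observation and production. The sketch below is a hypothetical MNE-Python analysis along those lines, not the project's actual code; the file name and trigger codes are placeholders.

# Hypothetical sketch only -- not the SAIL project's actual analysis code.
import mne

raw = mne.io.read_raw_fif("sail_session_raw.fif", preload=True)  # placeholder file
raw.filter(l_freq=1.0, h_freq=40.0)  # basic band-pass cleanup of data channels

events = mne.find_events(raw)  # assumes a standard stim/trigger channel
event_id = {"observe_sign": 1, "produce_sign": 2}  # hypothetical trigger codes

epochs = mne.Epochs(raw, events, event_id, tmin=-0.5, tmax=2.0,
                    baseline=(-0.5, 0.0), preload=True)

# Mu-band (8-13 Hz) power over C3/C4, which sit roughly over sensorimotor cortex.
for condition in event_id:
    psd = epochs[condition].compute_psd(method="welch", fmin=8.0, fmax=13.0,
                                        picks=["C3", "C4"])
    mu_power = psd.get_data().mean()  # average across epochs, channels, freqs
    print(f"{condition}: mean mu power = {mu_power:.3e}")
# Lower mu power relative to baseline suggests sensorimotor engagement.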

This video has had approximately 806 visits by 648 visitors from 213 unique locations. It has been played 431 times as of 05/2023.
Discussion from the 2019 STEM for All Video Showcase (21 posts)
  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 12, 2019 | 08:21 p.m.

    Welcome, visitors! Thank you for watching our video about the SAIL project at Gallaudet University. Our project is in the early stages, as we work towards creating and testing a proof-of-concept ASL learning experience in virtual reality. We have completed the motion capture recordings of ASL content, and are currently creating signing avatars from the recordings and building the interactive lessons. Once we have a working version of SAIL, we will conduct an EEG cognitive neuroscience experiment that will help us see how the "embodied learning" aspect of SAIL influences ASL learning. We welcome questions and comments on any aspect of this project. And again, thank you for your interest! 

  • Karen Mutch-Jones

    Facilitator
    Senior Researcher
    May 13, 2019 | 11:37 a.m.

    Even at this early stage of the project, the novel aspects of this complex learning tool are evident, and the potential for supporting (and maybe even transforming) ASL learning is notable.  Very exciting!  While I know you aren't yet ready to discuss outcomes, I wonder if you could share reactions/comments from new signers who are helping you to test SAIL.  Also, you mention that MoCap allows you to capture "impeccable data" from markers on the body.  What types of movement (or other feedback) are you paying attention to?  

  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 13, 2019 | 01:20 p.m.

    Hi Karen, thanks for the comment! We sure hope that SAIL can jumpstart a transformation of ASL learning tools. 

    We haven't had any new signers test SAIL yet, because right now we are transforming the motion capture recordings into avatar characters. Until those are created, the motion capture recordings are not user-friendly--they look like dots moving on a screen (so-called point-light displays). I am really curious to hear what our first sign-naive users think--their feedback will be very valuable to future iterations of the project.

    The motion capture recordings show native signers producing basic ASL content, much the same way Hollywood studios use motion capture to record characters' movements and facial expressions for movies. The 16 cameras capture the hand, body, finger, and face movements that are critical to ASL with extremely high accuracy. We then use those recordings to render our signing avatars. The other type of motion recording our project uses was not really highlighted in the video, although there were glimpses of it. This method is gesture tracking, in which a LEAP gesture tracker follows the SAIL user's own movements as they interact with signing avatars, allowing them to see their own sign productions alongside those of the signing avatars.
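    (As an illustration, not SAIL's actual code: one simple way a gesture-tracking pipeline could score a learner's sign production against a reference motion-capture recording is mean per-frame joint distance, assuming both trajectories have been resampled to the same length and time-aligned. The array shapes and names below are assumptions.)

    import numpy as np

    def trajectory_error(user, reference):
        # user, reference: (frames, joints, 3) position arrays, already
        # resampled to the same frame count and time-aligned.
        assert user.shape == reference.shape
        return float(np.linalg.norm(user - reference, axis=-1).mean())

    # Toy data: 60 frames, 21 hand joints (a common hand-skeleton size), xyz.
    rng = np.random.default_rng(0)
    reference = rng.normal(size=(60, 21, 3))
    attempt = reference + rng.normal(scale=0.05, size=reference.shape)
    print(f"mean joint error: {trajectory_error(attempt, reference):.3f}")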

  • Kate Meredith

    Informal Educator
    May 13, 2019 | 10:15 p.m.

    Technologically, this is absolutely fascinating! Your video is well done. Good pace. Just the right amount of information. I have been working with students and administration at the Wisconsin School for the Deaf for a number of years to address language access issues for deaf students in astronomy. Outside of some fingerspelling and a few basics, I can confidently sign "Finished? Save to desktop." So... I am wondering how you imagine your signing avatar helping me prep for class, assuming I am working with a teacher who signs.

  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 14, 2019 | 09:11 a.m.

    Hi Kate, thanks for watching, and I am happy to hear you enjoyed the video!

    We envision that in the not-too-distant future, you could take brief ASL lessons through your personal VR device, like an Oculus Rift or even a mobile-phone-based VR setup. So maybe after dinner, you'd put on your VR headset and do a few of the gamified ASL lessons. In this future iteration of SAIL, the lessons might involve points, levels, and feedback about your signing. So you could complete ASL I in VR, interacting with avatars demonstrating high-quality ASL content. That is our dream for SAIL!

  • Kristin Pederson

    STEM Director, Project Development & Communication
    May 14, 2019 | 06:10 p.m.

    Hi Lorna--

    Your project inspires me! In partnership with the Smithsonian National Museum of Natural History, we at Twin Cities PBS are producing "When Whales Walked: Journeys in Deep Time," a multi-platform project that includes a national PBS documentary and educational outreach, including a virtual reality game/experience for museum spaces. (Check out our video in this showcase!) My question for you: what have you found to be the biggest challenges--and "wins"--of creating VR content? We are longtime media producers, but this is our first foray into the medium. We've been lucky to work with wonderful partners. I am interested in hearing your thoughts about production. Thanks!

  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 15, 2019 | 07:58 a.m.

    Thanks for your interest! I am happy to hear you enjoyed the video! Great question--there are certainly a lot of challenges in creating usable VR content. So far, some of those challenges for our project involve the high fidelity we require in order to show people fluid, fluent ASL. We need very high definition on the face, arms, and every joint of the fingers to ensure that the eventual signing avatars can produce beautiful, natural ASL. Another challenge is getting all the different components of our system connected to one another. We have Oculus VR goggles and a LEAP gesture tracker, all working within a Unity-based game engine, and there are certainly challenges with making sure every element works in sync. Finally, we work really hard to make sure our avatars and VR environment feel natural and pleasant. It is easy to feel creeped out or unsettled when entering a virtual environment, and we do not want to be trying to teach people when they're feeling ill at ease. So making a smooth, natural experience is a top priority of ours. Let me know if you have any other questions. I am excited to learn more about your work!
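    (A hedged aside on the sync problem: independently clocked streams, such as gesture samples and rendered frames, are often aligned offline by nearest timestamp. The sketch below is a generic illustration, not the project's code; the sample rates and tolerance are made up.)

    import numpy as np

    def align_nearest(t_a, t_b, max_gap=0.02):
        # For each timestamp in t_a, return the index of the nearest
        # timestamp in sorted t_b, or -1 when the gap exceeds max_gap (s).
        idx = np.searchsorted(t_b, t_a)
        idx = np.clip(idx, 1, len(t_b) - 1)
        left, right = t_b[idx - 1], t_b[idx]
        nearest = np.where(np.abs(t_a - left) <= np.abs(t_a - right),
                           idx - 1, idx)
        gaps = np.abs(t_b[nearest] - t_a)
        return np.where(gaps <= max_gap, nearest, -1)

    # Toy streams: ~90 Hz gesture samples with clock jitter vs. 60 Hz frames.
    rng = np.random.default_rng(1)
    gesture_t = np.sort(np.arange(0, 1, 1 / 90) + rng.normal(0, 1e-3, 90))
    frame_t = np.arange(0, 1, 1 / 60)
    print(align_nearest(gesture_t, frame_t)[:10])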

  • Sarah Haavind

    Facilitator
    Senior Research Project Manager
    May 14, 2019 | 07:02 p.m.

    Hello Lorna and visitors,

    I have to say I am "over the moon" with the concept of this project! Congratulations on an obviously terrific start to your work in this fascinating area. I agree with Kate that your video makes it obvious how cool the technology you are adapting will be for the work, both for your design team and eventually for learners. Surely the native signers you are filming are excited to see the outcome of their investment as well. Can you tell us more about the sorts of topics for learning you have in mind beyond the ASL alphabet and counting? I am envisioning that once they have the fundamentals, your avatars might be far more flexible and customizable "teachers" than humans, who have the steep task of memorizing mixed into learning a new language. Will it be possible for avatars to become like "native" speakers more quickly than a human might? Hmmm, does time spent on programming and film editing make up for time spent memorizing? My mind is bending a little. I look forward to your thoughts.

  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 15, 2019 | 08:04 a.m.

    Hello Sarah! I'm loving your enthusiasm!

    The fingerspelling and counting were actually just demos that we used for this video :-) In the version of SAIL we're working on now, the ASL teacher will teach the user about 30 signs--things like BACON, EGGS, and MILK while standing in a kitchen environment, and SWING, PLAY, and JUGGLE while doing a games-related activity. We have structured four brief ASL lessons, each of which contains a few target ASL signs loosely grouped by theme.
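    (For concreteness, a minimal sketch of how such thematically grouped lessons might be represented. The sign lists come from the description above; the class name, environment labels, and overall structure are assumptions.)

    from dataclasses import dataclass

    @dataclass
    class Lesson:
        theme: str
        environment: str        # virtual setting in which the avatar teaches
        target_signs: list[str]

    # Two of the four lessons, using the signs mentioned above; the remaining
    # lessons and the environment labels are placeholders.
    LESSONS = [
        Lesson("food", "kitchen", ["BACON", "EGGS", "MILK"]),
        Lesson("games", "game room", ["SWING", "PLAY", "JUGGLE"]),
    ]

    for lesson in LESSONS:
        print(f"{lesson.theme}: {', '.join(lesson.target_signs)} "
              f"(taught in the {lesson.environment})")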

    Our avatars currently produce ASL that has been pre-recorded from native signers in motion capture suits, but in the future, we hope to use the avatars to piece together signs and produce new content, sort of like you're describing. Further out, an avatar could use machine learning to draw upon its knowledge base and produce new content in an adaptive fashion, much as Alexa and Siri respond to questions using their knowledge bases. Hope that helps you envision the future of our project! It sure is wild to think of what this type of technology may be able to do someday, in the hopefully-not-too-distant future!

  • Sarah Haavind

    Facilitator
    Senior Research Project Manager
    May 15, 2019 | 10:10 p.m.

    Oh my gosh, you must be so excited to get up and go to work every morning!! :-D It IS wild to envision, I agree--almost as good as envisioning this type of conversation on the new Internet back in the 90s, haha. I appreciate all the time you are taking to respond to our questions and share more about the work. Congratulations!

  • Perla Myers

    Higher Ed Faculty
    May 15, 2019 | 01:59 p.m.

    This looks awesome! Thank you so much for sharing your work! Congratulations!!! The possibilities of this for the future seem amazing.

  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 15, 2019 | 03:41 p.m.

    I am so glad you enjoyed the video! The future of learning is indeed an exciting thing to envision!

  • Amy Pate

    Manager of Instructional Design
    May 16, 2019 | 12:19 p.m.

    I'm looking at how avatars in simulations can be more inclusive, and we've already looked into accessibility issues with simulations, so your concept and research with ASL avatars is really exciting. I'm looking forward to hearing more about your work in the future.

  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 17, 2019 | 09:02 a.m.

    Thanks, Amy! Signing avatars hold so much potential to increase the inclusivity and accessibility of these new technologies. I will make sure to check out your video.

  • Sarah Haavind

    Facilitator
    Senior Research Project Manager
    May 16, 2019 | 07:22 p.m.

    One more note - the musical background to your video is also delightful. Thank you for such an immersive 2-D experience! :)

  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 17, 2019 | 09:02 a.m.

    I am glad you enjoyed it!

  • Daryl Pfeif

    Researcher
    May 19, 2019 | 09:34 p.m.

    What a wonderful and exciting use of technology across disciplines! LOVE IT!!

  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 20, 2019 | 10:05 a.m.

    Thanks so much!

  • Diana Bairakatrova

    Higher Ed Faculty
    May 20, 2019 | 11:22 a.m.

    Hi Lorna! Your project is fascinating and well done! I am curious how other institutions that offer ASL learning can get access to SAIL.

    Thank you!

  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 20, 2019 | 11:27 a.m.

    Thanks so much, Diana! At this time, SAIL is being developed and we are working towards a proof of concept--showing that all these systems can work together and provide a learning experience for users. Once we have created the proof of concept, we certainly hope to continue to improve and develop SAIL, and in the long term, we would love to share the SAIL system with schools, individual users, and the Deaf and ASL-learning communities. Maybe in the future people can download SAIL onto their personal VR systems at home! 

  • Diana Bairakatrova

    Higher Ed Faculty
    May 20, 2019 | 11:42 a.m.

    This is great! Thank you!

  • Further posting is closed as the event has ended.