NSF Awards: 1839379
2020 (see original presentation & discussion)
Adult learners
This video will introduce SAIL, an NSF-funded project housed within the Action & Brain Lab and Motion Light Labs at Gallaudet University. This project involves the development and testing of a new immersive American Sign Language learning environment, which aims to teach non-signers basic ASL. Our team created signing avatars using motion capture recordings of deaf signers signing ASL. The avatars are placed in a virtual reality environment accessed via head-mounted goggles. The user’s own movements are captured via a gesture-tracking system. A “teacher” avatar guides users through an interactive ASL lesson involving both the observation and production of signs. Using principles from embodied learning, users learn ASL signs from both the first-person perspective and the third-person perspective. The SAIL project draws upon the integration of multiple technologies: avatars, motion capture systems, virtual reality, gesture tracking, and EEG, with the goal of creating a new way to learn sign language. The video will highlight recent developments in the project, including the creation of the virtual signing human avatar, the building of the virtual environment, and pilot testing of the system.
Lorna Quandt
Assistant Professor
Welcome, visitors! Thank you for watching our video about the SAIL project at Gallaudet University. In the past year we have made major strides toward our goal, which is to create a proof-of-concept ASL learning experience in virtual reality.
We have completed the motion capture recordings of ASL content, created 3D signing avatars from the recordings, and built interactive lessons of introductory ASL content. Next up, we will conduct an EEG cognitive neuroscience experiment that will help us see how the "embodied learning" aspect of SAIL influences ASL learning. Importantly, this work is conducted by a deaf-led team, and we are committed to holding deaf talent at the center of everything we are doing.
Through this work, we see great potential for an entirely new option for learning ASL in the future--from a native ASL user in the comfort of your own home! We welcome questions and comments on any aspect of this project. And again, thank you for your interest.
Sasha Palmquist
Karl Kosko
Very interesting work!
I can see this being particularly useful now that hand tracking technologies are becoming more affordable and prevalent (e.g., the Oculus Quest). Do you have any thoughts / plans on using this technology in a non-headset context? I can imagine that the VR headset is more immersive and effective, but wonder about programs that may not have access to such devices.
Judi Fusco
Lorna Quandt
Assistant Professor
Hi Karl, Great question! Yes, that's definitely on our minds. If we can do this project well in VR, then we would also explore non-immersive alternatives, like app- or web-based versions. We are also interested in exploring this as an augmented reality experience. Many potential options--and we'd love to know how they all compare in terms of experience and efficacy. --Lorna
Karl Kosko
Thank you for the response!
I'm excited to share this with some of my colleagues in our ASL program.
Lorna Quandt
Andrea Nodal
Hi! This is super interesting and amazing, especially right now with issues of social distancing. The recent switch to all online classes has made my ASL professors and other interpreting majors realize how difficult it is to learn this language two-dimensionally and in a non-interactive setting. My question is, are the signer's signs being perceived by the headset itself? And if so, how close to the body can it perceive signs? For example, if you are signing PLEASE or CURIOUS, would it be able to see those signs?
Sasha Palmquist
Lorna Quandt
Assistant Professor
Hello, and thank you for this comment. You're totally right--with more and more instruction occurring online, exploring the potential of VR for learning seems very timely. We know that nothing can really compete with in-person classes with a great ASL teacher, but that is not always a feasible option.
In the current version of SAIL, users can see their hands represented in VR (via Leap Motion tracking)--you can see this happening in a couple of early scenes in our video. It does present a challenge for body-anchored signs (e.g., PLEASE, CURIOUS), but we are aware of that challenge and will continue to work on it as the project continues. Just like with in-person learning, in our system you wouldn't really be able to see your own production of CURIOUS, but you would be able to see the teacher producing the sign. Anyway--something we will continue to figure out!
Sasha Palmquist
Mitchell Nathan
Your team has done a really amazing job combining VR and embodiment for teaching and learning. I can see from the above that you are considering many future pathways. I wonder most about the technology demands for full motion capture as you consider how to scale this up for a broad user base. As a second question, why go with full VR versus AR, which can allow the world around people to be in the image? I am truly asking, since I don't yet have a clear answer and see many trade-offs. This is something my colleagues and I are also pondering as we consider how to support embodied collaboration among learners and teachers.
Sasha Palmquist
Lorna Quandt
Assistant Professor
Thank you! Yes, the pipeline for the motion capture process is quite intensive. One of our goals right now is to create very high-quality signing avatars who can produce native-like ASL. We are now confident that we can do that quite well, but it requires a fair amount of manpower to produce even one lesson. So, as we look to expand the scope of the project, we will continue to confront the quality/efficiency tradeoff. Part of where we land on that will be determined by the purpose of the avatars. For instruction, it is critical that the signing is extremely natural. For other purposes, that may matter less, but flexibility and automation may matter more.
For now, we opted to go with VR because we were more comfortable developing in that arena, and we were interested in seeing the effects of the powerful, immersive experience. However, of course we also see the possibilities in AR--so many options we can explore!
Sasha Palmquist
Sarah Heuer
This is absolutely amazing! Firstly, that this is a deaf-led team is fantastic, and I'm glad to see these innovations driven by the people who understand these kinds of needs the most. Secondly, this idea is something I really connect with! My partner and I were talking yesterday about how they want to learn ASL, but our university doesn't have classes. While I suggested some places for online learning, we did talk about the difficulty of learning such a three-dimensional language in an online setting, especially with COVID-19. Using motion capture technology toward this vision is fantastic, and I hope to see continued application of this in the future!
Holly Morin
Lorna Quandt
Assistant Professor
Thank you for your positive comments and encouragement! How cool that you just recently thought about this need in your own life. I love hearing that kind of anecdote.
Jacob Sagrans
Great video and project, and so important at this time especially when in-person learning isn't an option. I'm curious if you have any thoughts about how motion capture/VR/avatars like yours could extend beyond teaching ASL. I could see this technology being harnessed to teach all sorts of things virtually now. Maybe a virtual science lab, where students could manipulate virtual chemicals or other virtual materials safely from their own homes?
Sasha Palmquist
Lorna Quandt
Assistant Professor
Yes--absolutely! There are some really cool projects out there in which you can interact with molecules or learn engineering principles in VR. The technology really does have an enormous amount of potential for learning. I think we're only at the tip of the iceberg here in 2020. It has an incredible ability to put you in an entirely new place, even one which is physically impossible. The possibilities are almost unlimited!
Judi Fusco
Jacob Sagrans
Sheryl Burgstahler
Great project. Thanks for sharing.
Lorna Quandt
Assistant Professor
Thank you for stopping by, Sheryl! We appreciate the support.
Overtoun Jenda
Assistant Provost and Professor of Mathematics
This is an awesome project. Thanks for working on this. How are students working on your team recruited? Is it a summer project or do you work on this throughout the year? Do you have industry partners yet?
Lorna Quandt
Assistant Professor
Thank you so much for your interest, Overtoun! We recruit student workers through on-campus advertising and word of mouth. We also have excellent PhD students in our Educational Neuroscience program, as well as a team of skilled staff members: 2D/3D artists, a human-computer interaction engineer, a Unity developer, etc. It takes a lot of work, and we have assembled an amazing team!
We don't have any industry partners yet, but that is in our master plan :-)
Leigh Peake
This was really fascinating to watch for someone who knows neither ASL nor VR technologies ... Now I want to learn both! I wondered if you've interacted at all with Nick Giudice or Rich Corey at the VEMI lab at UMaine? Might be interesting to compare notes. Meanwhile, I'm interested in the questions above about the trade-offs around the investment needed for VR. That is an earnest design struggle we all face, no matter the level of technology. Thanks for the interesting work & video.
Holly Morin
Lorna Quandt
Assistant Professor
Hi Leigh, Thanks for the connections to the VEMI lab. I haven't been in touch with them but that looks like a great connection.
Yes--using VR is tricky because it holds a ton of potential, but it also requires a lot of work to create high-quality experiences, and it is a non-trivial investment for users. It really is a universal struggle--quality vs. efficiency, access vs. immersion, cost vs. richness of the experience. There's a lot to juggle.
Rebecca Ellis
Greetings! This looks like a very intense project, and you've made great strides! I really like how you modified the helmet to make it more usable and accessible for ASL.
I'm wondering, do you think you will be able to use this technology to assess how well the students are learning the language? Will students be able to film themselves signing and put it side-by-side with the avatars, or even be able to use tech like the new gaming consoles use (or the updated gloves you use to make the avatars) to judge their movements for accuracy? I'd love to be able to take a class this way, and then get scores based on the accuracy of my signing!
Sasha Palmquist
Lorna Quandt
Assistant Professor
Hi Rebecca, great question. Yes, providing corrective feedback is at the top of our list for what comes next. There are many potential options for how to accomplish this, but none that really work well yet. It will take quite a bit of work--but we do hope to incorporate this into the system eventually. As you mentioned, it would be much more helpful to have a sense of your accuracy as you learn, much like you would in a real-life class.
Rebecca Ellis
Sasha Palmquist
Senior Manager of Community
This is a truly impressive project! Your video was a fantastic snapshot of the exceptional team and the quality of your work to date. As a DC local, I can't wait until I can stop by your lab for an in-person tour :-) Since that is likely to be a while, I am very interested in the range of age groups with which you have tested this interface. What might the challenges and opportunities be for elementary youth learning as compared to HS or undergraduate populations?
Lorna Quandt
Assistant Professor
Thank you so much for the kind words, Sasha! You are welcome to come visit when the time is right :-)
We are using this with adults right now so that we can develop a good proof-of-concept first, and then later turn toward what might need to be adapted for use with younger ages. At the elementary level, we would definitely think about changing the appearance of the avatar (who do children learn best from?) and the speed of presentation. Not to mention, the use of VR with young children might have powerful effects that we're not yet fully aware of. We are tuned into that research and continuing to think about it for future phases of the project.
Diana May
Such very important work, Lorna and team. Well done all round! What a shame that sign language is not the same all over the English-speaking world. Will you be able to use that technology with BSL?
Diana May, London
Lorna Quandt
Assistant Professor
Thank you for the comment and support, Diana! The same technology could in principle be used with any sign language, but the team creating the content would have to re-film all of the motion capture data with signs from that language. We hope that one day this concept can be applied to other sign languages.
Catherine Stimac
Love this! What a creative project all around!
Lorna Quandt
Assistant Professor
Thank you for the kind words, Catherine.
Lorna Quandt
Catherine Stimac
Nick Lux
What a captivating project with a truly impressive collaboration between different disciplines.
Lorna Quandt
Assistant Professor
Thank you! Yes, we are so fortunate to have a wide range of expertise and skills on our team--it allows for so much creativity.
Sheryl Burgstahler
Agreed. Nice video presentation.
Lorna Quandt
Jeanne Reis
Director
Hi Lorna! Good to see you in this forum again. Last year, I shared the ASL Clear project in the showcase, and we chatted about neuroscience terminology.
Great video, excellent work, by a fabulous group!
It looks like your team is tackling two major computer science challenges related to signed languages: making avatars that express ASL in a natural way from head to torso, and computer recognition / translation of the signer's utterances by an avatar in a VR environment. Pretty amazing!
Which of those elements has been the most challenging to develop?
Have you considered applications such as creating signing characters in video games?
You mentioned that you'd be seeking industry partners at some point. Does your lab work in partnership with any other universities or institutions?
Lorna Quandt
Assistant Professor
Hi Jeanne. Yes, I remember our discussion about ASL Clear--fantastic project! Thank you for your support.
The computer recognition/translation part of what you mentioned has been much harder to crack--in fact, we are not actively working on that piece yet. However, it will be critical to have recognition and feedback for ASL learners as we continue to develop the SAIL system. It's a tricky problem! There are hardware solutions which show some promise but they often involve bringing special gloves, sensors, and other physical items into the equation. There are computer-vision type approaches, but we're not aware of any which are hugely successful yet. There is a LOT of work left to do in this arena.
We actually just were awarded an NSF INCLUDES Planning grant to try to foster greater collaboration in "sign-related technology" which would include what I described above, and maybe would also be relevant to ASL Clear. I'll add you to the contact list for that project so you may hear from us about that in the near future!
Jeanne Reis
Director
Please do add me to the contact list, thanks Lorna!
Another question just came to mind! In the ASL-learning environment, real-time feedback is important to novice signers, and usually comes in the form of the facial expressions of conversational partners (either comprehension or confusion). Does your team feel that type of feedback needs to be included in the design of the VR context? And if so, have you explored any options--even fairly 'low-tech' options like showing students a video stream of themselves signing?
Lorna Quandt
Jeremy Roschelle
Hi Lorna, all of us at CIRCL remain big fans of your work! Really enjoyed the video.
Judi Fusco
Lorna Quandt
Assistant Professor
Thanks so much, Jeremy! I really appreciate the support. I'm so glad you enjoyed our video!
Annette Walsh
I have been fascinated by the people signing on television news reports for various medical authorities and national figures. Their role in providing information for the hearing impaired is so important. I enjoyed Signing Avatars.
Lorna Quandt
Holly Morin
This is truly fascinating and an amazing project- thank you for sharing this innovative project!
Lorna Quandt
Assistant Professor
Thank you so much, Holly!
Judi Fusco
I had to stop by and say hi and see the new video. Really clean explanations. Thanks and hope you all are well!
Lorna Quandt
Assistant Professor
Hi Judi! I am happy to see you here--thanks for stopping by. I am glad you enjoyed the video! I hope all is well with you and everyone at CIRCL.
Jeffrey Ram
What a neat video. Our most recent 4-day PD was signed the entire way through.
Michael I. Swart
Great work. Can't wait to see this become a tool that every child uses in elementary school so that more of our communities can use ASL. This technology looks very cool, very detailed, and very labor intensive. We also work with motion capture and are moving toward AR as well, and it was great to find this project among this year's S4A showcase: https://stemforall2020.videohall.com/presentations/1705. They seem to be developing AR that can leverage phones, which would be huge in our work in high schools, where all the students have phones; maybe some of their work could complement this great work too.