2063 Views (as of 05/2023)
  • Robb Lindgren, Assistant Professor, University of Illinois Urbana-Champaign and Concord Consortium (http://education.illinois.edu/faculty/robblind)
  • Jina Kang, Assistant Professor, University of Illinois Urbana-Champaign (https://education.illinois.edu/faculty/jina-kang)
  • Taehyun Kim, Graduate Student Researcher, University of Illinois Urbana-Champaign (https://taehyun-ls.weebly.com/)
  • Nathan Kimball, Curriculum Developer, Concord Consortium (https://concord.org/about/staff/nathan-kimball/)
  • Emma Mercier, Associate Professor, University of Illinois Urbana-Champaign (http://www.emmamercier.com)
  • James Planey, Graduate Student Researcher, University of Illinois Urbana-Champaign
  • Robin Jephthah Rajarathinam, Graduate Student Researcher, University of Illinois Urbana-Champaign
Facilitators’ Choice

Connections of Earth and Sky with Augmented Reality (CEASAR)

NSF Awards: 1822796

2022

Undergraduate

The CEASAR project aims to understand the affordances of immersive augmented reality technologies for supporting collaborative learning in STEM classrooms. To support this investigation, we have developed a robust night sky simulation that can be accessed from both tablet computers and HoloLens 2 headsets. Undergraduate students work together in a multi-device environment to solve problems such as finding the location of a fallen satellite using only the view of the stars from the crash site. So far we have collected data from numerous classrooms at a partnering community college and a university. The multimodal data we have collected, which include logfile data from the simulation merged with video of the collaborating students, are helping us understand how XR technologies can be effectively integrated into complex collaborative learning activities. Outcomes of the CEASAR project include design principles for creating effective multi-device STEM learning activities as well as methodological guidance for conducting research on technology-enhanced collaborative learning.
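As a rough illustration of the multimodal merging mentioned above, the sketch below aligns simulation log events to a video timeline given a known video start time. The log format, field names, and timestamps are invented for illustration; actual CEASAR logfiles will differ.

```python
from datetime import datetime

# Hypothetical log records: (ISO-8601 timestamp, device, event).
# Real CEASAR logfiles surely differ; only the alignment idea matters here.
log_events = [
    ("2022-03-01T10:15:03", "tablet_1", "select_constellation:Orion"),
    ("2022-03-01T10:15:41", "hololens_1", "annotate:line_drawing"),
]

# Assumed known: the wall-clock time at which the classroom video starts.
video_start = datetime.fromisoformat("2022-03-01T10:14:50")

def to_video_offset(timestamp: str) -> float:
    """Seconds from the start of the classroom video to this log event."""
    return (datetime.fromisoformat(timestamp) - video_start).total_seconds()

for ts, device, event in log_events:
    minutes, seconds = divmod(to_video_offset(ts), 60)
    print(f"[{int(minutes):02d}:{seconds:04.1f}] {device}: {event}")
```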

This video has had approximately 349 visits by 260 visitors from 140 unique locations. It has been played 160 times as of 05/2023.
Discussion from the 2022 STEM For All Video Showcase (18 posts)
  • Robb Lindgren

    Lead Presenter
    Associate Professor
    May 9, 2022 | 04:03 p.m.

    Welcome! Thank you for checking out the CEASAR project. We are really proud of this partnership between the University of Illinois Urbana-Champaign and Concord Consortium. CEASAR is a multi-device simulation platform (AR headsets and tablet computers) that allows groups of students to work together on astronomy problem-solving tasks. We have brought CEASAR to numerous undergraduate classrooms so far, and we are currently analyzing video and logfile data to make some inferences about how this configuration of technology supports collaborative learning. Please let us know if you have any questions about the research or technology aspects of the project. We are excited to engage and think with you about other application areas for these kinds of digital learning tools. 

     
  • James Planey

    Co-Presenter
    Graduate Student Researcher
    May 10, 2022 | 12:41 p.m.

    Hi everyone! Just to build off of Robb's message: if you'd like a bit more detail on the collaborative task or the CEASAR interface you see in the video, you can take a look at this poster we presented at AERA 2022: https://uofi.app.box.com/s/tkem8xcuczwb8bhcbag8...

     
  • Lorna Quandt

    Facilitator
    Asst. Professor, Educational Neuroscience
    May 10, 2022 | 10:29 a.m.

    Hi Robb! I hope you are doing well! Nice to "see" you here. I am curious to learn more about how you make the CEASAR content accessible both in AR and on tablets. What are some of the biggest challenges in making the content compatible with multiple platforms, and how have you addressed those challenges so far?

    I see that in some of your demos, the students are in small groups where 1-2 students have AR headsets, and the others don't. How does this work in practical terms? I am curious if the students share the headset, or if there are any structured instructions for the students about how to share information from AR with the students who aren't seeing the AR content? 

    As a last point--I just love seeing how much more the students are gesturing when they are using (or have just used?) the AR glasses! The possibilities there are really profound. I am excited to see more work from you and I hope we can be in touch in the future! 

  • James Planey

    Co-Presenter
    Graduate Student Researcher
    May 10, 2022 | 12:02 p.m.

    Hi Lorna,

    Thanks for these questions! All of the groups you see in the video are working on a problem-solving task we co-created with the astronomy educators called "Lost at Sea". In the problem, they are tasked with determining the location of a space capsule that has splashed down in an unknown location, using the night sky information provided by the CEASAR system. They must first determine the hemisphere they are located in, then identify four constellations to assist with orienting themselves, and finally measure the motion of stars to estimate a latitude and longitude.
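    For readers curious about the celestial navigation underneath this task, here is a minimal sketch of the standard math it draws on, written in plain Python with invented function names; it is not the CEASAR code. Latitude comes from the altitude of the visible celestial pole (which also settles the hemisphere), and longitude comes from comparing the local sidereal time implied by a star crossing the meridian with Greenwich sidereal time.

    ```python
    from datetime import datetime, timezone

    def latitude_from_pole(pole_altitude_deg: float, north_pole_visible: bool) -> float:
        """The altitude of the celestial pole above the horizon equals your
        latitude; which pole is visible settles the hemisphere (north positive)."""
        return pole_altitude_deg if north_pole_visible else -pole_altitude_deg

    def gmst_hours(when_utc: datetime) -> float:
        """Greenwich mean sidereal time in hours (standard low-precision formula)."""
        j2000 = datetime(2000, 1, 1, 12, tzinfo=timezone.utc)
        days = (when_utc - j2000).total_seconds() / 86400.0
        return (18.697374558 + 24.06570982441908 * days) % 24.0

    def longitude_from_transit(star_ra_hours: float, transit_utc: datetime) -> float:
        """When a star of known right ascension crosses your meridian, local
        sidereal time equals that right ascension; the offset from Greenwich
        sidereal time is your longitude (east positive, in degrees)."""
        lon_hours = (star_ra_hours - gmst_hours(transit_utc) + 12.0) % 24.0 - 12.0
        return lon_hours * 15.0

    # Example: a pole sighting at 40.1 degrees altitude in the northern sky, plus
    # a meridian transit of a star at RA 5.92h observed at 03:30 UTC.
    print(latitude_from_pole(40.1, north_pole_visible=True))
    print(longitude_from_transit(5.92, datetime(2022, 3, 1, 3, 30, tzinfo=timezone.utc)))
    ```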

    In regard to sharing the technology, we've facilitated this in two ways. First, we were able to come in and give all students an introduction to the AR headsets and training on the gesture-based input system before the lab session focused on our task; this ensured all students had a similar baseline familiarity with the technology. Then, during the lab time with our task, students were free to leverage the AR headset as they saw fit (no specific requirements or group assignments). Some groups rotated use of the headset while others settled into roles, with one student being the primary AR user. Second, the CEASAR system allows users to share their interactions across devices in several ways. Any time a user selects a constellation, that selection is made visible to all other users across tablet and AR platforms. In addition, all users have the ability to annotate the sky with line drawings that, again, are synchronized across devices for everyone to view. Finally, we added the ability for one user to jump to the date/time/location of another, so, for example, if the AR user finds a star of interest, the tablet users can select that username within the software and be taken to where the AR user is making the observation (or vice versa).
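    To make the cross-device sharing described above concrete, the toy sketch below shows the kind of shared-state bookkeeping a relay server could do: selections and annotations broadcast to every device, and a "jump to user" that copies another user's date/time/location. The class names and message shapes are invented for illustration; the actual CEASAR networking code is not shown in the video.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class ViewState:
        """Per-user simulation view: what the night sky is showing them."""
        date_time: str = "2022-03-01T22:00"
        latitude: float = 0.0
        longitude: float = 0.0

    @dataclass
    class SharedSession:
        """Toy relay: every state change is visible to all connected devices."""
        users: dict = field(default_factory=dict)        # username -> ViewState
        selections: set = field(default_factory=set)     # selected constellations
        annotations: list = field(default_factory=list)  # line drawings (point lists)

        def select_constellation(self, name: str) -> None:
            self.selections.add(name)          # shown to AR and tablet users alike

        def annotate(self, points: list) -> None:
            self.annotations.append(points)    # synchronized line drawing

        def jump_to_user(self, me: str, target: str) -> None:
            """Adopt another user's date/time/location, e.g. a tablet user
            jumping to where the AR user is making an observation."""
            t = self.users[target]
            self.users[me] = ViewState(t.date_time, t.latitude, t.longitude)

    session = SharedSession()
    session.users["hololens_1"] = ViewState("2022-03-01T23:10", 40.1, -88.2)
    session.users["tablet_1"] = ViewState()
    session.select_constellation("Orion")
    session.jump_to_user("tablet_1", "hololens_1")
    print(session.users["tablet_1"])
    ```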

     
  • Robb Lindgren

    Lead Presenter
    Associate Professor
    May 10, 2022 | 12:11 p.m.

    Great questions Lorna! Good to see you too! I'm glad you noticed the gesturing that the students were doing. We've been finding that the opportunity to use the AR headsets gets students gesturing in really impressive ways, and that the gesturing seems to persist even after they've taken the headset off. We see it as a good sign that these technologies are empowering expressive gesturing.

    Let's definitely connect again soon!

     
  • Nathan Kimball

    Co-Presenter
    Curriculum Developer
    May 10, 2022 | 11:03 a.m.

    Hello Lorna, let me jump in, as some of your questions are about the technical development of the CEASAR cross-platform program, and that work was done at Concord Consortium. The CEASAR program was developed in Unity, which, in principle, eases cross-platform work. We found that Unity was very useful for holding the "model" of the solar system: the celestial sphere, sun, and moon, all moving according to the chosen date, time, and location on the Earth's surface. However, because the user interface elements are so different between the HoloLens 2 and the web implementation, there need to be independent versions of the program for each platform within Unity that tap into the solar system model. During the course of the project, owing to the difficulty of obtaining the HoloLens 2, we also developed a VR version (running on a Quest) that advanced our knowledge of developing for a 3D "surround" model and bridging between 3D and flat devices. The difficulty of juggling code across several platforms meant that when the HoloLens became available, we focused on just the two platforms. In addition to the UI and keeping the visible elements in step, there is a server program that manages the communication and state between the platforms. All this made for some challenging dev work.
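    As an illustration of the separation Nathan describes (one shared celestial "model" with independent per-platform front ends tapping into it), here is a minimal sketch. It is written in Python purely for readability; the real project is built in Unity/C#, and every class name and method here is invented.

    ```python
    from abc import ABC, abstractmethod

    class CelestialModel:
        """Platform-agnostic 'model': sky contents for a date, time, and location.
        In CEASAR this lives in Unity; the data here is just a placeholder."""
        def sky_for(self, date_time: str, lat: float, lon: float) -> dict:
            return {"date_time": date_time, "lat": lat, "lon": lon,
                    "stars": ["Polaris", "Betelgeuse", "Sirius"]}  # placeholder

    class SkyView(ABC):
        """Each platform gets its own front end that taps into the shared model."""
        def __init__(self, model: CelestialModel):
            self.model = model

        @abstractmethod
        def render(self, date_time: str, lat: float, lon: float) -> None: ...

    class HoloLensView(SkyView):
        def render(self, date_time, lat, lon):
            sky = self.model.sky_for(date_time, lat, lon)
            print("AR surround view:", sky["stars"])   # head-tracked, gesture input

    class TabletView(SkyView):
        def render(self, date_time, lat, lon):
            sky = self.model.sky_for(date_time, lat, lon)
            print("Flat touch view:", sky["stars"])    # pan/zoom touch input

    # Both front ends draw on the same model state, which is what keeps the
    # visible elements "in step" across devices.
    model = CelestialModel()
    for view in (HoloLensView(model), TabletView(model)):
        view.render("2022-03-01T23:10", 40.1, -88.2)
    ```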

     
  • David DeLiema

    Researcher
    May 10, 2022 | 03:38 p.m.

    Loved this chance to learn more about the CEASAR project! Really exciting work. Hoping to float a question past the team about movement/gesture. In the video clips, one student wearing the headset is gesturing/interacting with the simulation while other students (not wearing the headset) are watching without access to the visual simulation. Am I reading that correctly? If so, how do you see students collaborating in these moments? Is it perhaps the case that the partial access to the visual simulation is generative (e.g., the student wearing the headset puts into words what they are seeing in ways that might be beneficial to other members of their group)? Or does this create a chasm between members of the group that is difficult to cross? I'm curious about the conversational dynamics here because they would have big implications for how many AR headsets you would need in a classroom. Would love to hear any of your thoughts on this. 

  • James Planey

    Co-Presenter
    Graduate Student Researcher
    May 10, 2022 | 04:21 p.m.

    Hi David,

    The students outside of AR have access to the same set of simulation data, just presented within the limitations of the touch tablet interface. Actions are networked, so if the AR or tablet users select or annotate, that is visible to the whole group across devices. The interesting thing about the AR users gesturing to communicate their immersive view is that, in groups that leverage the AR headset frequently, even non-AR users slowly begin to adopt the "room-scale" mental map of the simulation, pointing to areas of the room as they look at their tablet screens and discuss constellation locations. I think the initial presentation of this information by the AR user (the generative actions, as you describe them) is critical for getting this ball rolling.

    As a stepping stone to this AR implementation, we did a pilot in VR, and there you very much did see the divide between the immersive user and the group (the VR user often went off on their own, or engaged but never attempted to verbalize their experience for the others). With AR, while some users still kept their experience to themselves, we saw a noticeable increase in AR users who wanted to communicate their observations to the group, and while the software has some methods to assist with that, gesture was often the tool of choice.

     
  • Dan Roy

    Facilitator
    Research Scientist, Interest-based Learning Mentor, Learning Game Designer
    May 12, 2022 | 01:59 a.m.

    Thanks for sharing. Looks like a great project. I'm curious about the differences between the VR and AR versions, and any possible conclusions about what kinds of collaboration they facilitate. Do you think if the VR version had everyone else in VR along with some representation of themselves (e.g. avatars) you'd see as much interaction and collaboration as with AR + tablet?

    Also, did all the students get the same information? We did a cross-platform project (VR + tablets) and noticed the early versions didn't have much collaboration because the VR players could solve the puzzle on their own. We then separated which roles get which information so they depend on each other, and that prompted more communication between roles. Maybe an approach like that could increase collaboration in your application even in a VR + tablet case? For reference, this is the project I'm talking about: https://education.mit.edu/project/clevr/.

    Do you think if everyone were in AR the communication would change a lot? How?

  • Robb Lindgren

    Lead Presenter
    Associate Professor
    May 13, 2022 | 11:27 a.m.

    Thanks Dan! I am excited to learn more about CLEVR; I agree that there is a lot that these projects can learn from each other. Just to build a bit on Nathan's reply...

    In previous projects with immersive tech, I too have found that role distribution can be an effective way to engender collaboration in multi-device environments. I think the reason we haven't gone there (yet) with CEASAR is that with this particular task there seems to be a benefit to giving students a fairly even perspective split (i.e., AR users get an immersive view of the night sky from the Earth's surface; tablet users get as much immersion as you can get from a tablet). So in a way they do have different roles based on the perspective affordances the tech gives them; we just haven't separated the actual task (e.g., figuring out longitude and latitude). As Nathan alluded to, we think this perspective distribution works better in AR than VR because there is more impetus for participants to communicate cross-platform in AR. We've seen evidence of a persistent group representational scheme emerging in the AR groups, and that's been very exciting to observe.

     
  • Nathan Kimball

    Co-Presenter
    Curriculum Developer
    May 12, 2022 | 11:26 a.m.

    Hi Dan. Thanks for adding your questions. It seems that you have done some very related work. A fully collaborative VR version is intriguing and, with avatars as you suggest, especially offers viable remote collaboration possibilities. It also raises some challenges if specific learning objectives are required, say, for a course. In that case, ways of structuring tasks and collaboratively assembling information or outcomes require specific design and programming. (Yes, an intriguing possibility.) Going into this project, we imagined all users with AR headsets, but the reality of our test settings--labs in intro astronomy classes--required that all students participate, so we went with the cross-platform approach. This serves a very real practical purpose. We made the decision that the experience was a source of information to be drawn on, a tool: it is simply the stars, sun, and moon, plus methods to change the date, time, and location. Beyond the initial "play" time for getting to know the affordances, the tasks and recording of outcomes were on paper, which provided data on the effectiveness of the intervention.

    As James noted above, VR users often went into their own worlds and did not communicate their observations, whereas in AR verbal communication was far more frequent and useful to the group. So, for face-to-face groupings, AR seems more effective. We did consider giving users on the two platforms different affordances based on the strengths of each device, but it did not seem necessary, and given the constraints of the project it was not done. Yes, if this were done, I do see a differentiation of roles emerging, but I'm not sure that would be desirable in a rather time-constrained classroom situation.

    Thanks again. 

     
  • Cynthia Orona

    Program Coordinator
    May 12, 2022 | 02:40 p.m.

    Thank you for sharing. I find your comment on AR requiring communication important. It is difficult for everyone to have headsets, especially with a whole class of students.

  • Nathan Kimball

    Co-Presenter
    Curriculum Developer
    May 12, 2022 | 03:31 p.m.

    Hi Cynthia, sorry I wasn't clear about why we did not have headsets for everyone. In theory, it is not difficult for everyone to have AR headsets, and that would have been the ideal, IMO. For the AR equipment we were using, however, it would have been prohibitively expensive. (The HoloLens 2 is a remarkable device, but it runs $3500.) In general, our test classrooms had about 20 students, so you see the problem. We felt it was important to have the involvement of students engaged in real course work, not a small clinical study where we could have afforded devices for small study groups. Our approach also helped us engage the astronomy teachers for input on design.

  • Marcelo Worsley

    Facilitator
    Assistant Professor
    May 12, 2022 | 03:06 p.m.

    Really enjoyed this video and seeing the students in action. In addition to looking at their gestures, I'm curious whether you all are looking at their spatial language or use of multimodal gestures (connections between what they say and the gestures that they make).

  • Robin Jephthah Rajarathinam

    Co-Presenter
    Graduate Student Researcher
    May 13, 2022 | 12:34 p.m.

    Hi Marcelo,

    Yes, we are looking at how students use gestures to discuss and collaborate with each other. Specifically, we are looking at the role these gestures play in how knowledge is introduced, modified, accepted, or rejected, and how this can lead either to co-construction of knowledge or to confusion. We noticed that students using AR devices leveraged the 3D virtual space, mapping gestures unique to that 3D space onto the physical space they were in to share knowledge with their peers. We are currently exploring the nuances of these gestures with regard to how they leverage different spaces and their role in collaboration.

  • Patrik Lundh

    Researcher
    May 13, 2022 | 01:19 p.m.

    Really great project. I see so much appeal and potential in the application of this technology, even beyond the STEM classroom. My question is about collaboration and teamwork. We know how challenging it can be to set students up to be able to collaborate constructively (sometimes someone "does all the work," others are reticent to work on a team, student internalization of individual accountability and rewards in school may undermine the idea of a team culture, or assignments do not logically lend themselves to collaboration). Do you see the use of this technology and the way assignments are designed as facilitating collaboration, or do you still have to provide other supports for teachers and students to be able to collaborate well?

  • Robb Lindgren

    Lead Presenter
    Associate Professor
    May 16, 2022 | 09:47 a.m.

    Thanks for the question Patrik. I enjoyed your CPR2 project video as well! Facilitating collaboration was the explicit goal of CEASAR, even though the technologies we were using (AR headsets, tablets) are more frequently associated with individual activities. Particularly with AR, it is not going to be feasible to provide every student with high-end tech, so we felt it was important to configure the activities such that the task was distributed across devices and across people. We think we made great headway on this by creating a shared simulated night sky environment with multiple entry points. This created a space where students could see the effects of other students' actions, but often still necessitated verbal communication to make the interpretations of those actions clear. The other key to engendering collaboration was task design. When students were simply exploring the night sky with the various tech, there was unsurprisingly not much evidence of collaboration. But given the parameters of the "lost at sea" navigation task, working with each other and communicating across the different perspectives was essential. To be sure, not all groups collaborated successfully, but we feel like we're learning a lot about what was happening when collaboration was achieved.

  • Joelle Molloy

    Graduate Student
    May 16, 2022 | 02:18 p.m.

    This is incredibly exciting work. Well done to you all.

  • Further posting is closed as the event has ended.