Presenters:
  1. Elizabeth Rowe, Director of Research (TERC, Landmark College, Massachusetts Institute of Technology)
  2. Jodi Asbell-Clarke, Director, EdGE at TERC (https://edge.terc.edu/)
  3. Ibrahim Dahlstrom-Hakki, Director (Landmark College)
  4. Kelly Paulson (TERC)

Revealing the Invisible

NSF Awards: 1417967

2018 (see original presentation & discussion)

Grades 9-12, Undergraduate, Informal / multi-age

Researchers from EdGE at TERC, Landmark College, and MIT have come together to Reveal the Invisible. As part of a collaborative partnership resulting from an NSF Ideas Lab, our team is creating tools and methods that will transform how learning can be measured. Challenged with the question, “What is the most audacious question you could answer about education with big data?”, we answered, “We want to watch implicit learning happen in process.”

Implicit knowledge is knowledge that can be demonstrated through activity and behaviors but is not necessarily expressed in words. Rather than relying on tests with written and verbal expressions of knowledge, our study of implicit learning uses methods and instruments from multiple disciplines—science education, game-based learning, educational data mining, cognitive psychology, and neuroscience—to build multimodal models of learning.

We are collecting data from neurotypical learners and learners with cognitive differences who play the physics game Impulse while wearing an eye-tracking device so we can record their gameplay and eye movements simultaneously. We then create a visualization of the game from the players’ data logs, incorporating detected behaviors (from data mining models built in previous research) that are indicators of implicit knowledge of physics in Impulse gameplay. Finally, we layer the visualization with the eye-tracking data to show what parts of the screen (objects in the game) the player is attending to while demonstrating the physics learning behaviors (or not). This allows us to study the relationship between visual attention and learning.
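The core of this layering step is deciding which on-screen game object a gaze point falls on at a given moment. A minimal sketch of that mapping is below; the object names, fields, and data layout are illustrative assumptions, not the project's actual Impulse schema.

```python
# Hypothetical sketch: map a gaze sample (screen coordinates) to the game
# object the player is attending to. Names and fields are assumptions.

from dataclasses import dataclass

@dataclass
class GameObject:
    name: str
    x: float  # left edge, screen px
    y: float  # top edge, screen px
    w: float  # width, px
    h: float  # height, px

def object_under_gaze(gx, gy, objects):
    """Return the name of the first object whose bounding box contains
    the gaze point (gx, gy), or None if the gaze is on empty space."""
    for obj in objects:
        if obj.x <= gx <= obj.x + obj.w and obj.y <= gy <= obj.y + obj.h:
            return obj.name
    return None

# Example scene: the player's particle and one ambient particle.
scene = [
    GameObject("player_particle", 100, 100, 40, 40),
    GameObject("ambient_particle", 300, 250, 30, 30),
]
print(object_under_gaze(115, 120, scene))  # gaze falls on player_particle
```

In practice each detected learning behavior would be paired with the gaze samples recorded during its time window, so the analysis can report which objects the player fixated on while (or while not) demonstrating the behavior.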

The data architecture we have built for this work, DataArcade, is powerful and can be repurposed to collect a variety of types of multimodal data and synchronize the streams to a precision of 10 ms, which is required for multimodal analytics of learning in a fast-action game. Our team is preparing to include additional data streams, such as EEG, physiological, and other sensor data, to build comprehensive multimodal models of implicit learning.
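Synchronizing streams to a 10 ms tolerance amounts to pairing each event in one stream with the nearest-in-time sample from another, rejecting matches outside the tolerance. The sketch below illustrates this idea under assumed data shapes (timestamped tuples); it is not DataArcade's actual implementation.

```python
# Hypothetical sketch: align a game-event stream with an eye-tracker
# sample stream within a 10 ms tolerance. Data shapes are assumptions.

import bisect

TOLERANCE_MS = 10

def align_streams(events, samples):
    """events: list of (timestamp_ms, payload); samples: sorted list of
    (timestamp_ms, payload). For each event, attach the nearest sample
    within TOLERANCE_MS, or None if no sample is close enough."""
    sample_times = [t for t, _ in samples]
    aligned = []
    for t, payload in events:
        i = bisect.bisect_left(sample_times, t)
        best = None
        # Candidates: the sample just before t and the one at/after t.
        for j in (i - 1, i):
            if 0 <= j < len(samples) and abs(sample_times[j] - t) <= TOLERANCE_MS:
                if best is None or abs(sample_times[j] - t) < abs(sample_times[best] - t):
                    best = j
        aligned.append((t, payload, samples[best][1] if best is not None else None))
    return aligned

events = [(1000, "thrust"), (1042, "collision")]      # game log
samples = [(995, (512, 300)), (1050, (520, 310))]     # gaze (x, y) samples
print(align_streams(events, samples))
```

The same nearest-neighbor-within-tolerance pattern extends to additional streams (EEG, physiological sensors) by aligning each against a common clock.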

This video has had approximately 234 visits by 205 visitors from 130 unique locations. It has been played 143 times.