NSF Awards: 1908159
2022
Undergraduate
The goal of this project is to examine how representations of practice can facilitate preservice teachers' professional knowledge for teaching fractions and multiplication/division. We specifically use 360 video and examine teachers' embodied professional noticing through their viewing practices with VR headsets and on flat-screen devices. We have found that when teachers (quite literally) shift to a student-centered focus (positioning students physically in the center of their field of view for longer durations), they begin to better integrate their professional knowledge into their noticing behaviors.
This presentation includes examples of our findings across the first 3 years of the project, and invites participants to 'try' the technology for themselves. We present examples from:
Through these outcomes, the project seeks to aid the field in connecting the explicit professional knowledge teachers learn in their education programs with the tacit knowledge of attending and interpreting within the classroom.
Zenon Borys
This is fantastic! Really interesting and compelling. I know it may be a ways off, but I can also see potential uses for coaching interactions. Oftentimes when I'm coaching a teacher, we have rich discussions about what we noticed, because we noticed different details. And this makes those details more accessible. On a more logistical level, I find myself wondering about the audio. Does moving the view angle also change/enhance certain sound directions? I'm imagining instances where it would be great to be able to focus in on a small-group discussion, but I have always had issues filtering out other background sounds. Great project!
Karl Kosko
Associate Professor
Great thoughts and questions!
Lorna Quandt
Asst. Professor, Educational Neuroscience
Hi team! I really enjoyed this video and I'm intrigued by the idea of using 360 video to support teachers' observational behaviors. I would like to know more about what patterns of noticing you see in new preservice teachers, compared to more experienced expert teachers. What are some of the differences you see in the data, and how do they connect to real classroom experiences and interactions? For the eyetracking study, do you pre-define ROIs and then calculate time spent in each ROI, or is your approach more observational?
Karl Kosko
Associate Professor
Hi Lorna,
We've looked at preservice teachers at a few different points in their program and have observed a few patterns (some more informal, some published or in review). Much of this supports the prior scholarship on teacher noticing and eye-tracking with video generally. First, the less experience someone has (i.e., experience with actual students), the more they tend to look around (almost as if they are in awe of being in a classroom). Oddly, many of these individuals report a higher sense of presence than others with more experience (not that the latter report low presence, but the former report a sense of 'hyper-presence'). As folks gain more experience, they tend to slow down their head (or field of view) movements and begin focusing more and more on teachers and students, before transitioning to students in particular. One interesting observation we made (just by looking at the x- and y-coordinates of the gaze data) is that experienced teachers look lower with their eyes than novices. This is because they look more at the children's work, whereas novices who have become more student-centered look at the children's faces more than at the tables where they are working, writing, counting, etc.
For the eye-tracking, we are working to combine observational analysis with a form of AOI (area of interest / region of interest, if you prefer) coding that uses machine learning. Right now, we can distinguish between teachers and students, but we are fine-tuning the AI to differentiate between specific students. This is also important down the road, as we hope to use this with multi-perspective 360 video (so we can see whether viewers at different positions in the classroom are looking at the same or different students). Our plan is to use a few different statistics with the student gaze distributions (including the unalikeability coefficient - we find it to be slightly better than Gini, Hoover, or Theil). For using eye-tracking in an observational way, we are in the midst of qualitative analysis of those videos and hope to provide a better description of what the statistics tell us (based on the qual).
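For readers unfamiliar with the statistic mentioned above, here is a minimal sketch of the coefficient of unalikeability (Kader & Perry) applied to categorical gaze data. The gaze labels and sample sizes are hypothetical, not from the project's data:

```python
from collections import Counter

def unalikeability(labels):
    """Coefficient of unalikeability for categorical data: u = 1 - sum(p_k^2),
    i.e., the probability that two observations drawn at random (with
    replacement) fall in different categories. 0 means every gaze sample hit
    the same target; values near (k-1)/k mean gaze was spread evenly over
    k targets."""
    n = len(labels)
    counts = Counter(labels)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

# Hypothetical gaze samples labeled by which student the viewer fixated on.
focused = ["student_A"] * 9 + ["student_B"]                         # mostly one student
spread = ["student_A", "student_B", "student_C", "student_D"] * 5   # even spread

print(round(unalikeability(focused), 2))  # 0.18
print(round(unalikeability(spread), 2))   # 0.75
```

Unlike variance, this statistic needs no ordering or numeric coding of the categories, which is one reason it suits "which student was looked at" distributions.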
I will note that we have used pre-defined AOIs when we analyze 360 viewings with the field of view (no eye-tracking). For example, our platform gathers this data when people watch on their laptops at home. It's not as precise as eye-tracking, but it does give a very nice approximation (particularly since most of a person's eye gaze tends to fall within a region near the center of their field of view).
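One way to picture that FOV-based approximation: treat any AOI whose angular distance from the center of the field of view is under some threshold as "looked at." This is a rough illustrative sketch, not the project's actual pipeline; the function names and the 15-degree threshold are assumptions:

```python
import math

def in_central_fov(view_yaw, view_pitch, target_yaw, target_pitch, radius_deg=15.0):
    """Approximate gaze from head/FOV orientation alone: return True if the
    target AOI lies within radius_deg of the FOV center. Angles in degrees;
    yaw wraps at 360. Uses a flat small-angle distance, which is adequate
    near the view center."""
    dyaw = (target_yaw - view_yaw + 180) % 360 - 180  # wrap to [-180, 180)
    dpitch = target_pitch - view_pitch
    return math.hypot(dyaw, dpitch) <= radius_deg

print(in_central_fov(0, 0, 10, 5))    # True: about 11 degrees from center
print(in_central_fov(350, 0, 20, 0))  # False: 30 degrees away in yaw
print(in_central_fov(355, 0, 5, 0))   # True: wrap-around handled
```

The wrap-around arithmetic matters in 360 video, where yaw 359 and yaw 1 are only two degrees apart.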
Dan Roy
Research Scientist, Interest-based Learning Mentor, Learning Game Designer
It sounds like you have both a headset and flat screen interface for viewing the 360 videos. Have you noticed any differences in looking behavior across those hardware differences? I wonder if you'd see more looking around while wearing a headset due to the movement being more fluid and natural.
One issue I've seen with 360 videos in general is the IPD (interpupillary distance) of the camera recording the videos doesn't quite match that of the viewers, making the world feel distorted (too big, too small). Have you noticed that issue in your testing or found a way to manage it?
I wonder if the increased presence you noticed with the novice teachers may be in part due to age or comfort with technology. I wrote a report that summarized some of the research on this called Learning Across Realities (https://education.mit.edu/publications/learning...).
Have you considered approaches that are more interactive than 360 video, or would that introduce too many variables into your study?
Karl Kosko
Associate Professor
Great questions!
We tend to notice more movement with the flat screen than the headsets (see Kosko et al., 2021 in Journal of Teacher Education). We have one paper we are finishing that does a similar comparison and confirms this a bit. Part of it is that there is a greater sense of immersion with the headsets than flat screen and we do not tend to look around as haphazardly if we are more immersed. The big question for us has been how much the benefit of the headset matters (regarding cost, etc.). No clear answer yet there, but technology advances may make the question moot.
We haven't noticed anything with IPD, but none of our 360 videos are 3D. I wonder if that is an aspect to consider.
Regarding presence, we had wondered about this but there aren't any observable differences across all the data we have collected (we did gauge perceived tech savviness, prior use with VR, etc.). Tech savviness sometimes came out as a factor but not consistently across studies.
Regarding approaches other than 360 video, yes they are considered but some of the costs need to come down to make them a viable alternative for us. There are scholars doing great work with VR and digital agents, and we are dabbling with holograms, but nothing significant to report just yet.
Dan Roy
Marcelo Worsley
Assistant Professor
This project sounds really fascinating. The platform for teacher learning reminds me of the DIVER project from many years ago, but now with the option for much more immersive viewing experiences. I am curious to know more about the features of Praxi, and the types of annotations and noticing practices that it supports.
On a completely different strand of thinking, I wondered about also capturing teacher affect through electro-dermal activity (using an Empatica E4) while teachers are viewing classroom video. This could give teachers a window into their own affective responses when they see different things in the classroom.
Lastly, really enjoyed seeing the eye tracking data, and the skeletal tracking overlays. How is skeletal tracking used in this project?
Karl Kosko
Associate Professor
Lots of great thoughts here that overlap with some of our own wonderings.
We are setting up a separate server for Praxi so we can offer access to more people and are going to submit a Large Scale version of our current grant to hopefully widen access further.
In terms of annotations, we began to steer towards machine learning as a main feature because of how we have used 360 in our courses, but eventually hope to incorporate annotations similar to what you may see with ThingLink and similar platforms.
We had thought about electro-dermal activity and some other physiological data. We are in the process of analyzing some physiological data, but nothing currently with skin-based measures (beyond electrodermal activity; I know changes in skin temperature have been taken into account in some scholarship).
Darryl Yong
Thanks for your interesting work. Have you observed differences between the ways that beginning and experienced teachers notice? What about teachers who have a demonstrated success with teaching marginalized populations of students? Do they notice different things?
Karl Kosko
Associate Professor
We are working on data from an eye-tracking study now that shows the differences between experienced and novice teachers. We also see differences between preservice teachers with and without certain field experiences (essentially upper elementary vs. not, as that relates to how they look at children's fraction reasoning).
As far as differences between folks who have background teaching marginalized populations - we have not looked at this, but it would be an excellent study. I could see how the technology (and Praxi as a research tool) could be used to examine differences in where people turn the perspective, how they frame students, etc. Although we did not conduct a study to look at such factors, we did find evidence of gendered noticing (unexpectedly as it wasn't a focus of the study) in our Kosko et al. (2022) paper in Computers & Education.
Zohreh Shaghaghian
That's very interesting! I enjoyed the video presentation.
I was wondering if you have assessed or explored the correlation between students' eye-tracking and behavior and their learning gains in the classroom.
Karl Kosko
Associate Professor
Hi Zohreh,
That's an interesting question. Currently we are focusing on teachers' professional noticing, which is a skillset involving being situationally aware to the point of attending to certain events and actions in-the-moment, interpreting what is occurring with professional knowledge, and deciding how to respond. This is also something we work on in our teacher ed program. We do see an association between how well someone engages in noticing and their physical actions when viewing 360 video. We do not have longitudinal data with eye-tracking though.
The key association we see across our work with either field of view (FOV) or eye-tracking is that teachers who discuss students' mathematical reasoning will focus on students more (for longer durations) than teachers who discuss students' procedures (but not their conceptual reasoning). Those who talk about more generic pedagogy are often the messiest group in terms of how they go about viewing a 360 video (at least in terms of FOV - we are still analyzing eye-tracking data).
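Since the association above hinges on "for longer durations," here is a minimal sketch of how dwell time per target could be tallied from a stream of per-frame AOI labels. The labels, sampling rate, and numbers are hypothetical, not the project's actual data format:

```python
def dwell_seconds(samples, rate_hz=60):
    """Total dwell time per AOI label from a stream of per-frame labels
    sampled at a fixed rate_hz. Real exports often carry timestamps per
    sample instead of a fixed rate; this assumes the simpler case."""
    totals = {}
    for label in samples:
        totals[label] = totals.get(label, 0) + 1
    return {k: v / rate_hz for k, v in totals.items()}

# Hypothetical 3.5 seconds of viewing at 60 Hz.
stream = ["student"] * 120 + ["teacher"] * 60 + ["other"] * 30
print(dwell_seconds(stream))  # {'student': 2.0, 'teacher': 1.0, 'other': 0.5}
```

Per-label dwell totals like these are what feed distribution statistics (such as the unalikeability coefficient mentioned earlier in the thread).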
With eye-tracking studies, there are typically very small samples because of the difficulty of collecting the data (and the expense of the equipment). We have been using the Pico Neo Eye headsets which, although expensive, are MUCH cheaper than standard eye-tracking equipment. Our platform, Gaze XR, allows multiple headsets to be used at once (our data collection in April ran sessions of 4-7 headsets at a time). Our next challenge is getting this logistically implemented into a methods course.
Zohreh Shaghaghian
Kelly Costner
Karl--So great to see you involved in this truly innovative approach to learning about how teachers learn. And thanks for making your alma mater proud!
I think I'm understanding through your team's video and the discussion here that Praxi was developed and is currently being used as a tool to investigate, essentially, how teachers learn to teach through what might be termed "metainteractions"--interacting with video in which they see themselves interacting with students/content/pedagogy.
What I'm wondering now (and don't think you've mentioned) is whether this might eventually be a tool for teacher candidate or inservice teacher use. In other words, what promise does this tool have for direct use by teachers?
Karl Kosko
Associate Professor
Hi Kelly! Great question. Currently we are focusing on 360 video cases only (and not self-recorded videos). This is primarily because we are being conservative on server and storage capacity. Eventually we hope to support including recordings of one's own practice. With that said, I know of several folks who have used 360 video in this way. Kaltura, as an example, supports 360 video (single-perspective), and ThingLink does as well (the latter having better annotation tools for 360). Of course, neither of these supports multi-perspective 360, nor do they report data on "where" someone looked.
So to get at the heart of your last question - eventually we hope this tool can be used for both 360 video cases and 360 videos of one's own teaching. We would like the current lineup of tools to be combined with others to support such discussions. However, some of this will take time (primarily for capacity). We are definitely hoping to work on this in a large-scale version of this project though!