7528 Views (as of 05/2023)

CYNTHIA D'ANGELO

SRI International

Speech-Based Learning Analytics for Collaboration

NSF Awards: 1432606

2016

Grades 6-8

Developing tools and analytics to automatically rate the quality of student collaboration using speech data.

This video has had approximately 455 visits by 338 visitors from 75 unique locations. It has been played 207 times as of 05/2023.
Discussion from the NSF 2016 STEM For All Video Showcase (18 posts)
  • William Finzer

    Senior Scientist
    May 16, 2016 | 11:17 a.m.

    Wow, what an amazing idea! I love that you can gauge levels of collaboration from speech without having to understand the words. It makes sense!

    The “entropy” graph is particularly interesting to me because of an entropy concept we’re using in a project. Can you say a bit about what it is measuring?

  • Cynthia D'Angelo

    Lead Presenter
    Senior Researcher
    May 17, 2016 | 01:49 p.m.

    Thanks Bill! The entropy graph that I briefly show at the end is a bit technical, but basically it looks at how a given variable (e.g., the total duration of a student's speech) is distributed across the group of three students. We tried Shannon entropy as a single value to describe that distribution, but it seems to be turning out to be useful only for differentiating groups at the ends of the spectrum, not in the middle (the entropy value does not distinguish well between different types of groups there). We think we need two values, not one, to describe the distribution and distinguish the groups better (which is still better than the original three values, I suppose).
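    [Editor's note: as a rough illustration only, not the project's actual code, here is a minimal sketch of the Shannon entropy described above, computed over each student's share of the group's total speaking time. The function name and the example durations are made up for illustration.]

```python
import math

def shannon_entropy(durations):
    """Shannon entropy (in bits) of how total speaking time is
    distributed across the members of a group.

    durations: per-student speaking times (e.g., seconds).
    """
    total = sum(durations)
    # Convert durations to a probability distribution, dropping silent students
    # (p * log2(p) -> 0 as p -> 0, so this is the standard convention).
    probs = [d / total for d in durations if d > 0]
    return -sum(p * math.log2(p) for p in probs)

# One student dominates the talk: entropy is low.
print(shannon_entropy([50.0, 5.0, 5.0]))

# Talk is shared evenly among three students: entropy is at its
# maximum for three speakers, log2(3) ≈ 1.585 bits.
print(shannon_entropy([20.0, 20.0, 20.0]))
```

    This also shows why a single entropy value struggles in the middle of the spectrum: very different groups (e.g., one quiet student vs. mildly uneven sharing) can produce similar intermediate entropy values.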

  • Miriam Gates

    Facilitator
    Researcher
    May 16, 2016 | 01:35 p.m.

    This is a great idea — and will begin to address some of the struggles for teachers in group work. I’m curious to know about the items that you are using in the initial phase of data collection. Are these items based on existing curricula or were they written for this project? If they are based on existing curricula, how did you go about adapting them for this purpose?

  • Cynthia D'Angelo

    Lead Presenter
    Senior Researcher
    May 17, 2016 | 01:52 p.m.

    The items we are using are from the Cornerstone math curriculum (based on SimCalc) that Jeremy Roschelle and other colleagues at SRI have been developing for over a decade now. We had to do a bit of adaptation, but not too much. Basically, we created an iPad app that lets students use remote controls to register their answers. (Part of the reason was so we could log their actions better, and part was so each student would be in charge of their own answer; in other formats, like a shared keyboard, some students would simply take over answering for another student.) So the form of their interaction changed, but the content of the items did not.
    Thanks for your comment!

  • Miriam Gates

    Facilitator
    Researcher
    May 18, 2016 | 12:58 a.m.

    I was interested to note that your items were designed for groups of 3. Was there a reason that you wanted 3 students to work together? Could they be adapted for groups of 2 or 4?

  • Cynthia D'Angelo

    Lead Presenter
    Senior Researcher
    May 18, 2016 | 12:44 p.m.

    The items we were using were already designed for groups of 3, so that required the least amount of content adaptation. But, we did purposely want to avoid groups of 2 because that is a different dynamic in many cases. (We have some data on pairs using the system where they share the responsibility of the third response, but I don’t think we’ll have enough of that situation to make any strong claims about differences.) I think that groups of 4 would be more similar to the groups of 3 in how they distribute and spread out the intellectual work of the group, so it could probably be easily adapted to that size.

  • E Paul Goldenberg

    Facilitator
    Distinguished Scholar
    May 16, 2016 | 05:00 p.m.

    The technical challenge is fascinating and the goal could be a real boon to classrooms. I realize this is early in the project, still too early even to say what will ultimately become possible (with current technology), but I have a question that might be answerable even at this early stage. As I understand the technique, you are hand coding features of collaboration that you care about, machine-detecting features of language (timing, prosody, etc.) and using machine learning to find relationships between the two that will (eventually) allow the machine to make reliable claims, useful to teachers, about the former based on the latter. Fully understanding that you cannot yet make such claims, and are not sure yet even what claims you will be able to make, what /kinds/ of claims are you imagining could become possible?

  • Cynthia D'Angelo

    Lead Presenter
    Senior Researcher
    May 17, 2016 | 02:02 p.m.

    It is still early in the process, as we are currently working on full analysis of the first phase of data collection, but basically we are trying with this exploratory project to see whether or not there is any linkage or relationship between the machine-detectable features of language and the quality (and/or features) of collaboration. I’m hoping by the end of the project we can produce a usable prototype system that will be able to give a teacher some kind of sense of how students were collaborating in their small groups and whether or not they need to intervene. We might also, depending on the results, be able to give specific feedback about what a particular group could do to improve their collaboration (e.g., ask each other more questions or make sure all three people are contributing to the discussion). I think we will be able to say groups with more ‘good collaboration’ are characterized by [these] speech features and groups with less collaborative activity typically have [these] kinds of features.
    Thanks for your question!

  • E Paul Goldenberg

    Facilitator
    Distinguished Scholar
    May 17, 2016 | 10:03 p.m.

    Fascinating! I’m really eager to hear what you learn from this.

  • Cynthia D'Angelo

    Lead Presenter
    Senior Researcher
    May 17, 2016 | 02:04 p.m.

    Hi everyone! Thanks for stopping and watching this video. I am really excited about sharing this project with you and hearing what you think about it.
    I especially would love to hear from teachers and educators about how this could help you incorporate more collaboration in your classroom and what kind of feedback from a system like this you would want. When you tell your students to collaborate in class, what do you think that means to them?

  • Courtney Arthur

    Facilitator
    May 17, 2016 | 04:57 p.m.

    This is such a great application to gather useful data for teachers. I wonder what grade levels this has been used with and whether there is a way to gauge how/when answers are changed based on the conversations students have within their groups? It would be interesting to capture some of the reasoning behind their answers.

  • Cynthia D'Angelo

    Lead Presenter
    Senior Researcher
    May 17, 2016 | 05:03 p.m.

    Right now we are using it with middle school students.
    We are collecting the log data from the application, so we know every time a student changes their answer, as well as when a student presses their button to submit an answer choice. Additionally, one of the things we are manually coding is whether or not students give explanations about their answer choices to their fellow students. Early results suggest that groups with more explanations tend to have better collaboration.

  • Roger Taylor

    Assistant Professor
    May 20, 2016 | 05:26 p.m.

    Hello Cynthia, this is one of the most creative educational technology projects I’ve seen. I’m very much looking forward to hearing more about your project in the future.

  • Cynthia D'Angelo

    Lead Presenter
    Senior Researcher
    May 20, 2016 | 05:52 p.m.

    Thanks so much!

  • Jacqueline Barber

    Director of the Learning Design Group
    May 21, 2016 | 06:11 p.m.

    I really enjoyed learning about this project, and look forward to hearing about your progress!

  • Cynthia D'Angelo

    Lead Presenter
    Senior Researcher
    May 23, 2016 | 02:15 p.m.

    Thanks!

  • Susan Doubler

    Senior Researcher
    May 23, 2016 | 11:50 a.m.

    Cynthia, the possibility door is just opening for speech recognition and for video analytics. While you are working with speech, we are working with video and asking some similar questions, e.g., length of speech turn and number of turns at talk in an exchange. We’d love to be able to identify individual speakers, but discovered that children’s voices aren’t as differentiated as adults’. For now, we are relying more on gesture, body position, and gaze. Imagine a future when we bring advances in speech recognition and video analytics together! Thanks!

  • Cynthia D'Angelo

    Lead Presenter
    Senior Researcher
    May 23, 2016 | 02:18 p.m.

    Yes, there is a big issue with distinguishing children’s voices. Even when we’re watching the video it is sometimes hard to tell who is talking. This is one of the reasons why we collected individual audio channels for each student. But we also collected a single audio stream of all three students in each group, so one thing we’ll be looking at is whether we can distinguish them, and also whether we need to. In many cases, it seems possible that there are other features of speech, besides who is talking, that can give us enough information about the collaboration.
    Thanks for your comment! Good luck with your project.

  • Further posting is closed as the event has ended.