7205 Views (as of 05/2023)

Eric Greenwald, Director of Assessment and Analytics, Lawrence Hall of Science, University of California, Berkeley

Megan Goss, Middle School Literacy Director, Lawrence Hall of Science, University of California, Berkeley

Kathryn Quigley, Producer and Media Lead, Lawrence Hall of Science (https://www.linkedin.com/in/kathryn-chong-quigley-786b6943?trk=nav_responsive_tab_profile)

Supporting Teacher Practice to Facilitate and Assess Oral Scientific Argument...

NSF Awards: 1621441, 1621496

2017 (see original presentation & discussion)

Grades 6-8

In the first year of a 4-year project, researchers and curriculum developers from the Lawrence Hall of Science and Arizona State University are investigating the usability and impact of a tablet-based formative assessment tool designed to help middle school science teachers monitor and support oral argumentation in their classrooms. The DiALoG (Diagnosing the Argumentation Levels of Groups) formative assessment tool, which has shown promise in preliminary studies, offers a novel and practical way to analyze students’ collective oral argumentation practice, as well as support teachers in customizing instruction based on this analysis (Pearson, Knight, Cannady, Henderson, & McNeill, 2015). The project is enabling the refinement and expansion of the DiALoG tool and evaluation of its impact on teacher pedagogical content knowledge and formative assessment practices in widespread classroom use.

This video has had approximately 452 visits by 372 visitors from 133 unique locations. It has been played 232 times as of 05/2023.
Discussion from the 2017 STEM for All Video Showcase (20 posts)
  • Meg Bates

    Researcher
    May 15, 2017 | 11:13 a.m.

    Very interesting project!  What resources did you draw upon to design the intervention lessons for various levels of argumentation? What feedback have you gotten from teachers on these lessons?

  • Megan Goss

    Co-Presenter
    Middle School Literacy Director
    May 15, 2017 | 06:55 p.m.

    Hi Meg,

    Thanks for writing.

    We start by thinking about what a teacher's understanding of her class would be if she were to give a score (0-2) for any area (claims, reasoning, etc.), then consider what experiences and information students would need in order to support or scaffold their understanding of that particular dimension. We then do two things: first, we go to the literature to find out what, if anything, is out there that would help us form a direction to take in providing support; second, we draw on our own robust curricular work from at least a decade of developing and refining lessons about argumentation and explanation. We then create lessons from this combination of information.

     

    Megan
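    A minimal sketch of the score-to-lesson lookup Megan describes above, in Python. The 0-2 scale and the "claims" and "reasoning" dimensions come from her reply; the RML titles and the exact mapping are invented for illustration and are not the project's actual rubric or lesson catalog:

        # Hypothetical mapping from a low dimension score to a Responsive
        # Mini-Lesson (RML); titles and pairings are invented placeholders.
        RML_LIBRARY = {
            ("claims", 0): "RML: What makes a claim?",
            ("claims", 1): "RML: Strengthening a claim with evidence",
            ("reasoning", 0): "RML: Connecting evidence to claims",
            ("reasoning", 1): "RML: Evaluating the logic of an argument",
        }

        def suggest_rmls(scores):
            """Return RMLs for every dimension scored below 2 (full credit)."""
            return [RML_LIBRARY[(dim, score)]
                    for dim, score in scores.items()
                    if score < 2 and (dim, score) in RML_LIBRARY]

        # Example: the class stated claims well but reasoned weakly.
        print(suggest_rmls({"claims": 2, "reasoning": 1}))
        # -> ['RML: Evaluating the logic of an argument']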

     
  • Megan Goss

    Co-Presenter
    Middle School Literacy Director
    May 16, 2017 | 12:00 p.m.

    Meg, I forgot to answer your second question! We are currently piloting both the app and the RMLs in 5 classrooms and look forward to hearing what the teachers think about each aspect. We also held two focus groups with teachers in February, where we showed them both the tool and the RMLs and asked for feedback. At that time, the teachers had never actually taught the lessons, so their initial ideas were cursory, but in general they all liked that there were many options for teachers to choose from and that the RMLs were so pointed and focused on particular aspects of oral argumentation.

     
  • Rachel Shefner

    Higher Ed Faculty
    May 15, 2017 | 02:50 p.m.

    This tool is very interesting to me. In our work we utilize the claims, evidence, and reasoning framework, as well as the 9 Talk Moves for classroom discussions, and we are always looking for different ways to marry these strategies. Your project does this in a very user-friendly way. I am wondering about the RMLs. You say that these align with the different levels that your rubric measures, but do they also speak specifically to NGSS content pieces (DCIs)? I heard you refer to supporting teacher PCK in the video, but am wondering where the content-specific supports come in.

  • Eric Greenwald

    Lead Presenter
    Director of Assessment and Analytics
    May 16, 2017 | 12:39 p.m.

    Hi Rachel, that's a great question. Our current focus for the RMLs is supporting NGSS practices through content-embedded learning experiences--but we've been thinking about it in terms of what is foregrounded/backgrounded versus what is present/absent. In developing the tool and RMLs, we're trying to strike a balance between being contextualized (so that the practices aren't divorced from the science content that makes engagement in the practice meaningful) and being somewhat content-agnostic (so that teachers can use the RMLs in conjunction with whatever content they are teaching). The nice thing about this project is that we get to spend the first phase of the study piloting and revising the tool and the RMLs to get this balance right. Our expectation is that the curriculum used in conjunction with the tool and RMLs will include additional supports for content learning. For example, for the RCT phase of the study, we are using the tool in conjunction with Amplify Science, which includes formative assessments and suggested instructional adjustments that foreground the DCIs as students engage in oral argumentation.

     
  • Rachel Shefner

    Higher Ed Faculty
    May 18, 2017 | 04:26 p.m.

    Megan and Eric, that does make sense. We also started our project in a content-agnostic way by first focusing on the SEPs. But as we focused more on argumentation in our formative assessment work we found that we saw some content misconceptions emerging and students providing evidence for an incorrect claim. That's where it becomes difficult to get the balance right. I am also wondering what your work with teachers on this was like. Did you do anything with calibrating among your group of teachers? We have found that "consensus discussions" among teachers are very revealing, and I would think that as you scale this up you might want to get convergence on what the levels mean. Or maybe that does not matter? Anyway, I would love to have your take on our project video as there is some intersection in what we are looking at. 

  • Eric Greenwald

    Lead Presenter
    Director of Assessment and Analytics
    May 19, 2017 | 01:59 p.m.

    Thanks, Rachel--the "consensus discussions" you describe are very much in line with the efforts Bryan Henderson at ASU engaged in during the tool's initial development, and we're currently engaging pilot teachers in something similar (both as part of the PD on using the tool and as part of the practitioner review/feedback cycles we'll run as we iterate the tool for the RCTs in the next phase). Along those lines, and coming from a formative assessment background, the learning you describe (for teachers, and for us as researchers) really resonates--I'm reminded of Beverly Falk and Suzanna Ort's Sitting Down to Score (1997).

    And I will definitely take a look at your project video!

  • Rachel Shefner

    Higher Ed Faculty
    May 19, 2017 | 02:16 p.m.

    Great! I look forward to your feedback. Thanks for the connections, too.

  • Megan Goss

    Co-Presenter
    Middle School Literacy Director
    May 15, 2017 | 07:04 p.m.

    Hi Rachel,

    Thanks for your question. 

    The RMLs are designed to serve any grade level (6-8) at which a given teacher needs them; because of this, we made them fairly content-neutral. In addition, we didn't want content knowledge, or the lack of it, to interfere with any student's successful participation in an RML, since the goal of each RML is to practice a particular aspect of argumentation. Because of these considerations, we needed to prioritize the argumentation over any particular NGSS piece. However, we do try to give almost all RMLs a light focus on science content and ideas, without making that content difficult to access. Hope this makes sense -- please feel free to write back if you want to talk more!


    Megan

     
  • Steven Rogg

    Facilitator
    Associate Professor of Education - STEM
    May 15, 2017 | 10:35 p.m.

    This is intriguing! I love your aim to advance PCK in scientific argumentation. I can see that the model and the tool should have strong potential for informing practice. Could you say something about the research base for the tool and your logic model for teacher change? I'm reminded of classic work such as Bales's (1970) interaction process analysis and wait time (Budd Rowe, 1972). Such potential!

  • Eric Greenwald

    Lead Presenter
    Director of Assessment and Analytics
    May 16, 2017 | 12:57 p.m.

    Thanks, Steven! Development of the tool itself was guided by a body of literature similar to the work you cite; in particular, we drew on the work of Michaels, O'Connor, and Resnick (2008) on Accountable Talk. The Michaels et al. (2008) notions of Accountability to the Standards of Reasoning and Accountability to Knowledge guided the assessment of argument products. That is, to evaluate the substantive content of oral arguments, we created items measuring the degree to which students were accountable both to the logical requirements of a valid argument and to the scientific accuracy and relevance of their utterances. As for the assessment of argumentation processes, items were guided by the Michaels et al. notion of Accountability to the Learning Community, which emphasizes respect for, and critical attention to, the contributions of others so that ideas can build on one another.

    In thinking about how the tool will advance PCK in particular, we drew heavily on the formative assessment literature, recognizing that supporting formative assessment of oral argumentation provides particular leverage for improving science teaching and learning. The effort to design a teacher tool that facilitates the use and usefulness of formative assessment is motivated by the well-documented positive impacts of formative assessment on student learning (Black, Harrison, Lee, Marshall, & Wiliam, 2004; Black & Wiliam, 1998; Fuchs, Fuchs, Hamlett, & Stecker, 1991; Hattie, 2009; Marzano, 2004; McTighe & Brown, 2005; and many others!).

    In particular, we're proposing that giving teachers a means of formatively assessing science discourse can support a shift from an authoritative classroom discourse (Nystrand et al., 1997), where the teacher presents science from a place of authority, to a more dialogic discourse (Chinn, O'Donnell, & Jinks, 2000; Smart & Marshall, 2012) that allows students to play a more active role in the direction and goals of the discussion. This expectation is consistent with the pedagogical shifts associated with formative assessment, including a shift toward a more constructivist learning model, one in which students play a more active role in their own learning (Black & Wiliam, 1998; Perrenoud, 1991; Shepard, 2000; Torrance & Pryor, 2001; Tunstall & Gipps, 1996; Whiting, Van Burgh, & Render, 1995).

    Such shifts suggest that tools to support formative assessment are likely to impact teacher practice and professional knowledge as well. In fact, improvements in teacher PCK and Pedagogical Learner Knowledge (PLK) (Grimmett & MacKinnon, 1992) associated with formative assessment may be core to the mechanism through which student effects are realized (here, we're building on work such as: Darling-Hammond, Ancess, & Falk, 1995; Falk & Ort, 1998; Franke, Carpenter, Levi, & Fennema, 2001; Goldberg & Roswell, 2000; Kornhaber & Gardner, 1993; Puttick & Rosebery, 1998; Whiting, Van Burgh, & Render, 1995).
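    To make the construct design above concrete, here is a hypothetical sketch of how items might be grouped under the Accountable Talk constructs Eric names. The grouping follows his description, but the item wording is invented for illustration and is not the actual DiALoG instrument:

        # Hypothetical grouping of DiALoG-style items. The constructs come
        # from Michaels et al. (2008) as described above; the item wording
        # is invented for illustration.
        DIALOG_ITEM_SKETCH = {
            "argument products": {
                "Accountability to the Standards of Reasoning":
                    "Do claims follow logically from the evidence offered?",
                "Accountability to Knowledge":
                    "Are contributions scientifically accurate and relevant?",
            },
            "argumentation processes": {
                "Accountability to the Learning Community":
                    "Do students attend to, build on, and respectfully "
                    "critique one another's ideas?",
            },
        }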

     
  • Jennifer Yurof

    Facilitator
    May 16, 2017 | 09:47 a.m.

    Thanks for sharing your work! Are there plans to make the app accessible to other tablet platforms? From your video, the app appears to be extremely user-friendly and I appreciate the links to the RMLs. As a classroom teacher, I would have found the RMLs to be one of the most beneficial aspects of the app. Have you considered allowing for student self-assessment from the app as well? 

  • Eric Greenwald

    Lead Presenter
    Director of Assessment and Analytics
    May 16, 2017 | 01:10 p.m.

    Hi Jennifer, building on Megan's comment, a big part of the current phase of the study is getting teacher feedback and suggestions for how to make the tool as user-friendly as possible. While the iPad version is the current prototype, we definitely intend to make the tool platform-agnostic (likely web-based and operable through any browser) once we've had a chance to iterate a bit on the design.

     

     
  • Megan Goss

    Co-Presenter
    Middle School Literacy Director
    May 16, 2017 | 11:56 a.m.

    Hi Jennifer,

    Yes, there are plans to do more with the app -- I'll let my colleague, Eric Greenwald, address that aspect. As for the RMLs, I agree -- the formative assessment aspect of the app is really only as powerful as the interventions that you offer afterwards. I love the idea of a student self-assessment -- thank you for that! As we create each RML, we keep in mind that the teacher should be able to explain why she is having students participate in the chosen RML -- a student self-assessment would really help with that!

  • Jeremy Roschelle

    Researcher
    May 16, 2017 | 11:20 p.m.

    Hi Eric,

    Nice to "see" you -- and a cool tool. Sounds like you may end up with a huge dataset of "oral arguments" that have been scored. Ever think about using that dataset to train speech recognition? Would be a compelling Cyberlearning proposal...

    jeremy

     

  • Eric Greenwald

    Lead Presenter
    Director of Assessment and Analytics
    May 17, 2017 | 07:36 p.m.

    oh, now that's a fun idea!

    one thing we've struggled with for the current research effort is the technical difficulty of accurately transcribing (or even recording with fidelity) the audio of class-level student discourse. We have a nice set of classroom videos that we use to train scorers (and some that we use as part of the PD with teachers who are using the tool), but we have found audio files alone difficult to work with. I'd certainly love any advice you might have on capturing multi-voiced audio!
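    A minimal sketch of one common approach to the multi-voice problem Eric describes: pair automatic transcription with speaker diarization. The libraries and model names below (openai-whisper and pyannote.audio) are illustrative assumptions, not tools the project reports using:

        # Hypothetical pipeline: transcribe classroom audio, then label
        # each segment with a diarized speaker. Assumes the open-source
        # `openai-whisper` and `pyannote.audio` packages; neither is part
        # of the DiALoG project.
        import whisper
        from pyannote.audio import Pipeline

        asr = whisper.load_model("base")
        result = asr.transcribe("class_discussion.wav")  # timestamped segments

        # Pretrained diarization pipeline (requires a Hugging Face token).
        diarizer = Pipeline.from_pretrained("pyannote/speaker-diarization")
        diarization = diarizer("class_discussion.wav")

        # Attribute each transcribed segment to whoever spoke at its midpoint.
        for seg in result["segments"]:
            mid = (seg["start"] + seg["end"]) / 2
            speaker = next(
                (label for turn, _, label
                 in diarization.itertracks(yield_label=True)
                 if turn.start <= mid <= turn.end),
                "unknown",
            )
            print(f"[{speaker}] {seg['text'].strip()}")

    Even with a pipeline like this, overlapping student speech in a whole-class discussion remains hard for both components, which is consistent with the difficulty described above.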

     

  • Heidi Larson

    Facilitator
    Project Director
    May 17, 2017 | 05:24 p.m.

    Thanks for sharing your video! I'm wondering if you will be able to measure long-term impacts of this app, and if you are planning on scaling it up. 

  • Eric Greenwald

    Lead Presenter
    Director of Assessment and Analytics
    May 17, 2017 | 07:48 p.m.

    Yes to scaling up! We will be researching the tool in relation to its use as part of the Amplify Science middle school curriculum, which includes argumentation-rich "Science Seminars" (modeled after the Socratic seminar) in each unit. The ultimate goal is to develop the tool for wide-scale use, and a big part of our initial work is collaborating with teachers to make it as useful and usable as possible.

    Our current study will feature an RCT covering a few months of instructional use of the tool, for which we will be examining things like changes in teacher PCK in relation to using the tool (and the Responsive Mini-Lessons meant to accompany the tool), but a more longitudinal study is outside of the current research plan. That would be super interesting, though, and I'd love to think about ways to make that happen with the next study of DiALoG!

     

     
  • Steven Rogg

    Facilitator
    Associate Professor of Education - STEM
    May 18, 2017 | 06:47 a.m.

    Eric, I much appreciate your response. It is possible that you cited every relevant paper in my Zotero library, and pointed me to some I need to read! I'll need to come back to this. Thanks! To return the favor, I should call your attention to an app that shares some of these aims in support of teacher inquiry: http://lessonnote.com/

  • Eric Greenwald

    Lead Presenter
    Director of Assessment and Analytics
    May 19, 2017 | 02:01 p.m.

    that looks like a super cool tool--thanks for the tip!

     

  • Further posting is closed as the event has ended.