Presenters:
  • Jianlan Wang, Assistant Professor, Texas Tech University
  • Stephanie Hart, Director of the OnRamps Program, Texas Tech University
  • Beth Thacker, Associate Professor, Texas Tech University
  • Kyle Wipfli, Research Assistant, Texas Tech University

Measuring and improving Pedagogical Content Knowledge of student assistants i...

NSF Awards: 1838339

2020 (see original presentation & discussion)

Undergraduate

With ever-growing class sizes and the increasing adoption of non-traditional teaching methods at the undergraduate level, many large courses rely on student assistants (SAs), mainly graduate and undergraduate teaching assistants. But how do you evaluate their effectiveness in the inquiry-based classroom?

Our goal is to design and validate an instrument that measures pedagogical content knowledge of questioning (PCK-Q) by capturing real-life interactions between SAs and students and using those interactions to develop questions for the instrument. In addition to measuring SAs' pedagogical content knowledge, the instrument will require them to think critically about realistic classroom situations.

To build the instrument, we are recording both SA interactions with students and SA training sessions. The SA-student interactions take place in inquiry-based classrooms at Texas Tech University, with around 60 students per section and roughly a 1:15 SA-to-student ratio. We will use the videos to collect authentic scenarios to serve as context in developing the PCK-Q instrument. We will then use this instrument to study the impact of SAs' PCK-Q on college students' conceptual understanding of physics and critical thinking skills.

Our project aims to "open the black box" and examine the function of SAs’ pedagogical content knowledge in their interactions with students. We will develop an instrument to assess PCK-Q skills and work with SAs to improve the questioning skills needed to be a highly effective SA.

This video has had approximately 194 visits by 137 visitors from 68 unique locations. It has been played 60 times.
Original Discussion from the 2020 STEM For All Video Showcase
  • Karl Kosko

    Higher Ed Faculty
    May 4, 2020 | 04:36 p.m.

    Very interesting project!

    In studying PCK of Questioning, are you looking at any sub-constructs (such as Ball et al.'s Knowledge of Content & Teaching / Knowledge of Content & Students)? 

  • Jianlan Wang

    Lead Presenter
    Assistant professor
    May 4, 2020 | 05:22 p.m.

Thanks. Yes, we are looking at sub-constructs of PCK by synthesizing two models: Magnusson et al.'s model for science education (orientation, knowledge of curriculum, knowledge of students, knowledge of assessment, and knowledge of instructional strategy) and Ball et al.'s model for math education (knowledge of curriculum, KCS, and KCT). So far, we have four sub-constructs: 1) awareness/preference of using questions to respond to students; 2) knowledge of the curriculum used by students; 3) knowledge of students' understanding/struggles/difficulties with a specific concept; 4) knowledge of appropriate questions that could potentially be effective in guiding students.

     
  • Jianlan Wang

    Lead Presenter
    Assistant professor
    May 5, 2020 | 01:07 p.m.

Student assistants (SAs), including graduate and undergraduate teaching/learning assistants, are pivotal to non-traditional physics instruction in large classrooms. Despite the effectiveness of this model, little is known about how SAs' Pedagogical Content Knowledge (PCK) affects SA-student interactions and how those interactions promote students' learning. We are particularly interested in SAs' PCK of questioning (PCK-Q) skills. In this workshop, we will present a multi-level coding scheme to analyze SA support in different vignettes of SA-student interactions in class videos. The frequency of certain levels across multiple vignettes could suggest a measure of an SA's performed PCK-Q. We will also present a written instrument with open-ended questions assessing SAs' narrated PCK-Q in given situations, which are drawn from vignettes of authentic SA-student interactions. We will demonstrate the process of developing and validating the coding scheme and written instrument, and their use in studying SAs' impact on students' conceptual understanding of physics and critical thinking skills. More information can be found on our webpage, www.pck-q-ttu.com

     
  • Stephen Alkins

    Facilitator
    May 6, 2020 | 12:25 p.m.

    Just as a note, the link you provided seems to be broken or does not lead to a functional website.

  • Jianlan Wang

    Lead Presenter
    Assistant professor
    May 6, 2020 | 12:29 p.m.

    Thanks for the note. I will fix it soon. 

  • Stephen Alkins

    Facilitator
    May 6, 2020 | 12:36 p.m.

This is a great framework that could be used in professional development for SAs so they know how to approach students!  One aspect I would add to the criteria for assessing SA efficacy is an empathy/emotional-intelligence component.  Being able to gauge the emotional state of the student helps inform the proper approaches to use.  Additionally, flexibility of knowledge in the content area may be another criterion.  The ability of an SA to perceive and explain the multiple applicable contexts of a physics concept demonstrates flexibility with the material and a greater ability to support a range of students (i.e., meeting the students where they are in their understanding).  Perhaps this is covered in your fourth sub-construct, however.

    Thank you for the great work!

  • Jianlan Wang

    Lead Presenter
    Assistant professor
    May 6, 2020 | 12:59 p.m.

Those are GREAT suggestions. Thanks a lot. I totally agree that emotional scaffolding is critical for students in a self-paced, lab-oriented learning environment. This is the advantage of SAs: as more knowledgeable peers who have experienced the same process of physics learning, they are more likely to reach "resonance" with students. On the other hand, SAs are less prepared or skilled in dealing with students' emotionally hard times, so they are more likely to become an "answer fairy." It's challenging for SAs to find a balance between shielding students from overwhelmingly negative feelings and encouraging them to struggle through difficulties prior to the aha moment. I think this is also related to the "flexibility of knowledge" that you mentioned. It's definitely worthy of attention, but sometimes it's hard to capture students' or SAs' emotions. More precisely, it's hard to infer their emotions from video cues such as their actions or facial expressions, and to transfer that information into quantitative data. Any further suggestions would be appreciated.

     
  • Wendy Smith

    Facilitator
    May 6, 2020 | 07:34 p.m.

This is a very interesting project. I work on a research project that includes student assistants in mathematics, so I have two questions for you:

    --what training do the student assistants get? (in your case, presumably in questioning strategies)

    --How much is your PCK-Q going to be tied to physics content, or do you think the PCK-Q is somewhat content-independent, and similar questioning knowledge and skills would be applicable in other (STEM) disciplines?

     
  • Jianlan Wang

    Lead Presenter
    Assistant professor
    May 7, 2020 | 03:26 a.m.

Thanks. To your first question, we have several candidate strategies suggested by research, such as the instructor modeling appropriate questions, SA role play, and SAs reflecting on their own videos. We have actually been trying some of them. The difficulties are: 1) The time for SA-training sessions is limited. SAs, especially undergraduate learning assistants (LAs), may not have sophisticated content knowledge, so the priority is to walk SAs through the curriculum, which leaves limited time for pedagogical training. 2) We are still developing the instrument to measure SAs' PCK-Q, without which it's hard to evaluate the efficacy of particular strategies. To your second question, we will develop two instruments to measure SAs' PCK-Q: a coding schema to analyze SAs' performed knowledge of questioning and written tests to assess SAs' narrated knowledge of questioning, which will hopefully validate each other. Both are content-dependent, because that is the nature of PCK, and both contain elements that are applicable in other STEM disciplines. For the former, the types of questions and the patterns of SA-student interaction (e.g., students' lab result does not match a theory; what should an SA do?) are transferable. For the latter, what is transferable is the process of developing quizzes from authentic classroom scenarios, along with the context or background information provided in question stems.

     
  • Feng Liu

    Facilitator
    May 6, 2020 | 10:25 p.m.

Thanks for sharing this interesting project! It is important to have a reliable and valid instrument to measure student assistants' (SAs') effectiveness in order to conduct a rigorous study of SAs' impact on student outcomes. As you mentioned, the instrument is designed to measure four constructs related to SAs' pedagogical content knowledge of questioning. I would like to know more about the validation process. What types of validity evidence are you going to collect: internal structure, construct validity, concurrent validity, or all of them? What approach are you going to use for the validity study, classical test theory (e.g., factor analysis) or item response theory (e.g., 1PL IRT/Rasch modeling)?

     
  • Jianlan Wang

    Lead Presenter
    Assistant professor
    May 7, 2020 | 04:03 a.m.

This is a good question. Validity and reliability of the instruments are our top concerns. We will have two instruments: a coding schema for classroom video analysis and written tests. For the coding schema, we will check theoretical validity (whether codes are supported by educational theories), predictive validity (whether practice coded at different time points is consistent or matches a reasonable pattern of practice development), convergent validity (against narrated knowledge measured by the written tests; correlation and multiple linear regression will be used, correlation under the assumption that knowledge and practice parallel each other, and regression under the assumption that knowledge determines practice), and inter-rater reliability (probably ICC). For the written tests, we will check face validity (feedback from external reviewers and test respondents), construct validity (a Rasch model for item difficulty and SA ability, and 2-parameter or 3-parameter IRT after we convert open-ended test questions to multiple-choice ones), theoretical validity, predictive validity, convergent validity (against the coding schema), inter-rater reliability (ICC again, for the open-ended test questions), and test-retest reliability (possibly having the same SAs take the tests several times, though we are not sure, as the tests may be edited). Concurrent validity will not be considered because, to our knowledge, no existing tool quantitatively measures SAs' PCK-Q. Please correct us if we are wrong, and point us to any existing tools.
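As a rough illustration of the inter-rater reliability check mentioned above: the two-way random-effects, absolute-agreement, single-rater ICC(2,1) can be computed from an ANOVA decomposition of a subjects-by-raters ratings table. The 0-3 rubric scale and the ratings below are hypothetical illustration data, not project data; a minimal sketch:

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is a list of rows (subjects, e.g. coded vignettes),
    each row a list of scores, one per rater.
    """
    n = len(ratings)       # number of subjects
    k = len(ratings[0])    # number of raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]

    # ANOVA sums of squares
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ss_error = ss_total - ss_rows - ss_cols                  # residual

    # Mean squares
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_error / ((n - 1) * (k - 1))

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two hypothetical raters scoring six vignettes on a 0-3 PCK-Q rubric.
ratings = [[3, 3], [2, 1], [1, 1], [0, 1], [2, 2], [3, 2]]
print(round(icc_2_1(ratings), 3))  # → 0.746
```

Perfect agreement between raters yields exactly 1; the disagreements in the made-up table above pull the statistic down, and values below roughly 0.75 are conventionally read as moderate rather than good agreement.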

     
  • Feng Liu

    Facilitator
    May 11, 2020 | 10:48 p.m.

Thanks for sharing these details, Jianlan. I am impressed that you would collect such an extensive body of reliability and validity evidence. I think it will definitely help with the future adoption of the instruments by a broader audience!

  • May 12, 2020 | 11:43 a.m.

A cool project.  Thank you for articulating a bit about the PCK-Q framework.  As Feng mentioned, I am looking forward to hearing more about the validation and reliability metrics of this framework, as well as what research questions you are answering with the robot cameras documenting group collaboration and learning.

  • Jianlan Wang

    Lead Presenter
    Assistant professor
    May 12, 2020 | 11:54 a.m.

    Thanks for the encouragement. 

  • Further posting is closed as the event has ended.
