NSF Awards: 1440753
2016 (see original presentation & discussion)
Grades 6-8, Grades 9-12
The ASSISTments TestBed gives educational researchers easy access to an online learning environment (ASSISTments) in which they can test their hypotheses via randomized controlled trials, tapping into a large sample pool of students from around the country and retrieving vast, organized datasets.
E Paul Goldenberg
Distinguished Scholar
You seem to be making two kinds of claims: use of ASSISTments was linked to an average 10-point gain for students, and the purpose of ASSISTments was to help researchers with RCTs. I tried, first, to understand the connection by thinking that the first is a result (of some intervention) that was revealed by the RCT facilitated by ASSISTments, but you seem to be saying that ASSISTments /was/, in that case, the intervention. Could you clarify?
Korinn Ostrow
Graduate Research Assistant
Thanks for your interest in our project!
Yes, technically there are two stories we are presenting in this video. The primary story is that the ASSISTments TestBed, a framework that relies on the ASSISTments tutoring system for its subject pool and easy authoring tools, allows researchers to conduct randomized controlled trials within authentic learning environments. The secondary story stems from our desire to promote the TestBed with recent findings from the Efficacy Study of ASSISTments. As ASSISTments is the platform driving TestBed research, it is helpful for those bringing their work to us to know that it is actually an effective learning environment, and that any changes they make to content or delivery within our system will be compared against effective practice.
More on the Efficacy Study:
The Efficacy Study conducted by SRI revealed a ~10-point gain in students' state test scores following a large, multi-year randomized controlled trial conducted at the school level. The sample comprised a series of schools in Maine. These schools were paired for similarity, and each pair was randomly split to either receive ASSISTments immediately (year 1) or wait until year 2 to use the system. Teachers were trained on how to use the system (with respect to each cohort's timing), and following the first year, schools using the system saw the ~10-point gain, while those waiting saw no improvements beyond expected yearly increases. So yes, these findings were the result of a large-scale, school-level randomized controlled trial.
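The pair-matched, school-level randomization described above can be sketched as follows. This is a minimal illustration; the school names, labels, and seed are invented for the example, not drawn from the actual study:

```python
import random

def assign_pairs(school_pairs, rng=None):
    """Randomly split each similarity-matched pair of schools between
    immediate treatment (year 1) and delayed treatment (year 2)."""
    rng = rng or random.Random()
    assignment = {}
    for school_a, school_b in school_pairs:
        # A fair coin decides which school in the pair goes first.
        if rng.random() < 0.5:
            immediate, delayed = school_a, school_b
        else:
            immediate, delayed = school_b, school_a
        assignment[immediate] = "year 1 (immediate ASSISTments)"
        assignment[delayed] = "year 2 (delayed ASSISTments)"
    return assignment

# Each pair always contributes one treatment school and one waiting school.
pairs = [("School A", "School B"), ("School C", "School D")]
print(assign_pairs(pairs, random.Random(0)))
```

Pairing before randomizing is what lets the comparison control for school similarity: every waiting school has a matched counterpart already using the system.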
Please let us know if you have any other questions!
Miriam Gates
Researcher
I’m curious about the intervention and research tool too.
1. Can you discuss a bit more about the use of ASSISTments as a teaching tool (as in the efficacy study described above)? I believe it was used as a formative tool in the classroom? What content or mathematical practices were targeted in this case?
2. For individuals who are interested in this as a research tool, I believe there are items already embedded that can be used. Can you describe how these would be used in an RCT, for example? That is, would the ASSISTments intervention be occurring in conjunction with the new intervention?
Korinn Ostrow
Graduate Research Assistant
Thanks for the curiosity!
Yes, in the efficacy study ASSISTments was used as a teaching tool in the classroom for formative assessment – this is what the system is all about. Even the name ASSISTments is meant to blend 'assistance' for the student and 'assessment' for teachers.

ASSISTments offers various content. First, we have mastery problem sets called "Skill Builders" that are aligned to the Common Core State Standards. These sets require students to continue solving problems on a certain skill (e.g., the Pythagorean theorem) until they can accurately answer three consecutive problems. Three is our default, but teachers (and researchers) can change this setting to make the sets harder or easier. There is also a series of static problem sets that offer additional content aligned to the Common Core, but that require students to solve all problems in the set (or select problems chosen by the teacher). The third main type of content within ASSISTments is what we call "Book Work." Essentially, grant funding has allowed us to provide correctness feedback, and in some cases richer tutoring, for 20+ of the top textbooks used in middle schools around the country. Teachers can find their book and access organized assignments that align with the book's content. This is done without infringing on copyright, as students still need to consult their textbooks: a problem in ASSISTments would simply show "Page 304, problem 2." The student then enters their answer and receives immediate feedback on their accuracy rather than waiting until the next day in class.

This is one of our primary talking points for ASSISTments! Immediate feedback improves learning by allowing students to self-correct and to know where their strengths and weaknesses lie. All of this information is logged for the teacher, making paper grading a thing of the past.
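The Skill Builder mastery rule described above (a configurable streak of consecutive correct answers, default three) can be sketched roughly like this. This is an illustrative reimplementation, not ASSISTments' actual code:

```python
def reached_mastery(responses, threshold=3):
    """Return True once the student has answered `threshold` consecutive
    problems correctly.  `responses` is an ordered iterable of booleans
    (True = correct), in the order the problems were answered."""
    streak = 0
    for correct in responses:
        # A correct answer extends the streak; a wrong answer resets it.
        streak = streak + 1 if correct else 0
        if streak >= threshold:
            return True
    return False

# The wrong answer on problem 2 resets the streak, so mastery is only
# reached after three correct answers in a row (problems 3-5).
print(reached_mastery([True, False, True, True, True]))  # → True
```

Raising or lowering `threshold` corresponds to the setting teachers and researchers can adjust to make a Skill Builder harder or easier.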
Teachers are able to look across their class's performance and notice where students shared misconceptions (what we call 'Common Wrong Answers'), and they can easily log student grades and drive classroom discussion using a variety of robust reports. In many cases, teachers actually involve students in writing mistake messages for future students who may share similar misconceptions, making the entire process learning-fueled. So, long story short, ASSISTments is a tool for formative assessment, and the mathematics content used in the efficacy study included bookwork and any additional content aligned to the Common Core that the teacher assigned throughout the school year in which they participated in the ASSISTments 'treatment.' All work followed their traditional curriculum (ASSISTments is not its own curriculum like some other popular tutoring systems), and essentially provided the benefits of immediate feedback, tutoring where possible, and formative assessment for teachers.
For those interested in conducting research within ASSISTments, all of the 'Skill Builder' and 'Static' content (as discussed above) can usually be used and manipulated within research designs. In the case of Skill Builders, researchers can build their designs into the mastery-based assignments that teachers naturally assign to their students. The log files normally produced by the system then reflect the experimental conditions, and researchers are able to analyze their data accordingly. Using Skill Builders tends to capture the most authentic learning environments, and samples grow slowly over time as more teachers assign the problem set with the embedded experiment. Other researchers opt for more orchestrated designs run in static problem sets. These usually include more formal pre/post testing and have a longer intervention period, or may span multiple problem sets. Studies of this type require working with teachers directly and result in smaller sample sizes.
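As a rough sketch of how an experiment embedded in a problem set might pair random condition assignment with the system's normal logging (the condition names, seed, and log fields here are hypothetical illustrations, not ASSISTments' actual schema):

```python
import random

def assign_condition(student_id, conditions=("text feedback", "video feedback"),
                     seed="demo-2016"):
    """Deterministically assign a student to one experimental condition,
    so the same student always sees the same condition across sessions."""
    # Seeding per student keeps assignment random across students but
    # stable for any one student.
    rng = random.Random(f"{seed}:{student_id}")
    return rng.choice(conditions)

# Condition labels land in the log rows alongside the usual performance
# data, so the analysis can compare conditions after the fact.
log_rows = [{"student": sid, "condition": assign_condition(sid)}
            for sid in (101, 102, 103)]
```

From the student's point of view this is just a math assignment; only the logged condition label distinguishes what they were shown.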
ASSISTments is a learning platform. There is not necessarily an 'ASSISTments intervention,' except in the case of the efficacy study. Any research brought to the system occurs in conjunction with other research, and with larger tests of efficacy or effectiveness. Teachers can access a vast amount of content, some of which includes experiments that run concurrently. A teacher may assign multiple experiments to their students (usually without students knowing that an experiment is taking place; they see it as a math assignment), and some of the content assigned to students during the efficacy study surely had embedded experimental content. Hopefully this answers the final portion of your question?
Thanks for giving us an opportunity to explain more about the platform!
Miriam Gates
Researcher
This is all really helpful. Do researchers generally work in schools where students and teachers are familiar with the platform or do they tend to bring it to the schools where they are doing research?
Korinn Ostrow
Graduate Research Assistant
It seems most common that researchers coming to conduct studies within the platform are working within schools in which students and teachers are already using the system. In the efficacy study, most if not all of the teachers were new and went through teacher training, which is offered by our team. Studies that are deployed broadly to our user population are picked up by those already using the system, with differing levels of experience. In some cases we do have more complicated studies in which researchers or members of our team work with teachers and classrooms, but this is also often done with those familiar with the system. I think much of this has to do with the fact that one of the enticing parts of conducting research in the platform is tapping into our pre-existing user population.
It is likely that more and more researchers will work with their own samples from their own schools, or may even use sources like Mechanical Turk to gather users. We have a new way of delivering ASSISTments content that does not require teachers to have a formal account and offers a lightweight version of working with the platform – this is called ASSISTments Direct. Researchers can use this tool to provide links for their users to easily access experimental content.
Courtney Arthur
I am curious if there were any challenges that you encountered with this work?
Korinn Ostrow
Graduate Research Assistant
That is a great question. As a PhD student working with the ASSISTments team, I think much of what we do involves hitting obstacles and finding ways to work through them (our platform is largely built and sustained by graduate students at Worcester Polytechnic Institute). It is safe to say there are obstacles in all areas of working with technology in education.

As a teacher tool, or in the context of the efficacy study, challenges range from helping teachers and students use the platform (we offer training, we have a 'help desk' for teachers and students to reach out to us with issues or questions, and we are always expanding the system with new features or fixing bugs), to trying to maintain fidelity in large-scale studies when not all teachers at all schools may be willing to infuse technology into their classrooms (school-level randomization carries greater obstacles than student-level randomization, which is our usual approach with the smaller-scale RCTs that researchers can conduct using the TestBed).

The design of experiments can pose challenges, as researchers bring us ideas that still have to work in the context of the platform. Researchers ask great questions, and sometimes it is a matter of figuring out what we would need to implement within the system to actually run a study, and whether those changes are feasible.

The data itself can also pose challenges. We are trying to establish the Assessment of Learning Infrastructure (ALI) as a universal tool for researchers to access preliminary analyses and multiple formats of organized, preprocessed data. This is no small task when experiments differ dramatically and each researcher has different needs and a different level of experience with our system or with big data in general. We can also pull a lot of information from our database, and we are in an ongoing conversation about which variables to include as the most powerful covariates.
Finally, promoting open science is difficult but growing easier thanks to efforts like the Open Science Framework (https://osf.io), which encourages researchers to preregister their experimental designs, promotes replication, and combats the file drawer problem, in which null results are hidden away to the detriment of others who may eventually try to ask the same questions.
Challenges abound in most STEM and education research, especially when technology is involved. They make the job worth doing! If you have questions about specific areas of our work, I can try to point out additional challenges or limitations – let us know!
E Paul Goldenberg
Distinguished Scholar
Thanks, Korinn, for your reply. That clarified completely.