NSF Awards: 1934151
2021 (see original presentation & discussion)
Grades 6-8, Grades 9-12, Informal / multi-age
Artificial intelligence (AI) tools and technologies are increasingly prevalent in our daily lives, from the cars we drive to the media we consume. Many teens now interact with AI devices on a daily basis, but more educational materials about how AI works, including how to design AI to be less biased, are needed. It is critical to increase youth’s understanding of AI and cultivate ethical awareness. Equally important is the need to inspire and support diverse youth to pursue and persist in computer science, to help ensure the future development of more equitable AI technologies. Drawing on project-based learning principles and universal design, our interdisciplinary team of computer science, science education, and literacy teachers and researchers developed a modular AI ethics program for secondary students. Although we initially designed the program to be integrated into robotics summer camps, we shifted to a virtual format in 2020 due to COVID-19. Here, we share our experiences remotely teaching two one-week summer camps and two six-week English Language Arts classroom units, involving 75 students from suburban and rural settings. To appeal to students with diverse backgrounds and interests, we integrated literacy and computer science through team-authored stories with embedded ethical dilemmas, immersion in AI media and simulations, and student-created design projects (including comics, videos, and chatbots). Across four iterations, we found students to be highly engaged in the material and invested in learning about AI and its potential societal impacts. Daily observations and pre-/post-surveys demonstrate students’ increased understanding of AI and ethics. The five stories tested were effective in raising awareness and prompting critical discussion about issues such as fairness, transparency, privacy, and security.
Stacey Forsyth
Director, CU Science Discovery
Thanks for stopping by to check out our video! Our STEM+C project is inspired by the need for educational materials that support youth in exploring ethical issues related to artificial intelligence (AI). As AI technologies become increasingly prevalent in our daily lives, it is critical to educate and empower youth to be both critical consumers as well as ethical designers in our digital world.
Although this project was originally designed to be integrated into robotics summer camps, we shifted our plan last spring, due to COVID-19. When the pandemic limited our ability to run in-person summer camps, we transitioned to offering online camps (for middle and high school students) that integrated literacy (short stories), computer science (online AI demonstrations and activities) and multimedia design (comics, videos and chatbots), developing new activities that could work well in a virtual setting. Following our positive experience last summer, we then tested the materials in two 9th grade English Language Arts classes, held online in fall 2020.
We are currently analyzing data collected during these four programs, including pre-/post- surveys and AI drawings, as well as student artifacts and interviews. We’re interested in hearing from other educators working to bring AI education into K-12, as well as those integrating computer science (CS) with other disciplines. How have you approached exploring ethical issues related to AI and related technologies? What topics or projects have been most interesting to your students? What challenges have you faced in tackling some of these issues in formal and/or informal learning settings?
Andres Colubri
Assistant Professor
Thanks for a great video, and for taking on such an important problem! The use of stories to engage students with the challenges posed by the widespread use of AI in society sounds like a great idea. A couple of questions: first, are you anticipating creating new stories, perhaps contributed by the students themselves? Second, you mention in the video that students will build their own AI systems. That's very interesting, but it also brings a whole other element of complexity into the project. How are you planning to do it? Do you have specific software or tools in mind?
Stacey Forsyth
Director, CU Science Discovery
Great questions, Andres! Yes, we're certainly planning to create additional stories for the project, and we currently have a new story (about algorithmic bias in the criminal justice system) in review that we're hoping to test with youth soon. Working directly with teens and learning about the issues that most resonate with them helps provide ideas for new stories -- but if anyone has ideas for other stories that they'd like to see, please let us know! We hope to get to the point where students are contributing their own short stories to the project, but in this first year we focused on supporting students in telling their own stories through comics (using the comic-making software, Pixton EDU). This enabled students to create their own stories about issues of interest or concern, but in an approachable way that seemed to be really fun and engaging for most students (and, added bonus, worked well over Zoom, too!).
In terms of designing their own AI systems, we used Google's Teachable Machine to introduce students to machine learning and in particular, to highlight the impacts of biased training data. Teachable Machine is easy for students to use and understand, regardless of their prior experience with computer programming, which made it a good fit for our purposes. Some other online tools, like App Inventor and Machine Learning for Kids, offer similar functionality but provide opportunities to integrate some basic programming as well.
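For anyone curious what that lesson boils down to, here is a toy sketch of how biased training data skews a simple classifier. The nearest-centroid model and all of the numbers are invented purely for illustration; Teachable Machine's internals are far more sophisticated, but the failure mode it lets students discover is the same: when a spurious feature (here, brightness) is correlated with the class label in the training set, the model learns to rely on it.

```python
# Toy illustration (NOT Teachable Machine's actual internals) of how biased
# training data skews a nearest-centroid classifier. Each sample is a pair
# (shape_score, brightness) with a "cat"/"dog" label; all numbers invented.

def centroid(points):
    """Mean of a list of 2-D feature vectors."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def train(samples):
    """Compute one centroid per class label."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, features):
    """Return the label whose centroid is closest to the features."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(model, key=lambda label: dist2(model[label], features))

# Biased set: every cat photo is bright, every dog photo is dim,
# so brightness becomes a spurious cue for the class.
biased = [((1.0, 9.0), "cat"), ((1.2, 8.8), "cat"),
          ((5.0, 2.0), "dog"), ((4.8, 2.2), "dog")]

# Balanced set: both classes appear in both lighting conditions.
balanced = biased + [((1.0, 2.0), "cat"), ((1.2, 2.2), "cat"),
                     ((5.0, 9.0), "dog"), ((4.8, 8.8), "dog")]

dim_cat = (1.1, 2.1)  # a cat photographed in low light
print(predict(train(biased), dim_cat))    # misclassified as "dog"
print(predict(train(balanced), dim_cat))  # correctly "cat"
```

Students reach the same "aha" moment in Teachable Machine by training a webcam classifier with unrepresentative examples and then testing it on inputs the training set never covered.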
One challenge we face in our project, particularly in our online summer camps (due to time limitations), is finding the right balance between time dedicated to exploring and tinkering with the technical/AI tools and time spent diving into the relevant ethical issues. It can be tricky to allocate sufficient time for each component, so that students are able to develop their understanding of how AI works while also having time to reflect on its broader societal impacts.
Andres Colubri
Assistant Professor
People at the Processing Foundation (https://processingfoundation.org/), including Daniel Shiffman, who runs the popular Coding Train channel on YouTube, are very interested in AI ethics. That's a community that could be interested in your project, and in user-contributed stories.
Stacey Forsyth
Director, CU Science Discovery
Thanks for the suggestion, Andres - we'll look into that!
Michael Chang
Postdoctoral Researcher
I loved the opening and closing with the Echo (?) device. I appreciate the idea of stimulating young people’s imaginations with stories featuring AI-based ethical dilemmas. One question I had was how you chose those stories, and whether there was a particular focus in mind towards highlighting specific ways that AI bias manifest (e.g., racial bias). The other question I had was regarding whether young people were able to extract core concepts of AI and identify how AI bleeds into their day-to-day lived experiences — and consequently, critiques and reimaginations of how those AI applications could be reformed for the better.
Stacey Forsyth
Director, CU Science Discovery
Thanks, Michael – glad to hear you enjoyed the Alexa opening. : ) (With so few photos and videos from this year’s remote programming, we had to get a little creative with the video!)
In developing this first set of short stories (which were authored by Ellie Haberl, a member of our team), we initially identified key AI ethics issues that we wanted to highlight, including algorithmic bias, data privacy, etc. Ellie then worked her magic to create original short stories, including realistic fiction and speculative/dystopian fiction, that served as an anchor for class discussion, activities and reflection. (Unfortunately, we had to trim from the video some of Ellie’s discussion about the story development process, due to time limitations; hopefully, she’ll be able to chime in here with some additional detail!)
And yes, I think the experience was quite eye-opening for the teens who participated, many of whom came into the class without a clear idea of how AI was relevant to their lives. Students frequently referenced the stories when discussing different issues and in some cases, commented that a story had inspired them to make some type of change in their use of technology (e.g., checking their privacy settings, turning off notifications, etc.).
Bridget Dalton
Associate Professor
Hi Michael,
I'm on the team as well, and wanted to add a bit about our story development process. We intentionally kept the stories short (aiming for 1,000 words, though a few run closer to 1,200), featured teens as protagonists who encounter an AI-related ethical issue within a strong plot, balanced gender across characters, and left the stories open-ended so as not to suggest solutions, since that is part of the conversation that follows. We also checked the readability of the texts to make sure they were appropriate for middle and high school students. The stories are open education resources and can be accessed on our website. We would love to hear from you if you decide to use the stories or have suggestions for us.
Chelsea LeNoble
I absolutely love the rich engagement of students throughout the learning process. I feel like this project really exemplifies the humanization of STEM education by using stories and various forms of media to teach about this concept of ethics & AI--which has absolutely critical implications for our lives now and in the future.
Similar to Michael, I'm curious about the various forms of learning content that were used to engage students. How did your team decide on the different assignments of stories, comics, drawings, etc.? Are you finding that some of these are more effective than others? If they're equally effective, are there some that seemed more popular with students or were easier for instructors to implement?
Stacey Forsyth
Director, CU Science Discovery
Thanks, Chelsea! To some degree, the different projects are a result of our team’s collective expertise (which includes computer science, STEM/maker education, and multimodal literacy) and the pandemic. Our original plan was to integrate the AI ethics modules into robotics summer camps, but when COVID-19 restricted our ability to run in-person programs, we had to find other options that would allow students to explore and reflect on these critical issues, even when participating remotely. It was important to our team that we preserve the creative making and design aspects of the project, as we wanted to ensure that the program effectively engaged middle and high school youth (most of whom were already feeling burned out by online school) and provided students with opportunities to reimagine AI and design new solutions that addressed the critical issues we were discussing.
Comics provided a way for students to contribute their own stories in a way that was creative and fun and not intimidating. Pixton EDU was a great tool for this, as it provides all the art assets that students need so that the focus is on the storytelling process, rather than on drawing the comics themselves. It also allowed students to create avatars, which worked well for e-introductions and getting to know each other in a virtual space. We used Adobe Spark (and in the ELA class, WeVideo) to create posters and videos, again because these tools were easy for students to access and use online while still offering a lot of room for creative expression. We selected Juji as a chatbot platform because it aligned with our class goals while not requiring extensive previous technical experience.
At the end of each camp or class, students selected one of the tools that we had worked with to create a final project and they were fairly evenly divided across the three tools, which was interesting. Pixton and Spark were easy to facilitate online, but we did run into some technical glitches with the chatbot software (Juji) initially. Fortunately, the developer was fairly responsive and we had a better experience working with that program in the fall classes. We’re planning to test some additional tools this summer and based on that experience, we hope to add some new resources to our website in the coming months.
Phillip Eaglin, PhD
Hi, are the Ethics in AI curriculum and the stories available to share? How are the course materials culturally relevant for Black and Hispanic youth? Thx.
Stacey Forsyth
Director, CU Science Discovery
Great question, Phillip! One of the key issues we focus on in the class is algorithmic bias. There are numerous examples of biased decision-making by algorithms across different fields, including education, health care, employment/HR, the criminal justice system, etc. AI is playing an increasingly important role in decisions like who is eligible for medical treatment, a mortgage, or parole, and in many cases, these biased algorithms disproportionately impact people of color. In the class, we dive into this in a few different ways. For example, a story that introduces algorithmic bias (in a future dystopian world) is complemented by a video and news stories related to racial and gender bias in facial recognition systems. (If you haven’t already seen it, I recommend viewing the film Coded Bias, now on PBS, which focuses on Joy Buolamwini’s research on bias in facial recognition technologies. In the class, we show a shorter video about her work, called Gender Shades.) Students then work with Google’s Teachable Machine to see how biased training data impacts the accuracy of their models. This fall, we’re hoping to test a new story (currently being reviewed) that addresses algorithmic bias in the criminal justice system.
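To make the Gender Shades idea concrete for readers here: the core move of that kind of audit is to compute accuracy separately for each demographic group rather than reporting a single overall number. Below is a minimal sketch in pure Python; the group names and prediction records are invented for illustration and are not data from the actual study.

```python
# Minimal sketch of a disaggregated accuracy audit (the core idea behind
# Gender Shades): compute accuracy per group, not just overall.
# The records below are invented for illustration only.
from collections import defaultdict

def per_group_accuracy(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        correct[group] += int(predicted == actual)
    return {g: correct[g] / totals[g] for g in totals}

# Hypothetical classification results: a decent overall accuracy (75%)
# hides much weaker performance on one group.
records = (
    [("group_a", "match", "match")] * 9 +
    [("group_a", "no match", "match")] * 1 +
    [("group_b", "match", "match")] * 6 +
    [("group_b", "no match", "match")] * 4
)
print(per_group_accuracy(records))  # group_a: 0.9, group_b: 0.6
```

Students can run the same kind of check by eye on their Teachable Machine models: test the model separately on examples from each group and compare how often it gets each one right.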
You can find the stories and some related resources on our website (https://www.colorado.edu/project/imagine-ai/), but it’s still a work in progress. We’ll be adding additional resources to the site over the next few months.
Romelia Rodriguez
I really enjoyed watching your video. It is great to see how you addressed such an important topic in a simple but transcendent way. I wonder what the criteria for selecting the readings are?
I invite you to provide feedback to our project: https://videohall.com/p/2139
Jeremy Roschelle
Executive Director, Learning Sciences
This is great stuff. As others have said, the issues are SOOO important. And you are combining consciousness-raising with student learning about AI -- so they learn what it is along with what the problems are. I just want to know MORE! Especially about the last bit -- how students are responding. I couldn't easily infer from the drawings shown that students' understanding was getting richer. Please say more about how you are studying student conceptual change and how you are making inferences about what students are learning. What makes you most confident that students are learning? And what have you observed about who isn't learning as well as you'd like -- and why might that be?
p.s. great production values on the video as well. Congrats!
Ben Walsh
Graduate Research Assistant
Thanks for the question, Jeremy. We are building on work by researchers who have used pre- and post-drawings to understand shifts in students' understanding of science concepts, and on another group of researchers who used pre- and post-drawings to understand shifts in how students think about social problems. Because of the interdisciplinary nature of AI ethics, both of these lines of inquiry are relevant to our work.
In addition to the drawings, our analysis will look across multiple data sources, including pre- and post-surveys; student-created comics, videos, and chatbots; chat transcripts from the Zoom sessions; field notes; and interviews. We're in the middle of data analysis now (and we have a lot of data) and have two papers in the works. We're not ready to share our findings just yet, but I'd be happy to contact you when those papers are ready.
Bridget Dalton
Associate Professor
Hi Jeremy,
Thanks for your interest and questions! In addition to Ben's information about our varied data sources, we're also exploring using or adapting existing instruments (currently under development) that assess conceptual change in core AI concepts. Do you have any recommendations for us? Thanks!
Jeremy Roschelle
Executive Director, Learning Sciences
I wish I had a recommendation for you -- but you are in a cutting-edge area, so I don't. I guess I'd suggest you look up Shuchi Grover at Stanford and ask her -- she's a leader in assessment of computational thinking concepts.
Jessica Sickler
This is a really interesting discussion! I'm curious to hear more about how you -- and your teacher-partners -- are grappling with the balance (or is it a push-pull?) between the learning goals of the technology side (what it is, how it works, tinkering with it) and the learning goals around the critical thinking around the ethical debate. I can see both being super-compelling to youth, and both are important, but in limited time, what are the critical choices?
Great work!
Ben Walsh
Graduate Research Assistant
Great question, Jessica. I think that balance you speak of is very much context specific. I believe that AI ethics is a topic that belongs in a Social Studies class or Language Arts class as much as it does in a CS class. But it can't be taught the same way across disciplines. In a CS class, more time spent experiencing and developing an understanding of the underlying technology makes sense. In a Social Studies class there would likely be an emphasis on the positive and negative influences AI is having on our society (and will increasingly have in the future). In an ELA class I would expect more focus on how AI is influencing the way we communicate and consume and create media. There can't be just one way to teach this topic.
But a complete treatment of the topic in any discipline requires us to move between disciplines. We recognize teachers as experts in their own spaces, but they may need help delving into topics outside their individual comfort zones. This is why we are creating AI ethics modules focused around stories that can be adapted to different contexts, rather than a single fixed curriculum. We hope these resources can support, for example, the humanities teacher who has never had to explain algorithms or machine learning to students, or the CS teacher who has never facilitated a conversation about racial bias.
Kara Dawson
What an awesome idea to engage students in AI through stories. I also enjoyed this discussion thread as it answered many of the questions I had while watching the video. Although our content is quite different we are also trying to engage students via story - in our case a comic book about diverse characters who must learn about cryptology and cybersecurity in order to escape from an unexpected cyber adventure. Kudos to you and your team!
Bridget Dalton
Associate Professor
Hi Kara,
We would love to hear more about your project (I'm going to look at your video now!). Let's stay in touch, and perhaps we can have our teams meet via Zoom to share what we're learning about the power of stories in engaging with AI.
Kara Dawson
Bridget - This sounds wonderful. You can reach me via email at: dawson@coe.ufl.edu. Have a great weekend -Kara
Bridget Dalton
Associate Professor
Thanks, Kara, we will be in touch soon!