Designing Assessments in the Age of Remote Learning
As we start to ramp up our blog series via CSEdResearch.org, we reached out to Miranda Parker to learn about what she’s researching these days in K-12 CS Education. Her work is both timely and…well, read on to learn more!
Currently, I’m working as a postdoctoral scholar with a team at the University of California, Irvine on two projects: NSF-funded CONECTAR and DOE-funded IMPACT. These projects aim to bring computational thinking into upper-elementary classrooms, with a focus on students designated as English Learners. Our work is anchored in Santa Ana Unified School District, where 96% of students identify as Latino, 60% are English Language Learners, and 91% receive free or reduced lunch. A lot of fantastic research has come out of these projects, notably work from my colleagues at UCI that is well worth a look.
My primary role is to help with the project’s assessments. There are many interesting challenges to assessing computational thinking learning for upper-elementary students, and they had only grown more difficult by the time I started in April, amid emergency remote teaching. I want to share some challenges we’ve faced and are considering in our work, in part to start a conversation with the research community about best practices in a worst-case learning situation.
The confounding variables have multiplied. We always had to consider whether assessment questions on computational thinking were also measuring math skills or spatial reasoning. Now we also have to wonder whether a student got a question wrong not because they don’t understand the concept, but because a sibling needed the computer and they had to rush to finish, or because their entire family was working and schooling from home and distractions abounded.
Every piece of the work is online now. An important part of assessment work is conducting think-aloud interviews to check that the assessment aligns with research goals. This becomes difficult in a remote learning situation. You can no longer fully read your participant’s body language, you have to contend with internet connectivity, and you may not be in the ideal one-on-one environment for think-alouds.
Human-centered design has never been more critical. It’s one thing to design a pen-and-paper assessment given to fourth-grade students in a physical classroom, where a teacher can proctor, watch each student, and answer questions as needed. It’s a totally different thing to design an online survey taken by students asynchronously, or perhaps synchronously over a Zoom call with a teacher who can’t see what their students are doing. Students know when they’re done with a test in person, but how do you make sure nine-year-olds finish an online survey and click that last button, thereby saving the data you’re trying to gather?
On the bright side, these challenges are not insurmountable. We did design an assessment, conduct cognitive interviews, and collect pilot study data. Our work was recently accepted as a research paper, titled “Development and Preliminary Validation of the Assessment of Computing for Elementary Students (ACES),” to the SIGCSE Technical Symposium 2021. We’re excited to continue to grow and strengthen our assessment even as our students remain in remote learning environments.
For more CS education insights, view our blog.
Miranda Parker is a Postdoctoral Scholar at the University of California, Irvine, working with Mark Warschauer. Her research is in computer science education, where she is interested in topics of assessment, achievement, and access. Dr. Parker received her B.S. in Computer Science from Harvey Mudd College and her Ph.D. in Human-Centered Computing from the Georgia Institute of Technology, advised by Mark Guzdial. She has previously interned with Code.org and worked on the development of the K-12 CS Framework. Miranda was a National Science Foundation Graduate Research Fellow and a Georgia Tech President’s Fellow. You can reach Miranda at miranda.parker@uci.edu.