Category Archive: Research

A Conversation with Dr. Satabdi Basu

This post features Dr. Satabdi Basu, a Senior Education Researcher at SRI International, an independent, nonprofit research institute.


Dr. Basu joined SRI International in 2016 after receiving her Ph.D. in Computer Science (CS) from Vanderbilt University with a specialization in AI for Education. At SRI, she leads projects in K-12 CS and AI education that give her the opportunity to work with school districts and focus on assessment, curriculum design, and teacher professional development. She is currently working on several projects, including developing middle school CS teachers’ understanding of CS standards and formative assessment practices at Milwaukee Public Schools and bringing the SPICE (Science Projects Integrating Computing and Engineering) project to Metro Nashville Public Schools and Charlottesville City Public Schools. She is also working on an international project with school systems in Hong Kong focused on promoting computational thinking in the primary grades.

Her research also includes published articles summarized on our site.

Dr. Basu found the resources at CSEdResearch.org about 4 or 5 years ago when searching for survey instrumentation. She has used it ever since. She says, “I use the filters to find what I need, especially when writing literature reviews or designing instruments.” Internally at SRI, she endorses the resources on CSEdResearch.org and is always looking through the site to find new and updated information.

When asked why she uses CSEdResearch.org, she responded, “If it wasn’t there, I would be using Google Scholar to find papers, search for the instruments used, reach out to authors – that takes a lot of time. Using the site is easier and cuts down on the time of putting together instruments for projects or literature reviews for papers.”

In addition to the website, she also enjoys the tweets CSEdResearch.org provides that focus on brief informative guidance on instrument creation.

Dr. Satabdi Basu is a Senior CS Education Researcher at SRI International. She has published numerous articles on CS education research, particularly focused on computational thinking and K-12 students. She has presented at national and international conferences and has also been invited as a keynote speaker.

A New Model for Inclusive Computer Science Education

Our post today is a guest post by Carol L. Fletcher, Ph.D., Director, Expanding Computing Education Pathways (EPIC), The University of Texas at Austin. This post originally appeared in Google’s The Keyword and is reprinted with permission.

In this post, Dr. Fletcher explains the CAPE framework that she and Dr. Jayce Warner (also at The University of Texas at Austin) developed. Along with several of our collaborators, our project teams at CSEdResearch.org have used CAPE to help understand how a school’s Capacity for CS education impacts who takes CS and their experiences, all through an equity lens. It has been a critical model for us, and we believe it can be a useful model for other education researchers and evaluators. We are also recipients of a 2020 Google CS-ER Award, the same program that funds Fletcher and Warner’s work.


The lack of diversity in the computing education pipeline has been a remarkably persistent problem. One thing that has stalled progress in addressing disparities is a focus largely on individuals, such as teachers and students, rather than on how equity plays out across multiple levels of the computer science (CS) education ecosystem. This is why our work at the University of Texas has focused since 2014 on understanding the root causes of inequities in the CS education pipeline and how every level of the system influences equity.

With the support of a CS-ER (computer science education research) grant from Google, my colleague Jayce Warner and I developed a framework for thinking about equity across the CS education ecosystem. We began this work after digging into data in Texas in 2014 and finding that only about a quarter of Texas high schools offered any kind of CS course and fewer than 3% of Texas students were taking a CS course each year. The students enrolled in CS courses were also not reflective of the student population in our diverse state. We launched what became the WeTeach_CS professional development program, with the ultimate objective of seeing equitable enrollment in CS courses in Texas. To achieve this goal, we first had to improve access to CS courses and increase the number of CS-certified teachers in the state.

At the time, we thought equity had to wait until we had solved the capacity, access and participation challenges. But as we began thinking more deeply about this model and asking our colleagues in the Expanding Computing Education Pathways (ECEP) Alliance for feedback, we realized several things:

True equity is about more than just diversity in the classroom: just because something is available to everyone doesn’t mean that everyone can or will benefit. Education is also very complex, and the things we can easily measure (such as AP class participation) may not be the best indicators of change or success.

We developed a new framework that reflects how things connect at different levels of CS education. Most importantly, this model helps us better understand how equity plays out at each level. We’ve called it the CAPE framework and it consists of four interdependent components: capacity for CS education, access to CS education, participation in CS education and experience of CS education.

Each level affects the next. For example, if we want students to have equitable experiences in CS, we first need to make sure they’re participating equitably. Equitable participation relies on equitable access and equitable access relies on equitable capacity.

CAPE is represented as a triangle with four levels: Capacity for CS education is the foundation, with Access to CS education above it, Participation in CS education above that, and Experiences of CS education at the top. Example questions at each level include:

  • Capacity (teachers, funding, and policies): Do districts in all areas have the resources to offer CS and to train and certify teachers?
  • Access (course offerings): Are CS courses offered in low-income schools at similar rates to other schools?
  • Participation (student enrollment): Which subgroups are underrepresented in CS courses, and to what extent?
  • Experience (student outcomes): How do instruction and learning differ across student subgroups, and do all students feel a sense of belonging in CS?

The CAPE Framework helps the entire CS education community think about the systems they work in and the types of questions they should ask to ensure equity and inclusion in computing. One example is Jackie Corricelli, a PreK-12 CS Curriculum Specialist in West Hartford Public Schools (CT), who’s used the CAPE framework to evaluate her district’s K-12 CS program. In another example, Bryan Cox, Computer Science Specialist at the Georgia Department of Education, is building a public dashboard to track access and participation in K-12 CS education in Georgia. In Texas, we’ve used CAPE to frame our state and regional CSEd Profiles and recently released a new interactive visualization to explore capacity, access and participation across the state’s 1,200 school districts and more than 2,000 high schools.

Google supported these efforts with a CS-ER grant awarded to UT Austin, which was instrumental in the development and evolution of the CAPE framework. In 2021, Google awarded seven new CS-ER grants. This year’s grant awardees are: Amy J. Ko, University of Washington; Derek Aguiar, University of Connecticut; Jean Ryoo, University of California, Los Angeles; Jennifer Parham-Mocello, Oregon State University; Joshua Childs and Tia Madkins, The University of Texas at Austin; Melanie Williamson and Audrey Brock, Bluegrass Community & Technical College; and Mounia Ziat, Bentley University.

For more information about each recipient’s project, or to submit an application to be considered for future cohorts, you can visit Google Research’s Outreach page.

SIGCSE Technical Symposium 2021 – See you there!

This week’s post features Monica McGill (CEO, CSEdResearch.org) and the ongoing work at CSEdResearch.org that will be presented at the 2021 SIGCSE Technical Symposium.


This year’s SIGCSE Tech Symposium will be quite different from last year’s (it will happen!) and from prior years (virtual, rather than in person). But that doesn’t mean we will be any less active in presenting highlights from some of our work.

If you haven’t heard of the SIGCSE Technical Symposium (TS), let me introduce you. This symposium (or conference, if you prefer) was established over 50 years ago and serves computer science educators who teach preschool on up. It’s a great place to meet people and hear about the research and experiences of computer science educators.

This is our first presentation at SIGCSE as an official non-profit organization, and we are happy to share some of our work.

  • Workshop – Sign up for Workshop #102 Efficient, Effective, and Ethical Education Research Data Management and Sustainability to hear me, Stacey Sexton (SageFox Consulting Group), Alan Peterfreund (SageFox Consulting Group), and Maria Praetzellis (University of California, Office of the President) discuss data management of your education research data.
  • Birds of a Feather – I will be joined by Sloan Davis (Google) for a discussion around Improving K-12 CS Education Research via Tools and Resources for the Community. We’ll cover some known tools, and we’ll have an active discussion with participants to hear your pain points in research and what tools and resources might be useful to you to make your life easier.
  • Paper Presentation – Come hear about our work Piloting the Air Force JROTC Cyber Academy for High School Students with my co-authors Sarah Lee (University of Southern Mississippi), Litany Lineberry (Mississippi State University), John Sands (Moraine Valley Community College) and Leigh Ann DeLyser (CSforALL). This experience report discusses how the United States Air Force Junior Reserve Officer Training Corps (JROTC) piloted a Cyber Academy to teach cybersecurity skills and career awareness to high school JROTC cadets.
  • Demonstration – Join Emily Schroeder (Knox College) and me in a demonstration of The REDCap Survey Platform: Using Standardized Instruments on a Single Platform for Collecting Research Study Data.
  • Panel – Join Thomas Price (North Carolina State University), Baker Franke (Code.org), Shuchi Grover (Looking Glass Ventures & Stanford University) and me for an engaging discussion on Using Data to Inform Computing Education Research and Practice. This panel brings together four panelists at various stages of work collecting and analyzing large datasets in different fields of computing education. The panelists will each discuss their current work, the unique aspects of their data, and how that data fits into the larger landscape of computing education.
  • Google CS-ER supported work – Join me as I discuss a study we are currently involved in with the Computer Science Teachers Association (CSTA) to examine the process of creating teacher practice briefs with teachers and researchers with a focus on equity. This qualitative study will see the production of three teacher practice briefs targeting different problems of practice for middle school teachers and will lay the foundation for creating more briefs in the future.

I’ll also be at the ACM-W booth on Monday afternoon, so if you’re interested in learning more about ACM-W and what we do, feel free to stop by and chat.

Times and dates for each of these talks are provided below. I hope to see you there!

 

Presentation | Date and Time
Workshop #102 Efficient, Effective, and Ethical Education Research Data Management and Sustainability | Saturday, March 13, 9am ET / GMT-5
ACM-W Booth | Monday, March 15, 3pm-8pm ET / GMT-5
Birds of a Feather – Improving K-12 CS Education Research via Tools and Resources for the Community | Tuesday, March 16, 4pm ET / GMT-5
Paper Presentation – Piloting the Air Force JROTC Cyber Academy for High School Students | Wednesday, March 17, 1pm ET / GMT-5
Demonstration – The REDCap Survey Platform: Using Standardized Instruments on a Single Platform for Collecting Research Study Data | Thursday, March 18, 8pm ET / GMT-5
Panel – Using Data to Inform Computing Education Research and Practice | Friday, March 19, 11am ET / GMT-5
Google CS-ER Project: Engaging Teachers and Researchers to Create Teacher Practice Briefs | Friday, March 19, 5:30pm ET / GMT-5

Monica McGill is the CEO & Founder of CSEdResearch.org. She has worked in industry, for the government, and in academia over the last 30 years and now focuses on computer science education research, mostly at the K-12 level. She is also the Chair of ACM-W North America and a CSTA board member. Learn more about the history of CSEdResearch.org here.

Block-based Programming in Computer Science Classrooms

This week’s post features David Weintrop and his research on block-based programming. He shares three key findings from his research so far.


The first time I saw Scratch, I thought, “Wow! How clever! Is this the end of missing-semicolon errors!?” It was clear to me how the shape of the blocks, their easily understood behaviors, and the Sprites they controlled all worked together to make programming more accessible and inviting.

With my background in computer science, I could also see how foundational programming concepts were present. I started with Scratch but then discovered a whole host of other environments, like Snap!, MIT App Inventor, Pencil Code, and Alice, that used a similar block-based approach. This got me thinking: do kids learn computer science with block-based tools? Should they be used in the classroom? If so, what is the role of the teacher? And finally, will block-based programming help kids learn text-based programming languages like Java and Python? My research seeks to answer these questions. Here is a bit of what I have found.

Kids think block-based programming is easier than text-based programming.

As part of my research on block-based programming in K-12 classrooms, I asked students what they thought about block-based programming. For the most part, students perceived block-based programming to be easier than text-based programming. They cited features such as the “browsability” of available commands, the blocks being easier to read than text-based programming, and the shape and visual layout of the blocks. It is also worth noting that some students viewed block-based programming as inauthentic and less powerful than text-based programming.

Kids do learn programming concepts with block-based tools.

My research found that students do in fact learn programming constructs when using a block-based tool. In fact, students who learned to program using a block-based tool scored higher on programming assessments compared to students who learned with a comparable text-based tool. I found a similar result in a different study looking at the AP Computer Science Principles (CSP) exam, which asked students questions in block-based and text-based pseudocode.

Block-based programming may help kids learn text-based languages, but it is not automatic.

I also investigated the transition from block-based to text-based programming in high school computer science classrooms. I found that there was no difference in student performance in learning text-based programming based on prior experience with block-based or text-based programming. In other words, students performed the same regardless of how they had learned programming up to that point. One thing to note is that in my study, the teacher provided no explicit supports to help students make connections between their block-based experience and the text-based language. I mention this only to say that there is still research to be done into how best to support the blocks-to-text transition.

Overall, my research is finding that block-based programming should have a role in K-12 computer science education. While there is still work to be done, what we know so far suggests that block-based programming can serve as an effective introduction to the field of computer science.

 

David Weintrop is an Assistant Professor in the Department of Teaching & Learning, Policy & Leadership in the College of Education with a joint appointment in the College of Information Studies at the University of Maryland. His research focuses on the design, implementation, and evaluation of accessible, engaging, and equitable computational learning experiences. His work lies at the intersection of design, computational science education, and the learning sciences. David has a Ph.D. in the Learning Sciences from Northwestern University and a B.S. in Computer Science from the University of Michigan.

 

Longitudinal Trends in K-12 Computer Science Education Research

In this post, Bishakha Upadhyaya provides highlights of our SIGCSE 2020 paper on trends in K-12 CS Education research (co-authored with Monica McGill and Adrienne Decker). For more details, watch her talk or read the paper.  


 

Research in the field of Computer Science education is growing, and so are the data and results it produces. Without a comprehensive look at them collectively, it can be difficult to understand the current trends in the field. To identify trends in K-12 computing education research in the US, we conducted a longitudinal analysis of data collected from five different publication venues over the course of seven years.

For this analysis, we used the manually curated dataset on csedresearch.org, which contains over 500 articles focused on K-12 computing education from 2012 to 2018. As the majority of the articles in the dataset were from the US, we only looked at research papers whose participants were also from the US. We then ran SQL queries on the dataset to extract the subsets of data that were later analyzed in Tableau and presented visually using graphs and tables.
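The paper does not reproduce the queries themselves. As a rough sketch of the approach, a grouped query like the following (over a hypothetical articles table with one row per curated paper; the schema is illustrative, not the project’s actual one) pulls the kind of per-year subset that was then charted in Tableau:

```python
import sqlite3

# Hypothetical schema: one row per curated article, with its year and type.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER, year INTEGER, type TEXT)")
conn.executemany(
    "INSERT INTO articles VALUES (?, ?, ?)",
    [
        (1, 2012, "research"),
        (2, 2015, "experience"),
        (3, 2015, "research"),
        (4, 2018, "research"),
    ],
)

# Count articles per year and type -- the kind of subset later visualized.
per_year = conn.execute(
    "SELECT year, type, COUNT(*) FROM articles "
    "GROUP BY year, type ORDER BY year, type"
).fetchall()
print(per_year)
# [(2012, 'research', 1), (2015, 'experience', 1), (2015, 'research', 1), (2018, 'research', 1)]
```

Similar GROUP BY queries over article type, program data, or demographic fields would yield each of the trend tables below.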

Some of the major trends that we were interested in examining were:

  • Locations of students/interventions studied
  • Type of articles (e.g., research, experience, position paper)
  • Program data (e.g., concepts taught, when activity was offered, type of activity, teaching methods)
  • Student data (e.g., disabilities, gender, race/ethnicity, SES)

Results revealed an increasing shift in classroom activities from informal to formal curriculum. This shift suggests that more research is being conducted in classes offered during school hours, reaching more students through labs, lectures, and other teaching methods.

Trends also revealed that the majority of the research papers had student participants based in California. While this may seem reasonable given that California is the most populous state in the US, the trend doesn’t hold for Texas, the second most populous state: only 4 papers represented participants from Texas. This suggests that policies and other standards may influence the computing activities and research within a state.

Locations of the student participants studied.

Our analysis also revealed various disparities in the reporting of student demographics, particularly students’ socio-economic status (SES). For this analysis, we treated free/reduced lunch eligibility as low SES when SES was not explicitly reported in the paper. Only 32 of the articles analyzed reported information about students’ SES. Despite previous evidence showing that a student’s SES affects their academic achievement, this underreporting suggests that SES is still not being considered in many research studies.
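As a concrete illustration of that coding rule (the function and field names are hypothetical, not from the dataset), the SES label for each article might be derived like this:

```python
def ses_label(reported_ses, free_reduced_lunch):
    """Code an article's SES category, falling back to free/reduced
    lunch information when SES is not explicitly reported."""
    if reported_ses is not None:
        return reported_ses          # explicit report in the paper wins
    if free_reduced_lunch:
        return "low"                 # free/reduced lunch treated as low SES
    return "not reported"

print(ses_label("mixed", False))     # 'mixed'  -- paper reported SES directly
print(ses_label(None, True))         # 'low'    -- inferred from lunch status
print(ses_label(None, False))        # 'not reported'
```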

Socio-economic status as reported in studies. Low SES reflects students from low income households and/or qualifying for Free/Reduced lunch at school.

In a field making growing efforts to include students from different backgrounds, our research has shown considerable disparity in the research landscape of computing education. The lack of reporting makes it difficult for everyone from researchers and educators to policymakers to understand the results of these efforts, especially what needs improving. It is crucial to see how different interventions play out among different populations in order to implement and achieve the goals of CS for All.

 

Bishakha Upadhyaya is a Senior at Knox College, majoring in Computer Science and minoring in Neuroscience. She was the President of the ACM-W chapter at Knox for the 2019/2020 school year and served as the CS Student Ambassador. She was involved in this research as part of her summer research project. As part of her senior research project, she explored the enacted curriculum in Nepal, Pakistan, Bangladesh and Sri Lanka. She will be joining Bank of America as a Global Technology Analyst after graduation in Spring 2021.


Computer science education still has diversity gaps

Google’s Vice President of Education and Research, Maggie Johnson, recently published this blog post on the latest Gallup poll investigating diversity gaps in computer science education. We’ve reprinted it with permission. You may also access statistics from the Gallup poll here.

 


Jobs in the computing field are expected to grow by 13 percent between 2016 and 2026, a rate that’s faster than the average growth rate for all occupations. But the latest research shows that not all K-12 students have the same access to, or perceptions of, computer science (CS) education—especially girls and Black students. COVID-19 has only exacerbated existing gaps, underscoring the need for more creative solutions to ensure all students receive the education they deserve today to succeed tomorrow, according to additional research.

To better understand these gaps and where we can focus on finding solutions, we’re continuing our funding support of Gallup’s comprehensive, multi-year research on the K-12 computer science education landscape. Today, we’re releasing Gallup’s latest findings, “Current Perspectives and Continuing Challenges in Computer Science Education in US K-12 Schools.” This report represents Gallup’s analysis of over 7,000 interviews with U.S. educators, parents, administrators and students. It is accompanied by four supplemental reports highlighting equity gaps among different segments of the population, including Black, female, Hispanic and rural students.

The research uncovered four key themes:

1. There are still gaps in access to computer science education between Black, Hispanic and white students.

Consistent with the 2016 study, in 2020, Gallup found only 46 percent of Black students and 46 percent of Hispanic students indicate that they have classes dedicated to computer science at their high school, compared to 52 percent of white students.

An infographic showing the percentages of students by race who say their schools offer a computer science class

 

2. There’s still a significant gender gap, too.

Seventy-three percent of boys say they are confident they can learn computer science, compared with 60 percent of girls, a gender gap similar to the one observed in 2016.

A graphic that shows how many students say they are confident about learning computer science

 

3. Computer science is a top priority for superintendents, but that same prioritization hasn’t made it to the classroom yet.

In 2020’s report, nearly six in 10 superintendents (58 percent) agree that computer science is currently a top priority in their districts. However, there appears to be a disconnect between administrators and teachers and principals, because just 18 percent of public school teachers and 28 percent of principals say computer science education is treated as a top priority at their schools.

A graphic that shows how many superintendents say computer science is a priority.

 

4. Students are generally unconvinced that computer science is important for them to learn.

Female students are particularly skeptical about the importance of learning computer science, with just 31 percent saying CS is important for them to learn, compared with 49 percent of male students.

A graphic that shows more boys than girls think computer science is important to learn

Interventions from parents, educators, community leaders, policymakers, nonprofits and the technology industry are needed to encourage girls, Black students and Hispanic students to take computer science courses and ensure that when that interest exists, it’s matched with high quality learning opportunities. These students also need to be shown how CS knowledge can help them meet their goals in a variety of fields including the humanities, medicine and the arts.

With over $80 million in funding from Google.org, and a variety of programs as part of Code with Google, we are committed to closing equity gaps in CS education. For example, Code Next is a free computer science education program that meets Black and Latino high school students in their own communities, and Grasshopper is an app-based program for coding beginners to learn JavaScript skills directly from their mobile phones and browsers. As part of our Google.org funding, we also gave a $3 million grant to The Kapor Center to establish the Equitable Computer Science Curriculum initiative. This effort brings together leaders in education equity, inclusive teaching practices and CS education, along with teachers and students, to improve CS curricula and resources to increase racial and gender equity in CS classrooms.

No organization can increase access or improve perceptions of computer science education alone. We’re enthusiastic about all the work over the past five years from nonprofits who have developed and shared culturally-relevant learning resources, educators who support all of their students with the skills they need to succeed, technology companies that have dedicated resources, and governments that have created new policies to address CS learning gaps. But we at Google believe there’s more work to be done in this complex field, and we hope publishing these reports helps the entire education community continue to advocate for and support underserved students. All of this research is fully accessible and available for use in presentations.

(A virtual panel discussion was held on September 30, 12 p.m. Pacific/ 3 p.m. Eastern discussing the report’s key takeaways with Stephanie Marken, Gallup’s Executive Director of Education Research, and Dr. Alexis Martin, the Director of Research Partnerships at Kapor Center.)

 

Maggie Johnson is Director of Education and University Relations for Google. She manages all technical education, content development, and information management programs for Google engineers and operations staff, as well as Google’s K12 educational programs in STEM and computer science (CS).


How to Attract the Girls: Gender-specific Performance and Motivation to Learn Computer Science

In this blog post, Peter Hubwieser summarizes his work (with co-authors Elena Hubwieser and Dorothea Graswald) published in a 2016 conference paper. Here, he highlights research exploring the importance of reaching girls earlier through motivation.


 

The attempt to engage more women in Computer Science (CS) has been a substantial challenge over many years and in many countries. Given the urgency of this problem, many projects have been launched over the last decades to motivate women to engage in Computer Science. Yet, since even very young girls seem to have different attitudes toward CS than boys of the same age, attempts to influence adult women might come too late.

Potentially, the international Bebras Challenge, which aims to promote informatics and computational thinking, could provide a way to spark girls’ enthusiasm for Computer Science. During the most recent event, Bebras attracted nearly 3 million participants from 54 countries, divided into different age groups.

As shown, for example, by Deci and Ryan in 1985, motivation is likely to correlate with the personal experience of competency. Although most Bebras participants are encouraged to participate by their teachers, they solve the tasks individually or in pairs. Therefore, students’ individual motivation might play a dominant role in their performance. In addition, the Bebras Challenge will have a positive impact on children only if they are able to solve a satisfactory number of tasks.

To find out if this is the case and to detect differences between boys and girls, we analyzed the outcomes of the 2014 challenge regarding the gender of the 217,604 registered participants in Germany. Additionally, we compared the average performance of boys and girls in every task.

The boys were more successful overall, and the differences increased dramatically with age. Nevertheless, it turned out that in the two younger age groups (grades 5-6 and 7-8, respectively), girls outperformed boys in several tasks. The analysis of these tasks demonstrated that girls in particular can be motivated by the first three factors of the ARCS model of motivation (see Keller 1983): Attention, Relevance and Confidence.

After measuring performance, we grouped the 27 tasks according to the difference in performance between boys and girls:

  • Girls’ Tasks: 7 tasks that were solved significantly better by single and paired girls,
  • Boys’ Tasks: 13 tasks that were solved significantly better by single and paired boys,
  • Neutral Tasks: 7 tasks, either without any significant gender difference or showing such a difference in only one case (singles or pairs).
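The post does not spell out the significance test used for this grouping; one standard choice for flagging a per-task gender difference is a two-proportion z-test on solve rates. A minimal sketch with made-up numbers (the function name and sample sizes are illustrative, not from the study):

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for the difference between two solve rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Made-up example: girls solve a task at 62% (6200/10000), boys at 58%.
z, p = two_proportion_z(6200, 10000, 5800, 10000)
print(round(z, 2), p < 0.05)  # 5.77 True -- a significant "girls' task"
```

A task would then be labeled a Girls’ or Boys’ Task when the difference is significant in the same direction for both single and paired participants, and Neutral otherwise.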

Assuming that each task first needs to attract attention, its first impression is likely to be crucial, so graphical elements like pictures or diagrams are highly relevant. Looking at the graphical elements of the Girls’ Tasks, we found that they mostly represented animals, jewelry or food (see Fig. 1). In the Boys’ Tasks, the dominant elements were mostly abstract rectangular figures, graphs or technical apparatus. The Neutral Tasks had an appearance more or less similar to the Boys’ Tasks (see Fig. 2).

Girls' Tasks Pictures

Figure 1. Girls’ Tasks Pictures

Boys' Tasks Pictures

Figure 2. Boys’ Tasks Pictures

In this context, relevance is likely to be determined by the probability that the participants or their friends have experienced or will experience a similar situation. All of the Girls’ Tasks showed a certain relevance, e.g., how to identify your own bracelet. On the other hand, all of the Boys’ Tasks lacked this relevance, at least for girls, e.g., how a robot can cross a labyrinth. The Neutral Tasks also more or less lacked relevance, apart from a few that simply did not look attractive or were too difficult (see below).

Third, the apparent difficulty of a task (as assessed by the participants) will influence the motivation, or confidence, to solve it. Indeed, we found that girls tended to perform significantly better than boys on tasks of low or medium difficulty. The explanation might be that the girls’ self-efficacy was lower than the boys’. On the other hand, the boys seemed to show a higher willingness to tackle challenging problem-solving activities by trial and error.

Still, we should keep in mind that these results emerged in a contest, where the decision to work on a given task was voluntary, in contrast to compulsory classroom assignments. Nevertheless, CS educators might use these findings to construct tasks that particularly motivate younger girls:

  1. Look for a situation that is likely to occur in girls’ everyday life,
  2. Construct a task for this situation that is not too difficult, and
  3. Draw a nice picture that contains a person, an animal or other lovely objects to attract attention.


Peter Hubwieser, Technical University of Munich (Germany), taught math, physics, and computer science at high schools until 2001. In 1995 he completed his doctoral studies in physics, and in 2000 he acquired his postdoctoral teaching qualification (habilitation). In 2002 he was appointed to a professorship at TUM. He has worked as a visiting professor in Austria (Klagenfurt, Salzburg, and Innsbruck), France (ENS in Paris and Rennes), and Michigan (MSU). His research focuses on the empirical investigation of learning processes in computer science. His novel didactical approach triggered the introduction of computer science as a compulsory subject at Bavarian Gymnasiums in 2004.

Elena Hubwieser and Dorothee Graswald completed their teacher education at TUM, where they have conducted this research in collaboration with Peter Hubwieser. Currently they are teaching math and computer science at Bavarian Gymnasiums.


The original paper was published in a Springer Proceedings Volume:  Hubwieser, P., Hubwieser, E., & Graswald, D. (2016). How to Attract the Girls: Gender-Specific Performance and Motivation in the Bebras Challenge. In A. Brodnik & F. Tort (Eds.), Informatics in Schools: Improvement of Informatics Knowledge and Perception: 9th International Conference on Informatics in Schools: Situation, Evolution, and Perspectives, ISSEP 2016, Münster, Germany, October 13-15, 2016, Proceedings (pp. 40–52). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-46747-4_4

A preprint version can be downloaded from https://www.edu.tum.de/ddi/publikationen/


Designing Assessments in the Age of Remote Learning

 

As we start to ramp up our blog series via CSEdResearch.org, we reached out to Miranda Parker to learn about what she’s researching these days in K-12 CS Education. Her work is both timely and…well, read on to learn more!

 


Currently, I’m working as a postdoctoral scholar with a team at the University of California, Irvine on two projects: NSF-funded CONECTAR and DOE-funded IMPACT. These projects aim to bring computational thinking into upper-elementary classrooms, with a focus on students designated as English Learners. Our work is anchored in Santa Ana Unified School District, where 96% of students identify as Latino, 60% are English Language Learners, and 91% qualify for free and reduced lunch. A lot of fantastic research has come out of these projects, notably the work of my colleagues at UCI, which is worth a look.

My primary role is to help with the project’s assessments. There are many interesting challenges in assessing computational thinking learning for upper-elementary students, and they had only grown by the time I started in April, amid emergency remote teaching. I want to share some challenges we’ve faced and are still considering in our work, in part to start a conversation with the research community about best practices in a worst-case learning situation.

The confounding variables have expanded exponentially. We always had to consider whether assessment questions on computational thinking were also measuring math skills or spatial reasoning. Now we also have to wonder whether a student got a question wrong not because they don’t understand the concept, but because a sibling needed the computer and the student had to rush to finish, or because there were many distractions as their entire family worked and schooled from home.

Every piece of the work is online now. An important part of assessment work is conducting think-aloud interviews to check that the assessment aligns with research goals. This becomes difficult with a virtual learning situation. You can no longer entirely read the body language of your participant, you have to contend with internet connectivity, and you may be in a situation that is not the ideal one-on-one environment for think-alouds.

Human-centered design has never been more critical. It’s one thing to design a pen-and-paper assessment for fourth grade students in a physical classroom, where a teacher can proctor, watch each student, and answer questions when needed. It’s a totally different thing to design an online survey taken by students asynchronously, or perhaps synchronously over a Zoom call with a teacher who can’t see what their students are doing. Students know when they’re done with a test in person, but how do you make sure nine-year-olds finish an online survey and click that last button, thereby saving the data you’re trying to gather?

On the bright side, these challenges are not insurmountable. We did design an assessment, conduct cognitive interviews, and collect pilot study data. Our work was recently accepted as a research paper, titled “Development and Preliminary Validation of the Assessment of Computing for Elementary Students (ACES),” to the SIGCSE Technical Symposium 2021. We’re excited to continue to grow and strengthen our assessment even as our students remain in remote learning environments.

 


 

Miranda Parker is a Postdoctoral Scholar at the University of California, Irvine, working with Mark Warschauer. Her research is in computer science education, where she is interested in topics of assessment, achievement, and access. Dr. Parker received her B.S. in Computer Science from Harvey Mudd College and her Ph.D. in Human-Centered Computing from the Georgia Institute of Technology, advised by Mark Guzdial. She has previously interned with Code.org and worked on the development of the K-12 CS Framework. You can reach Miranda at miranda.parker@uci.edu.

 

Pressing Research Questions from Practitioners

During EdCon 2019 held in Las Vegas, Chris Stephenson, Head of Computer Science Education Strategy for Google, met with a group of practitioners and policy makers to learn what research questions they would like to see answered.

First, they asked the group to brainstorm questions for which they needed answers and to share all of their questions. Then they asked the group to pick the most important question in each category. The most important questions appear in bold below.

Most Pressing Research Questions Sourced from Practitioners and Policy Makers
EdCon 2019, Las Vegas

These research questions were collected from practitioners, researchers, and policy makers who attended the CS-ER session led by the Google CS Education Strategy Group at EdCon 2019:

Teachers

● **What are the best practices for CS PD in terms of positively impacting student learning?**
● **Are there unique pedagogical approaches that best support CS learning?**
● **How do we sustain the CS teacher pipeline?**
● **How do we measure effective CS teaching?**
● What core skills must teachers have to teach CS?
● How do we get more K-8 teachers involved in CS education?
● What is the resource model for sustained PD?
● How do we ensure the PD focuses on pedagogy?
● How do we continue/sustain CS training for teachers?
● What PD options are available for teachers and what are the most effective?

Learning

● **How does early CS experience impact future interest in CS?**
● **What models of CS content delivery provide the largest impact on student learning (after school, in-school discrete courses, in-school integrated into other disciplines)?**
● **What data currently exists demonstrating that the CS curriculum/instruction currently being delivered truly moves the needle?**
● Is there a correlation between student engagement in CS and success in other academic areas?
● How does learning CS lead to learning in other disciplines (especially math and science)?
● What are the best IDEs for supporting student learning?
● Are there different pedagogies more suitable/effective for direct instruction versus integrated instruction?
● How does exposure to CS in school impact students who are non-CS majors in university?
● How do industry mentors and the pipeline issues affect CS student identity?

Integration

● What does CS immersion look like in K-5?
● What is the best way to integrate CS in K-12?
● What are the implementation options for CS in elementary school?

Equity

● **What are the best methods for scaffolding CS concepts for students with academic deficiencies or disabilities?**

Policy

● Which policy items provide the best environment for additional progress (which is the firestarter)?
● How do we scale CS in a way that avoids the unexpected negative consequences of some policies?

Advocacy

● **Why do some districts jump on board with CS while others hold back?**
● **How do we reach a common definition of computer science that resonates for school districts and states?**
● Who/what are the gatekeepers to CS and how do we remove them?
● How do we help school leadership successfully advocate for CS?
● How do we inform parents (especially from underrepresented populations) about the importance of CS?
● How do we develop parent advocates?
● How can higher ed support extended learning/expansion of content for CS?

To cite this reference, please use:

Stephenson, Chris. “Most Pressing Research Questions Sourced from Practitioners and Policy Makers from EdCon 2019.” CS Education Research Resource Center, CSEdResearch.org, 15 July 2020.