Category Archive: Research

How to Attract the Girls: Gender-Specific Performance and Motivation to Learn Computer Science

In this blog post, Peter Hubwieser summarizes his work (with co-authors Elena Hubwieser and Dorothea Graswald) that was published in a 2016 conference paper. Here, he highlights research exploring the importance of reaching girls earlier through motivation.


The attempt to engage more women in Computer Science (CS) has proven a substantial challenge over many years and in many countries. Given the urgency of this problem, many projects have been launched over recent decades to motivate women to engage in Computer Science. Yet, since even very young girls seem to hold different attitudes toward CS than boys of the same age, attempts aimed at adult women might simply come too late.

The international Bebras Challenge, which aims to promote Informatics and Computational Thinking, could potentially arouse girls' enthusiasm for Computer Science. The most recent event attracted nearly 3 million participants from 54 countries, divided into different age groups.

As shown, for example, by Deci and Ryan in 1985, motivation is likely to correlate with the personal experience of competency. Although most Bebras participants are encouraged to take part by their teachers, they solve the tasks individually or in pairs. Therefore, the individual motivation of the students might play a dominant role in their performance. In addition, the Bebras Challenge will have a positive impact on children only if they are able to solve a satisfactory number of tasks.

To find out whether this is the case, and to detect differences between boys and girls, we analyzed the outcomes of the 2014 challenge in Germany by gender across all 217,604 registered participants. Additionally, we compared the average performance of boys and girls on every task.

The boys were more successful overall, and the differences increased dramatically with age. Nevertheless, it turned out that in the two younger age groups (grades 5-6 and 7-8), girls outperformed boys in several tasks. The analysis of these tasks demonstrated that girls, in particular, can be motivated by the first three factors of the ARCS model of motivation (see Keller 1983): Attention, Relevance, and Confidence.

After measuring performance, we grouped the 27 tasks according to the difference in performance between boys and girls (a sketch of how such a grouping could be computed follows the list):

  • Girls’ Tasks: 7 tasks that were solved significantly better by single and paired girls,
  • Boys’ Tasks: 13 tasks that were solved significantly better by single and paired boys,
  • Neutral Tasks: 7 tasks, either without any significant gender difference or showing such a difference in only one case (singles or pairs).
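
To make the grouping criterion concrete, here is a minimal sketch in Python of how such a classification could be reproduced. This is not the authors' analysis code: the input format, the field names, the choice of a two-proportion z-test, and the 0.05 significance threshold are all illustrative assumptions.

```python
# Minimal sketch (not the authors' original analysis) of grouping Bebras
# tasks by gender differences in solve rates. Input format, field names,
# test choice, and the alpha level are all illustrative assumptions.
from statsmodels.stats.proportion import proportions_ztest

ALPHA = 0.05  # assumed significance threshold


def gender_difference(task: dict, mode: str):
    """Two-proportion z-test on solve rates for one task and one mode
    ('single' or 'pair'). Returns 'girls' or 'boys' if that gender solved
    the task significantly more often, else None."""
    g_solved, g_total = task[f"girls_{mode}_solved"], task[f"girls_{mode}_total"]
    b_solved, b_total = task[f"boys_{mode}_solved"], task[f"boys_{mode}_total"]
    _, p_value = proportions_ztest([g_solved, b_solved], [g_total, b_total])
    if p_value >= ALPHA:
        return None
    return "girls" if g_solved / g_total > b_solved / b_total else "boys"


def classify(task: dict) -> str:
    """Apply the grouping described above: a Girls' (or Boys') Task must
    show the significant difference for both singles and pairs; anything
    else counts as Neutral."""
    outcomes = {gender_difference(task, mode) for mode in ("single", "pair")}
    if outcomes == {"girls"}:
        return "Girls' Task"
    if outcomes == {"boys"}:
        return "Boys' Task"
    return "Neutral Task"  # no difference, or significant in only one mode


# Hypothetical counts for a single task, showing the expected input shape:
example_task = {
    "girls_single_solved": 640, "girls_single_total": 1000,
    "boys_single_solved": 540, "boys_single_total": 1000,
    "girls_pair_solved": 700, "girls_pair_total": 1000,
    "boys_pair_solved": 600, "boys_pair_total": 1000,
}
print(classify(example_task))  # -> Girls' Task
```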

Assuming that each task first needs to attract attention, its first impression is likely to be crucial, so graphical elements such as pictures or diagrams are highly relevant. Looking at the graphical elements of the Girls' Tasks, we found that they mostly depicted animals, jewelry, or food (see Fig. 1). In the Boys' Tasks, the dominant elements were mostly abstract rectangular figures, graphs, or technical apparatus. The Neutral Tasks had an appearance more or less similar to that of the Boys' Tasks (see Fig. 2).

Figure 1. Girls’ Tasks Pictures

Figure 2. Boys’ Tasks Pictures

In this context, relevance is likely to be determined by the probability that the participants or their friends have experienced, or will experience, a similar situation. All of the Girls' Tasks showed a certain relevance, e.g., how to identify your own bracelet. On the other hand, all of the Boys' Tasks lacked this relevance, at least for girls, e.g., how a robot can cross a labyrinth. The Neutral Tasks also more or less lacked relevance, apart from a few that either did not look attractive or were too difficult (see below).

Third, the apparent difficulty of a task, as assessed by the participants, will influence the motivation, or confidence, to solve it. Indeed, we found that girls tended to perform significantly better than boys on tasks of low or medium difficulty. One explanation might be that the girls' self-efficacy was lower than the boys'. The boys, on the other hand, seemed more willing to tackle challenging problem-solving activities by trial and error.

Yet, we should keep in mind that these results emerged in a contest, where the decision to work on a particular task was voluntary, in contrast to compulsory assignments in the classroom. Nevertheless, CS educators might use these findings to construct tasks that particularly motivate younger girls:

  1. Look for a situation that is likely to occur in girls’ everyday life,
  2. Construct a task for this situation that is not too difficult, and
  3. Draw a nice picture that contains a person, an animal or other lovely objects to attract attention.


Peter Hubwieser, Technical University of Munich (Germany), taught math, physics, and computer science at high schools until 2001. In 1995 he completed his doctoral studies in physics, and in 2000 he acquired his postdoctoral teaching qualification (habilitation). In 2002 he was appointed to a professorship at TUM. He has worked as a visiting professor in Austria (Klagenfurt, Salzburg, and Innsbruck), France (ENS in Paris and Rennes), and Michigan (MSU). His research focuses on the empirical investigation of learning processes in computer science. His novel didactical approach triggered the introduction of computer science as a compulsory subject at Bavarian Gymnasiums in 2004.

Elena Hubwieser and Dorothea Graswald completed their teacher education at TUM, where they conducted this research in collaboration with Peter Hubwieser. They currently teach math and computer science at Bavarian Gymnasiums.


The original paper was published in a Springer proceedings volume: Hubwieser, P., Hubwieser, E., & Graswald, D. (2016). How to Attract the Girls: Gender-Specific Performance and Motivation in the Bebras Challenge. In A. Brodnik & F. Tort (Eds.), Informatics in Schools: Improvement of Informatics Knowledge and Perception: 9th International Conference on Informatics in Schools: Situation, Evolution, and Perspectives, ISSEP 2016, Münster, Germany, October 13-15, 2016, Proceedings (pp. 40-52). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-46747-4_4


Designing Assessments in the Age of Remote Learning


As we start to ramp up our blog series via CSEdResearch.org, we reached out to Miranda Parker to learn about what she’s researching these days in K-12 CS Education. Her work is both timely and…well, read on to learn more!

Currently, I'm working as a postdoctoral scholar with a team at the University of California, Irvine on two projects: the NSF-funded CONECTAR and the DOE-funded IMPACT. These projects aim to bring computational thinking into upper-elementary classrooms, with a focus on students designated as English Learners. Our work is anchored in the Santa Ana Unified School District, in which 96% of students identify as Latino, 60% are English Language Learners, and 91% receive free or reduced-price lunch. A lot of fantastic research has come out of these projects, notably the work of my colleagues at UCI, which is well worth a look.

My primary role is to help with the project's assessments. There are many interesting challenges in assessing computational thinking learning for upper-elementary students, and they had only grown by the time I started in April, amid emergency remote teaching. I want to share some of the challenges we've faced and are still weighing in our work, in part to start a conversation with the research community about best practices in a worst-case learning situation.

The confounding variables have expanded exponentially. We always had to consider whether assessment questions on computational thinking were also measuring math skills or spatial reasoning. Now we also have to wonder whether a student got a question wrong not because they don't understand the concept, but because a sibling needed to use the computer and the student had to rush to finish, or because there were many distractions as their entire family worked and schooled from home.

Every piece of the work is online now. An important part of assessment work is conducting think-aloud interviews to check that the assessment aligns with the research goals. This becomes difficult in a remote learning situation: you can no longer fully read your participant's body language, you have to contend with internet connectivity, and you may not be in the ideal one-on-one environment for think-alouds.

Human-centered design has never been more critical. It's one thing to design a pen-and-paper assessment to be given to fourth-grade students in a physical classroom, where a teacher can proctor, watch each student, and answer questions when needed. It's a totally different thing to design an online survey to be taken by students asynchronously, or perhaps synchronously over a Zoom call with a teacher who can't see what their students are doing. Students know when they're done with a test in person, but how do you make sure that nine-year-olds finish an online survey and click that last button, thereby saving the data you're trying to gather?

On the bright side, these challenges are not insurmountable. We did design an assessment, conduct cognitive interviews, and collect pilot study data. Our work was recently accepted as a research paper, titled “Development and Preliminary Validation of the Assessment of Computing for Elementary Students (ACES),” to the SIGCSE Technical Symposium 2021. We’re excited to continue to grow and strengthen our assessment even as our students remain in remote learning environments.


Miranda Parker is a Postdoctoral Scholar at the University of California, Irvine, working with Mark Warschauer. Her research is in computer science education, where she is interested in topics of assessment, achievement, and access. Dr. Parker received her B.S. in Computer Science from Harvey Mudd College and her Ph.D. in Human-Centered Computing from the Georgia Institute of Technology, advised by Mark Guzdial. She has previously interned with Code.org and worked on the development of the K-12 CS Framework. Miranda was a National Science Foundation Graduate Research Fellow and a Georgia Tech President's Fellow. You can reach Miranda at miranda.parker@uci.edu.


Pressing Research Questions from Practitioners

During EdCon 2019, held in Las Vegas, Chris Stephenson, Head of Computer Science Education Strategy for Google, met with a group of practitioners and policy makers to learn what research questions they would like to see answered.

First, the session leaders asked the group to brainstorm questions for which they needed answers and to share all of them. Then they asked the group to pick the most important question in each category; these most important questions appear first in their respective lists below.

Most Pressing Research Questions Sourced from Practitioners and Policy Makers

EdCon 2019, Las Vegas

These research questions were collected from practitioners, researchers, and policy makers who attended the CS-ER session led by the Google CS Education Strategy Group at EdCon 2019:

Teachers

● What are the best practices for CS PD in terms of positively impacting student learning?
● Are there unique pedagogical approaches that best support CS learning?
● How do we sustain the CS teacher pipeline?
● How do we measure effective CS teaching?
● What core skills must teachers have to teach CS?
● How do we get more K-8 teachers involved in CS education?
● What is the resources model for sustained PD?
● How do we ensure the PD focuses on pedagogy?
● How do we continue/sustain CS training for teachers?
● What PD options are available for teachers and what are the most effective?

Learning

● How does early CS experience impact future interest in CS?
● What models of CS content delivery have the largest impact on student learning (after school, discrete in-school courses, in-school integration into other disciplines)?
● What data currently exists to demonstrate that the CS curriculum/instruction being delivered truly moves the needle?
● Is there a correlation between student engagement in CS and success in other academic areas?
● How does learning CS lead to learning in other disciplines (especially math and science)?
● What are the best IDEs for supporting student learning?
● Are there different pedagogies more suitable/effective for direct instruction versus integrated instruction?
● How does exposure to CS in school impact students who are non-CS majors in university?
● How do industry mentors and the pipeline issues affect CS student identity?

Integration

● What does CS immersion look like in K-5?
● What is the best way to integrate CS in K-12?
● What are the implementation options for CS in elementary school?

Addressing Disparities

● What are the best methods for scaffolding CS concepts for students with academic deficiencies or disabilities?

Policy

● Which policy items provide the best environment for additional progress (which is the firestarter)?
● How do we scale CS in a way that avoids the unexpected negative consequences of some policies?

Advocacy

● Why do some districts jump on board with CS while others hold back?
● How do we reach a common definition of computer science that resonates with school districts and states?
● Who/what are the gatekeepers to CS and how do we remove them?
● How do we help school leadership successfully advocate for CS?
● How do we inform parents (especially from underrepresented populations) about the importance of CS?
● How do we develop parent advocates?
● How can higher ed support extended learning/expansion of content for CS?

To cite this reference, please use:

Stephenson, Chris. “Most Pressing Research Questions Sourced from Practitioners and Policy Makers from EdCon 2019.” CS Education Research Resource Center, CSEdResearch.org, 15 July 2020.
