Category Archive: Research

Engineering PLUS Program + Webinars

The last few years have seen significant changes in the higher education landscape, including new legislation in many states that affects diversity, equity, and inclusion efforts as well as the Supreme Court’s recent decision related to affirmative action.

These changes have left many in higher education wondering how to craft policies and programs that will encourage participation by all students – including those who have been historically marginalized – while following the new laws.

The Engineering PLUS Alliance – an NSF-funded project with the goal of improving representation in engineering – invites you to participate in a program designed to help with these questions. The program includes interactive webinars where participants can learn from expert guest speakers and from each other as they develop a plan tailored to their role and context. Benefits of participating include opportunities to learn from others facing similar challenges, access to a curated resource collection, feedback and guidance on an action plan, and support and community.

The webinars are scheduled for September 14th, October 12th, and November 9th, from 1pm to 2:30pm central time. We request that each participant complete ‘pre-work’ (which will take less than one hour) for each webinar.

If you are interested in participating, please register here. If you have any questions, please contact Julie M. Smith at [email protected].

“But They Just Aren’t Interested in Computer Science” (Part Three)

Written by: Julie Smith

Note: this post is part of a series about the most-cited research studies related to K12 computer science education.

It’s discouraging to learn that children as young as six believe that boys are better than girls at programming and robotics, and that girls express less interest in computing and less confidence in their ability to succeed at it.

But the good news from the study Programming experience promotes higher STEM motivation among first-grade girls is that, in the researchers’ experiment, improving girls’ interest and self-efficacy was actually not that difficult: all it took was twenty minutes in the lab with a cute robot that they could program with a smartphone. After that intervention, girls’ interest and self-efficacy were statistically indistinguishable from boys’; the same was not true for girls who engaged in another activity unrelated to technology.

There’s a reason this article is one of the most commonly cited in the computer science education literature: the representation rates of women in computing – from high school courses through college majors and into the workforce – remain stubbornly low. This article suggests that, while stereotypes are adopted early, a relatively simple intervention for young children could perhaps be enough to overcome the effects of those stereotypes on girls’ interest in computing.

Further Reading


“But They Just Aren’t Interested in Computer Science” (Part Two)

Written by: Julie Smith

Note: this post is part of a series about the most-cited research studies related to K12 computer science education.

The study’s title says it all: “Gender stereotypes about interests start early and cause gender disparities in computer science and engineering.” It’s worth noting that the careful design of the studies bolsters the case: the work includes both surveys and experiments, allowing the researchers to comment on causality. The combination of surveys and interventions makes it possible to conclude that the stereotype drives the lower interest, rather than a student’s inherently lower interest leading them to generate a stereotype by projecting their own attitude onto others. Additionally, the diverse subject pool makes it more likely that the findings are widely applicable.

The researchers found that stereotypes suggesting that boys are more interested in computer science exist from at least the third grade. Further, these stereotypes make it less likely for girls to study computer science, an effect mediated by the girls’ decreased sense of belonging. 

Significantly, stereotypes about interest in computer science were a stronger predictor of a student’s intent to study computer science than stereotypes about ability. The authors do point out that there is a stronger cultural norm against expressing ability stereotypes than interest stereotypes, which may make the interest stereotypes harder to root out. At the same time, the finding that students’ interest in studying computer science could be shifted by their experiences in an experiment implies that interventions designed to counteract stereotypes may very well be effective.

The fact that this study is one of the most-cited K12 computer science education research studies suggests that its message of the importance of recognizing the role of interest stereotypes has resonated with many other researchers. The next step is to determine which types of interventions are most effective at breaking down interest stereotypes.

Further Reading

Emerging Promising Practices for CS Integration

Our recently accepted paper, Emerging Practices for Integrating Computer Science into Existing K-5 Subjects in the United States, will be presented at WiPSCE 2023 in Cambridge, England.

This particular qualitative work, conducted by Monica McGill, Laycee Thigpen, and Alaina Mabie of CSEdResearch.org, included interviews with researchers and curriculum designers (n=9) who have engaged deeply in K-5 CS integration for several years. Their perspectives were analyzed and synthesized to inform our results.

Several promising practices emerged for designing curriculum, creating assessments, and preparing teachers to teach CS in an integrated manner. These include giving teachers ways to vary instruction, integrating CS into core (and often tested) language arts and mathematics, and simplifying assessments. Many of the findings arise from the need to help teachers become comfortable teaching a new subject woven into the subjects they already teach.

Generally, the promising practices that emerged included adopting Universal Design for Learning practices, giving teachers ways to vary instruction to fit their comfort levels as they learn to teach integrated CS, and co-designing lessons with teachers. The experts also suggest capitalizing on integration into language arts, since it is a highly tested and critical subject.

Figure 1. General findings.

As for more specific findings, the experts suggested focusing on fractions in math, leveraging cause and effect in science to teach conditional logic, and reflecting on how language is used similarly in English and in computing.

Subject integration findings across ELA, math, science, and social studies:

  • ELA: use games and other tools, and reflect on how language is used in English and in computing.
  • Math: go heavy on computational thinking, use virtual manipulatives, focus on fractions, and enhance learning with other tools.
  • Science: leverage cause and effect to teach conditional logic.
  • Social studies: incorporate cultural holidays into CS.

Figure 2. Subject-specific findings.
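To illustrate the science pairing above, conditional logic maps naturally onto cause-and-effect reasoning. Here is a minimal Python sketch of the idea; the scenario and values are hypothetical classroom examples, not drawn from the paper:

```python
# Cause-and-effect reasoning from a K-5 science lesson, expressed as
# conditional logic: the "cause" is the input, the "effect" is the branch taken.
def plant_growth(sunlight_hours: int, watered: bool) -> str:
    """Predict a plant's outcome from two observable causes."""
    if sunlight_hours >= 6 and watered:
        return "thrives"
    elif watered:
        return "grows slowly"   # some water, but not enough light
    else:
        return "wilts"          # no water: the strongest effect

print(plant_growth(8, True))    # → thrives
```

Students already reason this way in science ("if the plant gets no water, then it wilts"); writing the same reasoning as an if/elif/else chain makes the transfer to programming explicit.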

You can read the full paper (including our methodology and profiles of our experts) here.

Monica M. McGill, Laycee Thigpen, and Alaina Mabie. 2023. Emerging Practices for Integrating Computer Science into Existing K-5 Subjects in the United States. In The 18th WiPSCE Conference on Primary and Secondary Computing Education Research (WiPSCE ’23), September 27–29, 2023, Cambridge, United Kingdom. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3605468.3609759 (effective after September 27th, 2023).

This material is based upon work supported by Code.org. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of Code.org.

We acknowledge and thank Brenda Huerta for her assistance with the literature review.

Conducting High-quality Education Research in Computing that is Designed to Support CS for All

Join us on Wednesday, March 20th, 2024, from 1-5pm PST, in Portland, Oregon, United States (the day before the ACM SIGCSE Technical Symposium) for a workshop on conducting high-quality, equity-enabling education research in computing!

Register using this form.

This event will be for computer science education researchers who want to learn more about:

  • Characteristics of high-quality education research,
  • How to conduct research that meets these characteristics, and
  • How to center the participants and their lived experiences throughout the research process.

The workshop will be held Wednesday, March 20th, 1-5pm PST at ACM SIGCSE Technical Symposium as an affiliated event*.

Participants will learn about the guidelines and associated resources, discuss their application to current or proposed research projects, and gain a new appreciation for how to embed equity perspectives in each phase of their research. Specifically, participants will develop personal positionality statements and improve their ability to write research questions, use theoretical frameworks, and develop instruments and protocols.

This interactive workshop is geared toward those studying computing education who want to learn more. We welcome submissions from those with any level of education research experience. For this particular event, graduate students will be prioritized for the limited spaces available.

This presentation is supported by a National Science Foundation grant. U.S. citizens, nationals, and permanent residents will receive a $150 stipend for participating.

Facilitators for this event will include:

  • Monica McGill, CSEdResearch.org
  • Sarah Heckman, North Carolina State University
  • Leigh Ann DeLyser, CSforALL
  • Jennifer Rosato, National Center for Computer Science Education
  • Isabella Gransbury, North Carolina State University

This workshop is based in part on guidelines for conducting education research that were created during a 2023 ITiCSE workshop (McGill, M. M., Heckman, S., Chytas, C., Diaz, L., Liut, M., Kazakova, V., Sanusi, I. T., Shah, S. M., & Szabo, C. Conducting High-Quality Equity-Enabling Computing Education Research. Working Group Report, accepted with revisions).

*This event is an in-person event only. We are aware that attending in-person is not feasible for all researchers. Therefore, future hybrid events and additional resources are being planned to meet the needs of all researchers and will be shared as they become available.

For questions about this event, please email [email protected].

Register using this form.

“But They Just Aren’t Interested in Computer Science” (Part One)

Written By: Julie Smith

Note: this post is part of a series about the most-cited research studies related to K12 computer science education.

In discussions about the lack of women in tech, it is sometimes claimed that the disparities exist because girls just aren’t as interested in studying computer science in school and women simply choose not to work in the tech industry.

This sentiment is horribly misleading. It is true that research shows that girls and women are, on average, not as interested in studying or working in computing. But what is important to understand is that interest isn’t like eye color: it’s not an inherent, biological attribute that simply reflects human diversity. Rather, what we choose to be interested in is strongly influenced by what our culture conveys is appropriate for ‘people like us.’ This may seem to be an unusual way of thinking about the issue; we often assume that our interests are simply pure reflections of our personality and volition. But, at least for the case of interest in computer science, the evidence suggests otherwise.

In “Computing Whether She Belongs: Stereotypes Undermine Girls’ Interest and Sense of Belonging in Computer Science,” Allison Master, Sapna Cheryan, and Andrew N. Meltzoff describe two experiments showing how (dis)interest in computer science can be influenced by very simple interventions. The researchers created photographs of stereotypical (think: Star Trek posters) and non-stereotypical computer science classrooms, showed them to high school students, and asked which classroom they would prefer. Girls were significantly more likely to express interest in the course held in the non-stereotypical classroom (boys’ interest was not affected). In the second experiment, the researchers gave participants written descriptions of computer science classrooms, some stereotypical and some not. Again, girls were much more interested in a course in the non-stereotypical classroom.

These two experiments are important because they show that interest in computer science isn’t hard-wired. Rather, it appears to be strongly influenced by whether computing is presented as conforming to stereotypes that aren’t as welcoming to girls. For those of us concerned with the negative effects of the lack of women in tech – not just on the women themselves but on a society that is ever-increasingly shaped by technology – these results are good news because they show that relatively simple interventions can increase girls’ interest in the study of computing.

Further Reading

Shout out to our interns

Last week was National Intern Day, which gave me another reason to reflect on the many students I’ve worked with over the last seven years who have contributed to the K-12 CS Education Research Resource Center on our site.

The Resource Center has been under development since 2017. Originally funded by a National Science Foundation grant and now supported by Amazon Future Engineer, the project owes much to the many interns who have worked on it over the course of this time – whether they supported our mission for one semester or three years. I’m also thrilled to say that I was fortunate enough to publish several articles together with one-third of our interns in an effort to engage them in computing education research. (Why, yes, that was my attempt to bring them to the dark side!)

It’s really incredible and I am personally grateful for their contributions and camaraderie. I’m also thankful that so many have stayed in touch with me after graduating and starting their post-college careers.

So, HUGE SHOUT OUT to all of you! And many, many thanks from me and a grateful research community who still uses your contributions today.

Monica McGill, President & CEO, CSEdResearch.org


Media

  • Emily Nelson, Undergraduate Student, Bradley University (current)

Data Curation (2017-2023)

  • Alia Saadi El Hassani, Undergraduate Student, Knox College
  • Alaina Mabie, Undergraduate Student, Bradley University
  • Arsalan Bin Najeeb, Undergraduate Student, Knox College
  • Ava Lu, Undergraduate Student, Knox College
  • Bishakha Awale, Undergraduate Student, Knox College
  • Bishakha Upadhyaya, Undergraduate Student, Knox College
  • Brenda Huerta, Undergraduate Student, Bradley University
  • Emily Schroeder, Undergraduate Student, Knox College
  • Jessica Potter, Undergraduate Student, Bradley University
  • Joey Reyes, Undergraduate Student, Knox College
  • Ma’Kiah Holliday, Undergraduate Student, Rochester Institute of Technology
  • Olivia Lu, Undergraduate Student, Bradley University
  • Sarah Wallenfelsz, Undergraduate Student, Knox College
  • Sean Mackay, Graduate Student, University at Buffalo
  • Shebaz Chowdhury⁺, Undergraduate Student, Knox College
  • Tavian James, Undergraduate Student, Knox College
  • Zachary Abbott, Undergraduate Student, Bradley University

Software Development (2017-2023)

  • Bishakha Upadhyaya, Undergraduate Student, Knox College
  • Hung Vu, Undergraduate Student, Knox College
  • Momin Zahid, Undergraduate Student, Knox College
  • Nate Blair, Graduate Student, Rochester Institute of Technology
  • Nhan Thai, Undergraduate Student, Knox College
  • Thu Nguyen, Undergraduate Student, Knox College
  • Trang Tran, Undergraduate Student, Knox College


⁺ Deceased

Learning Isn’t Observable. So How Do We Measure It?

Written By: Julie Smith

Note: This post is the second in an occasional series about learning analytics, based on the Handbook of Learning Analytics.

We measure learning constantly – think of grades on spelling quizzes, SAT scores, and the bar exam. But it’s worth remembering that learning itself cannot be observed in the same way that the growth of a plant can: when it comes to learning, we have to make decisions about how to measure something we can’t see. Gray & Bergner (2022) outline the choices that must be made when educational researchers operationalize a learning construct – such as a sense of self-efficacy, the ability to work productively in a group, or subject matter knowledge.

First, the decisions: Gray & Bergner present a very helpful distinction between measurements designed to understand a construct and those designed to improve a construct. They aren’t the same: we might understand something (for example, that students who spend more time on discussion boards earn higher grades) without being able to improve it (if we direct students to spend more time on discussion boards, they may spend less time reviewing for an exam and get a lower grade).

Next, Gray & Bergner review the strengths and weaknesses of three kinds of data that can be used for educational measurement. Validated and reliable surveys exist, and they are easy to administer at scale. But they may suffer from various biases related to self-reporting. Trace data (such as keystroke data from students using an educational technology learning platform) can be gathered unobtrusively, but it can be difficult to draw conclusions from it. Text data can be a rich source of insight into a student’s thought processes, but training either machine learning models or humans to assess and code such data is tricky. 

Other pitfalls exist as well. One challenge with many forms of data is that they don’t capture change over time. The authors point out that previous research shows that “cycles between positive and negative emotions can have a positive impact on the learning process compared to maintaining a consistent emotion” (p24), which is precisely the kind of insight that can be lost in a data snapshot. Similarly, information can be lost when data is cleaned; for example, grouping students by their final letter grade may overemphasize the difference between a student who earned a high B and one who earned a low A, while underemphasizing the difference between a student who failed with a 0 and one who failed with a 69.
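The binning example can be made concrete with a small Python sketch. The cutoffs below are assumptions chosen only to match the post’s example (a scale on which anything below 70 fails), not a claim about any particular grading scheme:

```python
def letter_grade(score: float) -> str:
    """Bin a 0-100 score into a letter grade; below 70 fails,
    matching the post's example."""
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    return "F"

# A one-point difference crosses a bin boundary...
print(letter_grade(89), letter_grade(90))   # → B A
# ...while a 69-point difference disappears inside one bin.
print(letter_grade(0), letter_grade(69))    # → F F
```

The binned variable treats 89 and 90 as maximally different while treating 0 and 69 as identical – exactly the distortion the authors warn about when cleaned data replaces raw scores.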

Despite these challenges, Gray & Bergner aren’t discouraged about the potential for learning analytics to help understand and improve learning outcomes – in fact, their careful outline of the challenges facing various forms of data collection are a good step toward thoughtful, responsible data collection and use.

Further Reading


Learning Analytics: What Is It?

Written by: Julie Smith

Note: This post is the first in an occasional series about learning analytics, based on the Handbook of Learning Analytics.

Concepts that are difficult to define are sometimes compared to trying to nail Jell-O to a wall. That analogy could certainly apply to learning analytics – there’s no shortage of definitions of ‘learning’ or ways to measure and analyze it. So the response to the question ‘What Is Learning Analytics?’ (Lang et al., 2022) is a welcome framing of a complex topic; the authors present learning analytics through four different lenses:

First, learning analytics is a concern or a problem. That is, modern educational methods generate big data which needs to be analyzed. Not only does that require technical skills grounded in a sound approach, but it also raises issues related to privacy, ethics, and equity. The authors point out a “tension between learning as a creative and social endeavor and analytics as a reductionist process that is removed from human relationships” (p9).

Second, learning analytics is an opportunity. The data generated by learning management systems (such as Canvas and Blackboard) has created the possibility of gaining insight into the process of learning, not just its product.

Next, learning analytics has become a field of inquiry. What distinguishes it from other uses of data to improve education? The authors point to the idea of a ‘human in the loop’ as central to the field. That is, the goal is not to replace instructors or curriculum designers but rather to provide information to augment their decision-making. As a field, learning analytics has grown exponentially since its inception a little over a decade ago.

And, finally, it is a community. Focused around the Society for Learning Analytics Research, academics, researchers, educators, practitioners, and industry representatives have formed a community of practice.

This framework for understanding learning analytics through four different lenses provides a balanced approach to the promise and peril of using big data in educational contexts. Future posts will explore various methods and applications of learning analytics.


Further Reading

Join us at AERA 2023!

Attending the AERA Conference in Chicago this month? Join us as we present at American Educational Research Association (AERA) 2023 on April 14th and 15th. 

Our panel, “Co-constructing Systemic Support for Sustaining Humanizing and Inclusive Computer Science Teacher Education”, will be presented on Friday, April 14th at 9:45 AM (1st Floor of the Swissotel Chicago, Montreux 3). Our co-panelists are Michelle Friend (University of Nebraska Omaha), Maya Israel (University of Florida), Janice Mak (Arizona State University), Amy Ko (University of Washington), and Monica McGill (CSEdResearch.org).

The goal of this presentation is to shed light on some of the equity-based issues that are relevant in the computer science education community and to offer solutions that educators and practitioners can implement. Panelists will discuss equitable practices in education research and teacher preparation programs. There will also be plenty of time for questions from the audience.

We hope to see you there!