Category Archive: Research

Introduction to Learning Theories Series

Presented by Joe Tise, PhD, Educational Psychology & Senior Education Researcher at CSEdResearch.org

If data is a pile of bricks, theory is the building plan. Used together, they can build a house and uncover a valid representation of truth.

The traditional view of education research would say data without theory is no more useful than a pile of bricks without a building plan. This understanding is at the heart of traditional quantitative educational and psychological research. Quantitative educational researchers view theory as integral to the relationship between research and practice because it gives rise to causal hypotheses and, in turn, informs action. 

However, one may (convincingly) argue that recent developments in artificial intelligence (AI), data mining, machine learning (ML), and large language models (LLMs) uncover deep insights and relationships without being driven by any particular a priori theoretical perspective. While this is certainly true, I argue that researchers still must construct theoretical models (broadly construed) to make sense of the patterns and insights these empirical methods uncover. There is some inherent utility in using AI or ML to discover, for example, that students’ user log data in a learning management system can predict their eventual GPA or their likelihood of dropping the course. But understanding why these relationships exist requires theorizing, which so far cannot be accomplished via AI or ML.
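
To make that concrete, here is a minimal, hypothetical sketch (in Python, with simulated data and invented feature names, not drawn from any real study) of the kind of atheoretical pattern-finding described above: a model can learn that engagement features in LMS logs predict dropout without offering any account of why the relationship holds.

```python
# A minimal, hypothetical sketch of atheoretical pattern-finding: predicting
# course dropout from simulated learning management system (LMS) log features.
# All feature names, data, and coefficients are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_students = 500

# Hypothetical LMS features: weekly logins, minutes on course pages, forum posts.
X = np.column_stack([
    rng.poisson(5, n_students),
    rng.gamma(2.0, 30.0, n_students),
    rng.poisson(2, n_students),
])

# Simulated outcome: dropout becomes (noisily) less likely as engagement rises.
logit = 1.5 - 0.15 * X[:, 0] - 0.01 * X[:, 1] - 0.2 * X[:, 2]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The model surfaces *that* engagement predicts dropout risk...
print("held-out accuracy:", model.score(X_test, y_test))
print("feature weights:", model.coef_)
# ...but *why* (motivation? self-regulation? prior knowledge?) still requires
# a theoretical account that the model itself does not supply.
```

The point of the sketch is not the model but the gap it leaves: the coefficients describe a pattern, and a learning theory is what turns that pattern into an explanation.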

Further, as a qualitative researcher may be quick to point out, some research questions are simply too cutting-edge to be grounded in theory a priori. Still, save for truly exploratory research (where very little or even no prior research exists), educational researchers tend to engage a theory either as a guide prior to data collection or as an explanatory mechanism after data analysis, whether that theory is robust with decades of empirical support or fledgling and known only to the researcher.

As one manifestation of educational research, computer science education (CSEd) research needs to be grounded in established educational theory and/or generate new theory where established theory falls short. Fortunately, it can draw on nearly 100 years of educational research. Among the fruits of that research are four prominent theories of how learning occurs: behaviorism, information-processing theory, social-cognitive theory, and constructivism.

In this four-part series, I introduce and briefly overview each theory. In doing so, I offer a paraphrased definition of the (nebulous) term “learning” associated with each theory, outline its central assumptions, explicate its strengths and limitations, and recommend several seminal works.

I encourage education researchers who wish to study learning to pay attention to each. If you have limited time, however, I suggest paying special attention to the posts and recommended readings on information-processing and social-cognitive theories, as these two theories undergird much of contemporary educational research (whether or not they are explicitly mentioned in publications) and have proven powerful in explaining the complicated web of influences on human learning.


Reimagining CS Pathways: High School and Beyond

In the past four years, the proportion of US high schools offering at least one computer science (CS) course increased from one-third to one-half (source), and more growth is expected. Simultaneously, the field of computer science has shifted significantly, and we have continued to learn more about what it means to teach computer science with equitable outcomes in mind. One challenge in CS education is ensuring that curriculum and pedagogy adapt to this shifting ground; it is easy to imagine the frustration of a student who discovers that their high school CS instruction has left them poorly prepared for future opportunities to learn computer science.

We are pleased to announce, in collaboration with the Computer Science Teachers Association (CSTA), our new NSF-funded project to address this issue. With Bryan Twarek (PI) and Dr. Monica McGill (Co-PI) at the helm, the Reimagining CS Pathways: High School and Beyond project has the long-term goal of articulating a shared vision for introductory high school CS instruction that could fulfill a high school graduation requirement, as well as for how that content aligns with the two AP CS courses and with college-level CS courses.

Our work will not be done in isolation. Reimagining CS Pathways: High School and Beyond includes three convenings of K12 teachers and administrators, instructors at 2- and 4-year colleges, curriculum developers, industry representatives, state CS supervisors, and other vested parties. Written reports of these convenings will be shared with the public. Additionally, the project will create:

  • Recommendations for the content of an introductory high school CS course
  • Descriptions of high school CS courses beyond an introductory course, including suggested course outcomes
  • Recommendations for possible adjustments to the CSTA standards and the AP program
  • A framework for the process of creating similar course pathways in the future

Undergirding this work is a commitment to more equitable CS instruction, ensuring that all students – including those who have historically been less likely to study CS – will have access to these CS pathways. A more coordinated approach to high school and college level CS instruction is also more likely to meet the needs of industry and society as a whole.

This project expects to have its recommendations and framework available in the summer of 2024. 

Click here to learn more about this project.

If you are interested in participating, please reach out via our contact form or, for more information, contact julie@csedresearch.org.

Engineering PLUS Program + Webinars

The last few years have seen significant changes in the higher education landscape, including new legislation in many states that affects diversity, equity, and inclusion efforts as well as the Supreme Court’s recent decision related to affirmative action.

These changes have left many in higher education wondering how to craft policies and programs that will encourage participation by all students – including those who have been historically marginalized – while following the new laws.

The Engineering PLUS Alliance – an NSF-funded project with the goal of improving representation in engineering – invites you to participate in a program designed to help with these questions. The program includes interactive webinars where participants can learn from expert guest speakers and from each other as they develop a plan tailored to their role and context. Benefits of participating include opportunities to learn from others facing similar challenges, access to a curated resource collection, feedback and guidance on an action plan, and support and community.

The Engineering PLUS webinars are scheduled for September 14th, October 12th, and November 9th, from 1pm to 2:30pm central time. We request that each participant complete ‘pre-work’ (which will take less than one hour) for each webinar.

If you are interested in participating, please register here. If you have any questions, please contact Julie M. Smith at julie@csedresearch.org.

“But They Just Aren’t Interested in Computer Science” (Part Three)

Written by: Julie Smith

Note: this post is part of a series about the most-cited research studies related to K12 computer science education.

It’s discouraging to learn that children as young as age six express the belief that boys are better than girls at programming and at robotics, and that girls report less interest in computing and less confidence in their ability to succeed at it.

But the good news from the study Programming experience promotes higher STEM motivation among first-grade girls is that, in the researchers’ experiment, it was actually not that difficult to improve girls’ interest and self-efficacy: all it took was twenty minutes in the lab with a cute robot the girls could program with a smartphone. After that intervention, their interest and self-efficacy were statistically indistinguishable from boys’; the same was not true for girls who engaged in another activity unrelated to technology.

There’s a reason this article is one of the most commonly cited in the computer science education literature: the representation of women in computing – from high school courses through college majors and into the workforce – remains stubbornly low. This article suggests that, while stereotypes are adopted early, a relatively simple intervention for young children could be enough to overcome the effects of those stereotypes on girls’ interest in computing.


Series

“But They Just Aren’t Interested in Computer Science” (Part One)

“But They Just Aren’t Interested in Computer Science” (Part Two)

“But They Just Aren’t Interested in Computer Science” (Part Two)

Written by: Julie Smith

Note: this post is part of a series about the most-cited research studies related to K12 computer science education.

The study’s title says it all: “Gender stereotypes about interests start early and cause gender disparities in computer science and engineering.” It’s worth noting that the careful design of the studies bolsters the case: the work includes both surveys and experiments, allowing the researchers to comment on causality. That combination makes it possible to conclude that the stereotype drives girls’ lower interest, rather than an inherently lower interest leading students to generate the stereotype by imputing their own attitudes onto others. Additionally, the diverse subject pool makes it more likely that the findings are widely applicable.

The researchers found that stereotypes suggesting that boys are more interested in computer science exist from at least the third grade. Further, these stereotypes make it less likely for girls to study computer science, an effect mediated by the girls’ decreased sense of belonging. 

Significantly, stereotypes about interest in computer science were a stronger predictor of a student’s intent to study computer science than stereotypes about ability. The authors point out that there is a stronger cultural norm against expressing ability stereotypes than interest stereotypes, which may make interest stereotypes harder to root out. At the same time, the finding that students’ interest in studying computer science could be shifted by their experiences in an experiment implies that interventions designed to counteract stereotypes may very well be effective.

The fact that this study is one of the most-cited K12 computer science education research studies suggests that its message about the importance of interest stereotypes has resonated with many other researchers. The next step is to determine which types of interventions are most effective at breaking down those stereotypes.


Series

“But They Just Aren’t Interested in Computer Science” (Part One)

“But They Just Aren’t Interested in Computer Science” (Part Three)

Emerging Promising Practices for CS Integration

Our recently accepted paper, Emerging Practices for Integrating Computer Science into Existing K-5 Subjects in the United States, will be presented at WIPSCE 2023 in Cambridge, England. 

This particular qualitative work, conducted by Monica McGill, Laycee Thigpen, and Alaina Mabie of CSEdResearch.org, included interviews with researchers and curriculum designers (n=9) who have engaged deeply in K-5 CS integration for several years. Their perspectives were analyzed and synthesized to inform our results.

Several promising practices emerged for designing curriculum, creating assessments, and preparing teachers to teach in a co-curricular manner. These include giving teachers ways to vary instruction, integrating CS into core (and often tested) language arts and mathematics, and simplifying assessments. Many of the findings are born of the need to help teachers become comfortable teaching a new subject integrated into their other subjects.

Generally, the promising practices that emerged included adopting Universal Design for Learning practices, giving teachers ways to take the curriculum and vary instruction to fit their comfort levels as they learn to teach CS integration, and co-designing lessons with teachers. The experts also suggested capitalizing on integration into language arts, since it is a heavily tested and critical subject.

Figure 1. General findings.

For more specific findings, the experts suggested focusing on fractions when integrating with math, leveraging cause and effect in science to teach conditional logic, and reflecting upon how language is used similarly in English and in computing.

Subject-specific integration findings across ELA, math, science, and social studies:

  • ELA: use games and other tools, and reflect upon how language is used in English and in computing.
  • Math: go heavy on computational thinking, use virtual manipulatives, focus on fractions, and enhance learning with other tools.
  • Science: leverage cause and effect to teach conditional logic.
  • Social studies: incorporate cultural holidays into CS.

Figure 2. Subject specific findings.

You can read the full paper (including our methodology and profiles of our experts) here.

Monica M. McGill, Laycee Thigpen, and Alaina Mabie. 2023. Emerging Practices for Integrating Computer Science into Existing K-5 Subjects in the United States. In The 18th WiPSCE Conference on Primary and Secondary Computing Education Research (WiPSCE ’23), September 27–29, 2023, Cambridge, United Kingdom. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3605468.3609759 (effective after September 27th, 2023).

This material is based upon work supported by Code.org. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of Code.org.

We acknowledge and thank Brenda Huerta for her assistance with the literature review.

Conducting High-quality Education Research in Computing (2025 SIGCSE Affiliated Event)

Join us on Wednesday, February 26, 2025, from 1-5pm PST, in Pittsburgh, Pennsylvania, United States (the day before ACM SIGCSE) for a workshop on conducting high-quality, equity-enabling education research in computing!

If you attended last year’s session, you are welcome to attend this one as well; it covers four different topics. See below for more details.

Register using this form.

This event will be for computer science education researchers who want to learn more about:

  • Characteristics of high-quality education research,
  • How to conduct research that meets these characteristics, and
  • How to center the participants and their lived experiences throughout the research process.

The workshop will be held Wednesday, February 26, 2025, 1-5pm PST at ACM SIGCSE Technical Symposium as an affiliated event*.

Participants will learn about the guidelines and associated resources, discuss their application to current or proposed research projects, and gain a new appreciation for how to embed equity perspectives in each phase of their research. Specifically, participants will develop a broader perspective of literature reviews, ways to approach writing well-crafted abstracts, strategies for engaging with participants, and how to incorporate ethics into research. As a bonus, we’ll also dive into publication strategies, so when an article is rejected, you can reset, revise, and resubmit to other publication venues.

This interactive workshop is geared toward those studying computing education who want to learn more. We welcome participants with any level of education research experience. For this particular event, graduate students will be prioritized for the limited spaces available.

This presentation is supported by a National Science Foundation grant. U.S. citizens, nationals, and permanent residents will receive a $150 stipend for participating.

Facilitators for this event will include:

  • Monica McGill, Institute for Advancing Computing Education
  • Sarah Heckman, North Carolina State University
  • Julie Smith, Institute for Advancing Computing Education
  • Jennifer Rosato, National Center for Computer Science Education
  • Isabella Gransbury, North Carolina State University

This workshop is based in part on guidelines for conducting education research that were created during a 2023 ITiCSE workshop (McGill, M. M., Heckman, S., Chytas, C., Diaz, L., Liut, M., Kazakova, V., Sanusi, I. T., Shah, S. M., & Szabo, C. Conducting High-Quality Equity-Enabling Computing Education Research. Working Group Report, accepted with revisions).

*This event is in-person only. We are aware that attending in person is not feasible for all researchers; therefore, we hosted an 8-part webinar series from September to December 2024 to accommodate those who may not be able to attend in person in 2024 or 2025.

For questions about this event, please email monica@csedresearch.org.

Register using this form.

“But They Just Aren’t Interested in Computer Science” (Part One)

Written By: Julie Smith

Note: this post is part of a series about the most-cited research studies related to K12 computer science education.

When discussions about the lack of women in tech occur, it is sometimes observed that the disparities exist because girls just aren’t as interested in studying computer science in school and women just choose not to work in the tech industry. 

This sentiment is horribly misleading. It is true that research shows that girls and women are, on average, not as interested in studying or working in computing. But what is important to understand is that interest isn’t like eye color: it’s not an inherent, biological attribute that simply reflects human diversity. Rather, what we choose to be interested in is strongly influenced by what our culture conveys is appropriate for ‘people like us.’ This may seem to be an unusual way of thinking about the issue; we often assume that our interests are simply pure reflections of our personality and volition. But, at least for the case of interest in computer science, the evidence suggests otherwise.

In “Computing Whether She Belongs: Stereotypes Undermine Girls’ Interest and Sense of Belonging in Computer Science,” Allison Master, Sapna Cheryan, and Andrew N. Meltzoff describe two experiments that show how (dis)interest in computer science can be influenced by very simple interventions. They created photographs of stereotypical (think: Star Trek posters) and non-stereotypical computer science classrooms, showed them to high school students, and asked which classroom they would prefer. Girls were significantly more likely to express interest in the course set in the non-stereotypical classroom. (Boys’ interest was not affected.) In their second experiment, the researchers provided participants with written descriptions of a computer science classroom, some stereotypical and some not. Again, girls were much more interested in a course in the non-stereotypical classroom.

These two experiments are important because they show that interest in computer science isn’t hard-wired. Rather, it appears to be strongly influenced by whether computing is presented as conforming to stereotypes that aren’t as welcoming to girls. For those of us concerned with the negative effects of the lack of women in tech – not just on the women themselves but on a society that is ever-increasingly shaped by technology – these results are good news because they show that relatively simple interventions can increase girls’ interest in the study of computing.


Series

“But They Just Aren’t Interested in Computer Science” (Part Two)

“But They Just Aren’t Interested in Computer Science” (Part Three)

National Intern Day – Intern Shout Out

Last week was National Intern Day, which gave me another reason to reflect on the many students I’ve worked with over the last seven years who have contributed to the K-12 CS Education Research Resource Center on our site.

The Resource Center has been under development since 2017, funded originally by a National Science Foundation grant and now by Amazon Future Engineer. We owe much to the many interns who have worked on the project over this time, whether they supported our mission for one semester or for three years. I’m also thrilled to say that I was fortunate enough to publish several articles with one-third of our interns in an effort to engage them in computing education research. (Why, yes, that was my attempt to bring them over to the dark side!)

It’s really incredible and I am personally grateful for their contributions and camaraderie. I’m also thankful that so many have stayed in touch with me after graduating and starting their post-college careers.

So, HUGE SHOUT OUT to all of you! And many, many thanks from me and a grateful research community who still uses your contributions today.

Monica McGill, President & CEO, CSEdResearch.org

 

Media

  • Emily Nelson, Undergraduate Student, Bradley University (current)

Data Curation (2017-2023)

  • Alia Saadi El Hassani, Undergraduate Student, Knox College
  • Alaina Mabie, Undergraduate Student, Bradley University
  • Arsalan Bin Najeeb, Undergraduate Student, Knox College
  • Ava Lu, Undergraduate Student, Knox College
  • Bishakha Awale, Undergraduate Student, Knox College
  • Bishakha Upadhyaya, Undergraduate Student, Knox College
  • Brenda Huerta, Undergraduate Student, Bradley University
  • Emily Schroeder, Undergraduate Student, Knox College
  • Jessica Potter, Undergraduate Student, Bradley University
  • Joey Reyes, Undergraduate Student, Knox College
  • Ma’Kiah Holliday, Undergraduate Student, Rochester Institute of Technology
  • Olivia Lu, Undergraduate Student, Bradley University
  • Sarah Wallenfelsz, Undergraduate Student, Knox College
  • Sean Mackay, Graduate Student, University at Buffalo
  • Shebaz Chowdhury⁺, Undergraduate Student, Knox College
  • Tavian James, Undergraduate Student, Knox College
  • Zachary Abbott, Undergraduate Student, Bradley University

Software Development (2017-2023)

  • Bishakha Upadhyaya, Undergraduate Student, Knox College
  • Hung Vu, Undergraduate Student, Knox College
  • Momin Zahid, Undergraduate Student, Knox College
  • Nate Blair, Graduate Student, Rochester Institute of Technology
  • Nhan Thai, Undergraduate Student, Knox College
  • Thu Nguyen, Undergraduate Student, Knox College
  • Trang Tran, Undergraduate Student, Knox College

 

⁺ Deceased

Learning Isn’t Observable. So How Do We Measure It?

Written By: Julie Smith

Note: This post is the second in an occasional series about learning analytics, based on the Handbook of Learning Analytics.

We measure learning constantly – think of grades on spelling quizzes, SAT scores, and the bar exam. But it’s worth remembering that learning itself cannot be observed in the way the growth of a plant can: when it comes to learning, we have to decide how to measure something we can’t see. Gray & Bergner (2022) outline the choices that must be made when educational researchers operationalize a learning construct – such as a sense of self-efficacy, the ability to work productively in a group, or subject matter knowledge. Learning isn’t observable, so how do we measure it?

First, the decisions: Gray & Bergner present a very helpful distinction between measurements designed to understand a construct and those designed to improve it. The two aren’t the same: we might understand something (for example, that students who spend more time on discussion boards earn higher grades) without being able to improve it (if we direct students to spend more time on discussion boards, they may spend less time reviewing for an exam and earn a lower grade).

Next, Gray & Bergner review the strengths and weaknesses of three kinds of data that can be used for educational measurement. Validated and reliable surveys exist, and they are easy to administer at scale. But they may suffer from various biases related to self-reporting. Trace data (such as keystroke data from students using an educational technology learning platform) can be gathered unobtrusively, but it can be difficult to draw conclusions from it. Text data can be a rich source of insight into a student’s thought processes, but training either machine learning models or humans to assess and code such data is tricky. 

Other pitfalls exist as well. One challenge with many forms of data is that they don’t capture change over time. The authors point out that previous research shows that “cycles between positive and negative emotions can have a positive impact on the learning process compared to maintaining a consistent emotion” (p. 24), which is precisely the kind of insight that can be lost in a data snapshot. Similarly, information can be lost when data is cleaned; for example, grouping students by their final letter grade may both overemphasize the difference between a student who earned a high B and one who earned a low A, and underemphasize the difference between a student who failed with a 0 and one who failed with a 69.
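
To illustrate that last point, here is a small, hypothetical Python sketch. The letter-grade cutoffs are assumed for illustration only and follow the example above, in which any score below 70 counts as failing; they are not taken from Gray & Bergner.

```python
# Hypothetical illustration: binning numeric final scores into letter grades
# discards information. Cutoffs are assumed for illustration (below 70 = F,
# matching the example above); they are not from the chapter being discussed.
def letter_grade(score: float) -> str:
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    return "F"

pairs = [
    (89, 90),  # one point apart, yet different bins (B vs. A)
    (0, 69),   # sixty-nine points apart, yet the same bin (F vs. F)
]

for low, high in pairs:
    print(f"scores {low} and {high}: numeric gap = {high - low}, "
          f"letter grades = {letter_grade(low)} vs. {letter_grade(high)}")
```

After binning, the 1-point difference becomes a category boundary while the 69-point difference vanishes entirely, which is exactly the kind of information loss the authors warn about.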

Despite these challenges, Gray & Bergner aren’t discouraged about the potential for learning analytics to help us understand and improve learning outcomes – in fact, their careful outline of the challenges facing various forms of data collection is a good step toward thoughtful, responsible data collection and use.


Series

Learning Analytics: What is it?