Category Archive: Research

“But They Just Aren’t Interested in Computer Science” (Part One)

Written By: Julie Smith

Note: This post is part of a series about the most-cited research studies related to K-12 computer science education.

In discussions about the lack of women in tech, it is sometimes claimed that the disparities exist because girls simply aren’t as interested in studying computer science in school and women simply choose not to work in the tech industry.

This sentiment is horribly misleading. It is true that research shows that girls and women are, on average, not as interested in studying or working in computing. But what is important to understand is that interest isn’t like eye color: it is not an inherent, biological attribute that simply reflects human differences. Rather, what we choose to be interested in is strongly influenced by what our culture conveys is appropriate for ‘people like us.’ This may seem an unusual way of thinking about the issue; we often assume that our interests are simply pure reflections of our personality and volition. But, at least in the case of interest in computer science, the evidence suggests otherwise.

In a recent report, Allison Master, Sapna Cheryan, and Andrew N. Meltzoff describe two experiments showing how (dis)interest in computer science can be influenced by very simple interventions. In the first, they created photographs of stereotypical (think: Star Trek posters) and non-stereotypical computer science classrooms, showed them to high school students, and asked which classroom they would prefer. Girls were significantly more likely to express interest in the course held in the non-stereotypical classroom. (Boys’ interest was not affected.) In the second experiment, the researchers gave participants written descriptions of a computer science classroom, some stereotypical and some not. Again, girls were much more interested in a course in the non-stereotypical classroom.

These two experiments are important because they show that interest in computer science isn’t hard-wired. Rather, it appears to be strongly influenced by whether computing is presented in ways that conform to stereotypes that are unwelcoming to girls. For those of us concerned about the negative effects of the lack of women in tech – not just on the women themselves but on a society increasingly shaped by technology – these results are good news, because they show that relatively simple interventions can increase girls’ interest in the study of computing.

Further Reading

Series

“But They Just Aren’t Interested in Computer Science” (Part Two)

“But They Just Aren’t Interested in Computer Science” (Part Three)

National Intern Day: Intern Shout Out

Last week was National Intern Day, which gave me another reason to reflect on the many students I’ve worked with over the last seven years who have contributed to the K-12 CS Education Research Resource Center on our site.

The Resource Center has been under development since 2017, originally funded by a National Science Foundation grant and now by Amazon Future Engineer. We owe much to the many interns who have worked on the project over that time, whether they supported our mission for one semester or for three years. I’m also thrilled to say that I was fortunate enough to publish several articles together with one-third of our interns in an effort to engage them in computing education research. (Why, yes, that was my attempt to bring them to the dark side!)

It’s really incredible and I am personally grateful for their contributions and camaraderie. I’m also thankful that so many have stayed in touch with me after graduating and starting their post-college careers.

So, HUGE SHOUT OUT to all of you! And many, many thanks from me and a grateful research community that still uses your contributions today.

Monica McGill, President & CEO, CSEdResearch.org


Media

  • Emily Nelson, Undergraduate Student, Bradley University (current)

Data Curation (2017-2023)

  • Alia Saadi El Hassani, Undergraduate Student, Knox College
  • Alaina Mabie, Undergraduate Student, Bradley University
  • Arsalan Bin Najeeb, Undergraduate Student, Knox College
  • Ava Lu, Undergraduate Student, Knox College
  • Bishakha Awale, Undergraduate Student, Knox College
  • Bishakha Upadhyaya, Undergraduate Student, Knox College
  • Brenda Huerta, Undergraduate Student, Bradley University
  • Emily Schroeder, Undergraduate Student, Knox College
  • Jessica Potter, Undergraduate Student, Bradley University
  • Joey Reyes, Undergraduate Student, Knox College
  • Ma’Kiah Holliday, Undergraduate Student, Rochester Institute of Technology
  • Olivia Lu, Undergraduate Student, Bradley University
  • Sarah Wallenfelsz, Undergraduate Student, Knox College
  • Sean Mackay, Graduate Student, University at Buffalo
  • Shebaz Chowdhury⁺, Undergraduate Student, Knox College
  • Tavian James, Undergraduate Student, Knox College
  • Zachary Abbott, Undergraduate Student, Bradley University

Software Development (2017-2023)

  • Bishakha Upadhyaya, Undergraduate Student, Knox College
  • Hung Vu, Undergraduate Student, Knox College
  • Momin Zahid, Undergraduate Student, Knox College
  • Nate Blair, Graduate Student, Rochester Institute of Technology
  • Nhan Thai, Undergraduate Student, Knox College
  • Thu Nguyen, Undergraduate Student, Knox College
  • Trang Tran, Undergraduate Student, Knox College


⁺ Deceased

Learning Isn’t Observable. So How Do We Measure It?

Written By: Julie Smith

Note: This post is the second in an occasional series about learning analytics, based on the Handbook of Learning Analytics.

We measure learning constantly – think of grades on spelling quizzes, SAT scores, and the bar exam. But it’s worth remembering that learning itself cannot be observed in the same way that the growth of a plant can: when it comes to learning, we have to make decisions about how to measure something we can’t see. Gray & Bergner (2022) outline the choices that must be made when educational researchers operationalize a learning construct – such as a sense of self-efficacy, the ability to work productively in a group, or subject matter knowledge. Learning isn’t observable, so how do we measure it?

First, the decisions: Gray & Bergner present a very helpful distinction between measurements designed to understand a construct and those designed to improve a construct. They aren’t the same: we might understand something (for example, that students who spend more time on discussion boards earn higher grades) without being able to improve it (if we direct students to spend more time on discussion boards, they may spend less time reviewing for an exam and get a lower grade).

Next, Gray & Bergner review the strengths and weaknesses of three kinds of data that can be used for educational measurement. Validated and reliable surveys exist, and they are easy to administer at scale. But they may suffer from various biases related to self-reporting. Trace data (such as keystroke data from students using an educational technology learning platform) can be gathered unobtrusively, but it can be difficult to draw conclusions from it. Text data can be a rich source of insight into a student’s thought processes, but training either machine learning models or humans to assess and code such data is tricky. 

Other pitfalls exist as well. One challenge with many forms of data is that they don’t capture change over time. The authors point out that previous research shows that “cycles between positive and negative emotions can have a positive impact on the learning process compared to maintaining a consistent emotion” (p. 24), which is precisely the kind of insight that can be lost in a data snapshot. Similarly, information can be lost when data is cleaned; for example, grouping students by their final letter grade may both overemphasize the difference between a student who earned a high B and one who earned a low A and underemphasize the difference between a student who failed with a 0 and one who failed with a 69.
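To make that last point concrete, here is a minimal sketch in Python (the grade cutoffs and student scores are hypothetical, not taken from Gray & Bergner) of how binning numeric scores into letter grades discards information:

```python
# Hypothetical illustration: binning numeric scores into letter grades
# collapses some differences and exaggerates others.

def letter_grade(score: float) -> str:
    """Map a 0-100 score to a letter grade (assumed cutoffs, passing at 70)."""
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    elif score >= 70:
        return "C"
    return "F"

scores = {
    "student_1": 89.5,  # high B
    "student_2": 90.0,  # low A: nearly identical to student_1, different bin
    "student_3": 0.0,   # failed with a 0
    "student_4": 69.0,  # failed with a 69: very different, same bin as student_3
}

for student, score in scores.items():
    print(f"{student}: {score} -> {letter_grade(score)}")
```

The half-point gap between the first two students becomes a full letter-grade difference, while the 69-point gap between the last two disappears entirely – exactly the kind of information loss the authors caution against.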

Despite these challenges, Gray & Bergner aren’t discouraged about the potential for learning analytics to help understand and improve learning outcomes – in fact, their careful outline of the challenges facing various forms of data collection is a good step toward thoughtful, responsible data collection and use.

Further Reading

Series

Learning Analytics: What is it?

Learning Analytics: What Is It?

Written by: Julie Smith, PhD, Senior Education Researcher, IACE

Note: This post is the first in an occasional series about learning analytics, based on the Handbook of Learning Analytics.

Concepts that are difficult to define are sometimes compared to trying to nail Jell-O to a wall. That analogy could certainly apply to learning analytics – there’s no shortage of definitions of ‘learning’ or of ways to measure and analyze it. So the response to the question ‘What Is Learning Analytics?’ (Lang et al., 2022) is a welcome framing of a complex topic; the authors present the concept through four different lenses:

First, learning analytics is a concern or a problem. That is, modern educational methods generate big data that needs to be analyzed. Not only does that require technical skills grounded in a sound approach, but it also raises issues related to privacy and ethics. The authors point out a “tension between learning as a creative and social endeavor and analytics as a reductionist process that is removed from human relationships” (p. 9).

Second, it is an opportunity. The data generated by learning management systems (such as Canvas and Blackboard) has created the possibility of gaining insight into the process of learning, not just its product.

Next, it has become a field of inquiry. What distinguishes it from other uses of data to improve education? The authors point to the idea of a ‘human in the loop’ as central to the field. That is, the goal is not to replace instructors or curriculum designers but rather to provide information to augment their decision-making. As a field, learning analytics has grown exponentially since its inception a little over a decade ago.

And, finally, it is a community. Centered on the Society for Learning Analytics Research, academics, researchers, educators, practitioners, and industry representatives have formed a community of practice.

This four-lens framework provides a balanced approach to the promise and peril of using big data in educational contexts. Future posts will explore various methods and applications of this concept.


Further Reading

Attending SIGCSE Technical Symposium 2023? We’ll be there!

We’ll be actively engaged in the 2023 SIGCSE Technical Symposium (TS) in Toronto.

If you’d like to learn more about the work we have recently been engaged in, be sure to stop by any of our sessions. Only our workshop requires registration.


Day/Time | Type | Room | Title

Wednesday, March 15, 7-10pm | Workshop | 713 | Creating and Modifying Existing Surveys to Fit Your CS Education Research Needs (In-Person)
Ryan Torbey (AIR), Monica McGill (CSEdResearch.org), Lisa Garbrecht (University of Texas at Austin)

Thursday, March 16, 11:35am | Paper Presentation | 715 | Growing an Inclusive Community of K-12 CS Education Researchers (In-Person)
Sloan Davis (Google), Monica McGill (CSEdResearch.org)

Friday, March 17, 11:10am | Paper Presentation | 701B | Building upon the CAPE Framework for Broader Understanding of Capacity in K-12 CS Education (In-Person)
Monica McGill (CSEdResearch.org), Angelica Thompson (CSEdResearch.org), Isabella Gransbury (North Carolina State University), Sarah Heckman (North Carolina State University), Jennifer Rosato (College of St. Scholastica), Leigh Ann DeLyser (CSforALL)

Friday, March 17, 3:45pm | Panel | 718A | Building Capacity Among Black Computer Science Educators (Hybrid)
Angelica Thompson (CSEdResearch.org), Allen Antoine (The University of Texas at Austin), Anita Debarlaben (University of Chicago Laboratory Schools), Donald Saint-Germain (University Heights Secondary School), Leon Tynes (Xavier College Preparatory), Vanessa Jones (Computer Science Teachers Association (CSTA))

Friday, March 17, 4:10pm | Paper Presentation | 701B | Measuring Teacher Growth Based on the CSTA K-12 Standards for CS Teachers (In-Person)
Monica McGill (CSEdResearch.org), Amanda Bell (CSTA), Jake Baskin (Computer Science Teachers Association), Anni Reinking (CSEdResearch.org), Monica Sweet (University of California San Diego CREATE)

Community Listening Sessions

Given the recent discussions across asynchronous platforms this past week, many related to feedback from SIGCSE TS reviews, we have set up two listening sessions for community members to talk about their experiences and perspectives.

The listening sessions are an opportunity to share concerns about barriers within the community that prevent researchers from reaching their full potential (or, even worse, drive them out of the community) and, ultimately, from helping the hundreds of thousands of teachers and millions of students that we aim to support.

October 12, 4-5pm CT

October 13, 1-2pm CT

The sessions will not be a time for anyone to offer excuses or rationale for these barriers. Also, you do not need to attend both sessions.

Sessions will be moderated, and norms will be set to minimize disrespect and harm. You will be able to share concerns anonymously if you choose. Sessions will be closed-captioned. If a disability or a scheduling conflict makes attending impossible, please feel free to email me directly with your concerns.

We offer this as a community space to recognize the harm and trauma experienced by members, in an effort to move the dialogue toward actionable steps for improvement at a later time. As an aside, the SIGCSE TS committee has offered to share the aggregated results of these listening sessions on their website, through a blog post or another mechanism, in conjunction with other plans they have for addressing these issues. Though these listening sessions are independent of the SIGCSE board and the SIGCSE TS committee, we hope they can help inform their future plans. However, we are also very open to hearing more broadly about barriers that go beyond SIGCSE conferences and the SIGCSE community.

Connecting K-5 Students to Integrated Computer Science

We recently partnered with Code.org to conduct a national study focused on how K-5 teachers integrate computer science (CS) into their curriculum. Why? Well, Code.org is working on a new and unique CS curriculum called Computer Science Connections.

The goal of their curriculum is to teach computing by making critical connections between learning CS and other subjects like math, language arts, science, and social studies. 

Presently, there is minimal research that discusses how and why teachers integrate CS into other subject areas, and there is also minimal scholarship focused on the barriers teachers and administrators may face when attempting to integrate CS into other K-5 content areas. We believe this will be an important area to watch in the next few years as CS enters more K-5 classrooms and teachers struggle to take on a new subject without any more hours in the day to teach it.

Where do we come in? We will be reaching out to states all over the country – 29 in total – to get an overarching view of how and why CS is integrated (or not) into K-5 classrooms. We will also be conducting a systematic literature review (SLR) to better facilitate conversations around promising practices for integrating CS into K-5 learning environments. Overall, this information will serve as a launching point for Code.org as they continue to expand their mission of teaching all students computer science.

Are you interested in finding out more about our work? Watch our social media and look for our ongoing updates, or visit Code.org’s Computer Science Connections page and start integrating CS into your curriculum.


Computer Science Teachers’ Problems of Practice: Solve This!

In 2021 we received funding from an ACM SIGCSE Special Projects Grant, with our colleague Dr. Michelle Friend (University of Nebraska – Omaha), for a project we called Solve This! Problems of Practice Teachers Face in K-12 CS Education. Since then we have been gathering, analyzing, and disseminating the findings. Overall, our goal for this project is to provide a platform for researchers to understand the authentic problems of practice that teachers face, in order to bridge the gap between research and practice.

What have we accomplished so far?

At the beginning of the project we designed a survey to be sent to teachers around the world. The survey included demographic questions about the teachers and their locale, but most importantly it asked about the problems of practice they experience when planning, teaching, or attempting to plan or teach computer science in their school or classroom. Once the survey had undergone internal and external face validity checks, we disseminated it, reaching teachers in Ireland, Canada, and the United States. We opened the survey in July 2021 and closed it in October 2021, receiving over 700 responses.

Table of survey results from problems of practice teacher survey

After cleaning the data, we were left with 396 responses. We created over 40 codes as we analyzed the data, and several themes emerged. Although we are still in the process of data analysis, some of the initial findings include problems of practice such as a lack of teaching time or schedule availability to teach CS, poor academic habits, and challenges related to student interactions or partner work. We have been able to share initial results at several conferences, and our paper examining our initial set of data has been accepted to Koli Calling 2022.

What is next? 

Our goal is to have our interactive K-12 CS education problems of practice site populated and ready for use by the end of this year. All of the problems of practice entered through this study will be added to our website and will be searchable by the demographics of the teachers who submitted them (e.g., country, years teaching CS).

For researchers, this site will provide you with the problems teachers are facing and can help inform your research agenda. 

Teachers will be able to upvote problems of practice that they experience and will be able to add their own problems. 

Watch our social media platforms for our Problems of Practice page announcement!

CS Teachers’ Reflection on the CSTA K-12 Teacher Standards

This past summer, IACE had the opportunity to partner with the Computer Science Teachers Association (CSTA) and CREATE, a research center at the University of California, San Diego, to develop an assessment of teachers’ understanding and use of the CSTA K-12 CS Teacher Standards. As part of this process, we wanted to understand how the Standards can help inform CS teachers’ professional reflection process and their professional development trajectory. With funding from E_CSPD_Week, a U.S. Department of Education EIR grant, CSEdResearch.org joined the partnership to break the CSTA K-12 Teacher Standards down into usable rubric language for personalized reflection and feedback. This summer we piloted a reflection-based assessment for Standards 2-5, with Standard 1 to be piloted next summer. After piloting the process in two states, Indiana and South Carolina, we learned a lot and continue to improve the process.

To provide a high-level overview of the work that went into the process: our team, with assistance from CSTA, dissected CSTA K-12 Teacher Standards 2-5 to create 18 rubric items and scales across three main categories: 1) planning, 2) assessment, and 3) professional growth and development. We then created an entry form to collect data from a group of teachers in Indiana and South Carolina who participated in the pilot. We are currently in the next phase: scoring, and developing a process for external expert readers to provide feedback to the teachers who submit their information as part of this optional process. Our work has resulted in a set of recommendations on how to improve the process so teachers can more easily collect and enter their data, which we shared with CSTA and CREATE during a recent discussion. Once completed, the assessment will be tested with a wider group of teachers in summer 2023 and will go through a second revision.

We are also starting work on an assessment for Standard 1, CS content knowledge. Working with Dr. Adrienne Decker, we will be creating a brief assessment for AP CS A targeted at high school teachers, which we will pilot in summer 2023.

Stay tuned for updates on this project. 

Educate Maine: Decreasing Financial Barriers and Increasing Access to Coding

At IACE we find great value in lifting up the voices of partners who are doing important work in the computer science community. One of those partners is Educate Maine.


This summer, Educate Maine’s signature project, Project>Login, hosted five Girls Who Code camps all over the state of Maine.

They were able to provide these camps for free, decreasing financial barriers and increasing access for all students.  The girls who participated were able to engage with industry professionals, learn from experienced teachers, and make memories to last a lifetime.

What was our role in this amazing experience? External evaluators. As part of this work, Educate Maine is continuing to reflect on and improve their practices through evaluations. The evaluations focused on student and teacher experiences during the week-long camps all over the state. Most of the girls who participated in Project>Login’s Girls Who Code camps do not have coding at their school or a Girls Who Code after-school program, so this summer experience is truly increasing their knowledge of what it means to be a “coder” and, more broadly, a “computer scientist.”

WEX Industry Partner with Girls Who Code campers

Girls at their Girls Who Code camp

Partner work at one of the Girls Who Code camps

Interested in checking out our other projects? Click here.