Category Archive: Research

Coaching as a Means to Support Teacher Development of Computer Science Knowledge and Skills

Authored by Monica McGill, EdD

Many teachers in the United States are just starting to learn how to teach computer science (CS). However, the majority do not have others nearby with CS teaching experience to provide support. Further, after educators complete CS professional development, there are few opportunities for individual feedback on their classroom practice. 

Piloting a Coaching Program

To address this gap, the Northern Lights Collaborative, CodeSavvy, and the Institute for Advancing Computing Education (IACE) piloted and studied the impacts of a one-year remote coaching program designed to provide individualized support to teachers. The coaching model used in this study is built around the CSTA Standards for CS Teachers (CSTA, 2020) and accompanying resources, with a specific focus on teaching practices that meet students’ needs.

Our research question for this project was: How does teachers’ ability to apply CS practices and knowledge of CS concepts change after the coaching process? 

We used a mixed-methods study that leveraged three primary forms of data from teachers who were coached (coachees) and teachers providing coaching (coaches): pre- and post-surveys, coaching logs, and self-reflection checklists. 

Highlights

Coachees rated their CS knowledge and skills related to Standard 1 significantly higher after the intervention than before it, providing preliminary evidence that the coaching increased their reported knowledge and skills. Coachees also showed growth in their knowledge of CS concepts and in their ability to apply CS practices. Coaches supported this growth by providing encouragement, advice, suggestions, and reflective questions to help coachees develop as CS teachers.

As more teachers learn how to teach CS and hone their skills and practices, engaging them in coaching can be a powerful way to improve their students’ learning. Professional development providers and regional education agencies (districts and schools) could leverage the processes formed through this intervention, which is based on professional development practices with solid evidence of positive impact, to provide similar coaching to teachers just learning how to teach CS.

You can read our paper here. To cite this article, please use Rosato, J., Tise, J., Thigpen, L., Brunson, F. & McGill, M. (2025). Coaching as a Means to Support Teacher Development of Computer Science Knowledge and Skills. In Proceedings of SITE 2025 (pp. 1782-1792). Waynesville, NC USA: Association for the Advancement of Computing in Education (AACE). Retrieved March 20, 2025 from https://www.learntechlib.org/primary/p/225756/.

This material is based upon work supported by the U.S. National Science Foundation under Grant No. 2122511. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

What do we know about engineering students who are parents?

Published by Julie Smith, PhD, IACE

About one-fifth of undergraduate students are parents, and these students are likely to differ somewhat from other students. However, little research has been conducted on engineering students who are parents.

For work conducted as part of the Engineering PLUS Alliance funded by the National Science Foundation, we explored data from the U.S. 2020 National Postsecondary Student Aid Study. We wanted to determine how engineering students who are parents differ from other engineering students. In this study presented at FIE 2024 in Washington, DC, our research question was:

How do engineering students who are parents differ from other engineering students?

We found that engineering students are far less likely than other students to be parents (6% v. 18%). Engineering students who are parents differ from other engineering students. They are more likely to be older, to be men, to be Black or African American, to have a job, to attend school part-time, and to attend a nonselective college or a two-year college. They are also more likely to have no funding for tuition other than a credit card and to experience food insecurity. Surprisingly, they are less likely to attend a school that offers child care.

Similarities and differences between parents and non-parents who are studying engineering.

However, engineering students who are parents do not significantly differ from other engineering students in disability status or in their likelihood of having a high or low GPA. This final finding is particularly important. Research that analyzes the experiences of some groups – including student parents – often assumes a deficit framework. That is, the differences between groups may be assumed to stem from problems within some groups. This framing may cause problems if group members are blamed for the discrepancy between their performance and that of others, while their potential strengths are given little attention. Thus, we feel it is important to focus on strengths as well as challenges faced by student parents.

Our findings indicate that engineering students who are parents face unique challenges, such as financial insecurity. However, they also demonstrate notable strengths—for example, maintaining GPAs comparable to their peers despite the many demands on their time. Additionally, the high proportion of Black engineering students among student parents suggests that supporting this group more effectively could be a meaningful way to improve representation in engineering education.

Our goal is to work with engineering students to determine current barriers and how institutions can support those who are parents.

You can learn more about our work in our 2024 FIE paper, Landscape of Student Parents Studying Engineering. Cite this work as follows: Smith, J. M. (2024, October). Landscape of Student Parents Studying Engineering. In 2024 IEEE Frontiers in Education Conference (FIE) (pp. 1-7). IEEE.

This work is supported by the National Science Foundation under award HRD-2119930. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

 

Validity and Reliability in Qualitative Research

Post prepared and written by Joe Tise, PhD, Senior Education Researcher

In this series we have explored the many ways to produce evidence of validity and to produce reliable data. To be sure, the bulk of this series has focused on quantitative research, but any mixed-methods or qualitative researcher will tell you that quantitative research only tells us one piece of the puzzle.

Qualitative research is needed to answer questions not suited for quantitative research, and validity and reliability need to be considered in qualitative research too. Qualitative research includes numerous methodological approaches, such as individual and focus group interviews, naturalistic observations, artifact analysis, and even open-ended survey questions. Unlike quantitative research, which relies on forms, surveys, tests, institutional data, and the like, qualitative research often makes the researcher both the data collection instrument and the analysis instrument.

Researchers usually don’t run a statistical analysis on qualitative data; instead, a researcher typically analyzes the qualitative data, extracts meaning from it, and answers a research question from that meaning. Though this is similar to quantitative research, some of the analysis methods can be viewed as more subjective.

So, how can we know that results obtained from a qualitative analysis reflect some truth, and not the researcher’s personal biases, experiences, or lenses?

Reliability and validity are equally important to consider in qualitative research. Ways to enhance validity in qualitative research include:

  • Use multiple analysts
  • Create/maintain audit trails
  • Conduct member checks
  • Include positionality statements
  • Solicit peer review of analytical approach
  • Triangulate findings via multiple data sources
  • Search for and discuss negative cases (i.e., those which refute a theme)

Building reliability can include one or more of the following:

  • Clearly define your codes and criteria for applying them
  • Use detailed transcriptions which include things like pauses, crosstalk, and non-word verbal expressions
  • Train coders on a common set of data
  • Ensure coders are consistent with each other before coding the rest of the data
  • Periodically reassess interrater agreement/reliability
  • Use high-quality recording devices

The most well-known approaches to qualitative reliability in education research are inter-rater reliability and consensus coding. I want to make a distinction between two common measures of inter-rater reliability: percent agreement and Cohen’s Kappa.

Percent agreement refers to the percentage of coding instances in which two raters assign the same code to a common “piece” of data. Because this is a simple percentage, it’s intuitive to understand. But it also does not account for chance: in any deductive coding framework (i.e., when all possible codes are already defined), there is a random chance that two coders will apply the same code without actually “seeing” the same thing in the data.

By contrast, Cohen’s Kappa is designed to parse out the influence of chance agreement, and for this reason Cohen’s Kappa will always be smaller than the percent agreement for a given dataset. Many qualitative data analysis software packages (e.g., NVivo) will calculate both percent agreement and Cohen’s Kappa.
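To make the distinction concrete, here is a minimal sketch of both measures in plain Python. The two raters, the three-code scheme, and the ten excerpts are hypothetical; in practice a package such as NVivo (or scikit-learn's `cohen_kappa_score`) would do this computation for you.

```python
from collections import Counter

def percent_agreement(codes_a, codes_b):
    """Proportion of data segments to which both raters assigned the same code."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

def cohens_kappa(codes_a, codes_b):
    """Agreement corrected for chance: kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(codes_a)
    p_o = percent_agreement(codes_a, codes_b)  # observed agreement
    freq_a = Counter(codes_a)
    freq_b = Counter(codes_b)
    # Expected chance agreement: product of each rater's marginal proportions,
    # summed over all codes.
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (p_o - p_e) / (1 - p_e)

# Two raters each assign one of three codes (A, B, C) to ten interview excerpts.
rater1 = ["A", "A", "B", "B", "C", "A", "B", "C", "A", "B"]
rater2 = ["A", "A", "B", "C", "C", "A", "B", "C", "B", "B"]

print(percent_agreement(rater1, rater2))  # 0.8
print(round(cohens_kappa(rater1, rater2), 3))  # 0.697
```

Note that the raters agree on 8 of 10 excerpts (80%), but once expected chance agreement is subtracted, Kappa is about 0.70 — illustrating why Kappa is always lower than raw percent agreement.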

In consensus coding, multiple raters code the same data, discuss the codes that may apply, and decide together how to code the data. With consensus coding, the need for inter-rater agreement/reliability metrics is circumvented, because by definition, you will always have 100% agreement/reliability. The major downside of consensus coding is, of course, the time and effort needed to engage in it. With large sets of qualitative data, consensus coding may not be feasible.

For a deeper dive into these topics, there are many excellent textbooks that explore the nuances of qualitative validity and reliability. Below, you’ll find a selection of recommended resources, as well as others that provide detailed insights into strengthening qualitative research methods.

Resources

Corbin, J., & Strauss, A. (2015). Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory (4th ed.). Sage Publications.
Creswell, J. W., & Báez, J. C. (2021). 30 Essential Skills for the Qualitative Researcher (2nd ed.). Sage Publications.
Creswell, J. W., & Poth, C. N. (2018). Qualitative inquiry and research design: Choosing among five approaches. Sage Publications.
Saldaña, J. (2013). An introduction to codes and coding. In The coding manual for qualitative researchers (pp. 1–40). Sage Publications.

Reliability in Education Research: A Deeper Examination

Presented by Joe Tise, PhD, Senior Education Researcher at IACE

We can view reliability in education research as inherently linked to measurement error. For example, to be confident in claims we make about something we assess, the assessment must produce reliable data. A thermometer that says it’s 72 degrees Fahrenheit one minute, but 56 degrees the next would not instill confidence. Likewise, a measure of a student’s computational thinking skills that places the student in the top 1% of all students one day, but the next places them in the bottom 5% would not be very useful. Clearly, any measure that produces data with that much variation likely contains a lot of error in the measurement (assuming no true change in the measured variable occurred).

The Standards for Educational and Psychological Testing defines reliability as “the more general notion of consistency of the scores across instances of the testing procedure” (American Educational Research Association et al., 2014, p. 33). Note in this definition that reliability refers to the data, not the test itself. Tests that produce data that are not reliable contain more error than tests that produce highly reliable data.
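As a concrete illustration of quantifying this kind of score consistency, here is a minimal sketch of Cronbach's alpha, a widely used internal-consistency reliability coefficient. The student-by-item score matrix is hypothetical, and real analyses would typically use dedicated statistical software rather than hand-rolled code.

```python
from statistics import pvariance

def cronbachs_alpha(scores):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of totals).

    `scores` is a list of rows, one per student, each row holding that
    student's score on every item.
    """
    k = len(scores[0])                       # number of items
    items = list(zip(*scores))               # transpose: one tuple per item
    item_var = sum(pvariance(item) for item in items)
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Four students answer three dichotomously scored items (1 = correct, 0 = incorrect).
scores = [
    [1, 1, 1],
    [1, 0, 1],
    [0, 0, 0],
    [1, 1, 0],
]
print(round(cronbachs_alpha(scores), 3))  # 0.632
```

Higher alpha values indicate that the items hang together consistently, i.e., that the total score contains relatively little random error.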

In a never-ending quest for enhanced approaches to measuring latent (i.e., unobserved) constructs, psychometricians have developed three major theories of assessment over the last century: Classical Test Theory (CTT), Generalizability Theory (G Theory), and Item-response Theory (IRT). Each of these frameworks seeks to minimize error and therefore enhance data reliability. They differ, however, in their approach.

Image of venn diagram between true ability and observed score

CTT (Traub, 1997) conceptualizes error as a unitary latent construct: any part of an observed test score that is not a participant’s true score is considered error. CTT provides no way to further decompose error, which is why G Theory was developed shortly after.

G Theory (Brennan, 2001) provides researchers the ability to determine how much of an observed score is attributable to different sources of error, such as the occasion of measurement or the item format. To accomplish this, a researcher must first define two things: the universe of admissible observations and the universe of generalization.

IRT (Hambleton et al., 1991; Hambleton & Swaminathan, 1985) works from a slightly different perspective. CTT and G Theory consider the test as a whole, but as its name suggests, IRT focuses more specifically on individual items. Thus, from an IRT perspective, reliable data are produced by items that can effectively differentiate lower from higher performers. An item’s ability to differentiate participants by ability level directly impacts the amount of information provided by that item, and item information gives the researcher a sense of the item’s quality (i.e., higher item information means higher item quality).
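The IRT notion of item information can be sketched briefly under the two-parameter logistic (2PL) model, one common IRT model. The parameter names follow standard IRT notation (a for discrimination, b for difficulty), and the two example items are hypothetical:

```python
import math

def p_correct(theta, a, b):
    """2PL model: probability of a correct response at ability level theta,
    for an item with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information for a 2PL item: I(theta) = a^2 * P * (1 - P).

    Information peaks where theta equals the item difficulty b, and
    more discriminating items (larger a) provide more information."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

# A highly discriminating item (a = 2.0) vs. a weakly discriminating one
# (a = 0.8), both of medium difficulty (b = 0.0), evaluated at theta = 0.
print(item_information(0.0, 2.0, 0.0))  # 1.0   (= 2^2 * 0.5 * 0.5)
print(item_information(0.0, 0.8, 0.0))  # 0.16  (= 0.8^2 * 0.5 * 0.5)
```

The sharper item yields over six times the information at its difficulty level, which is exactly the sense in which IRT says it produces more reliable data.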

This blog post is far too short to sufficiently describe these three theories of assessment in any detail, but I suggest that interested readers explore the references section for more detailed overviews of each theory. Even a basic understanding of these assessment theories can help researchers design better measures and evaluate the quality of existing measures. To the extent we can reduce error in measurement, we can simultaneously enhance reliability in education research.

References

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for Educational and Psychological Testing. American Educational Research Association.

Brennan, R. L. (2001). Generalizability Theory. Springer-Verlag Berlin Heidelberg. https://doi.org/10.1007/978-1-4757-3456-0

Hambleton, R. K., & Swaminathan, H. (1985). Item Response Theory: Principles and Applications. Springer Science+Business Media.

Hambleton, R. K., Swaminathan, H., & Rogers, H. J. (1991). Fundamentals of Item Response Theory. Sage Publications, Inc.

Traub, R. E. (1997). Classical Test Theory in historical perspective. Educational Measurement: Issues and Practice, 8–14.

Podcasts! Considering K-5 Computing Education Practices

We’re super excited to announce our long-awaited series on K-5 computing education practices!

Our podcasts provide insights from discussions among teachers as they consider meaningful research and how they could adopt new practices into their classrooms.

For educators, these podcasts are meant to provide you with information on various research studies that may be suitable for your classrooms.

For researchers, the computing education podcasts are meant to incite reflection and further inquiry into how teachers interpret research in the context of their classrooms. As we continue our work closing the gap between researchers and practitioners, these discussions can give researchers additional perspectives that they may not already have.

Special thanks to Association of Computing Machinery (ACM) SIGCSE for funding to support the computing education podcasts through a Special Projects grant! We also thank our additional sponsors, Amazon Future Engineer and Siegel Family Endowment, who support our outreach efforts at IACE.

And special thanks to Emily Thomforde for tirelessly leading the discussion groups every week for many years. Shout out to Jordan Williamson (IACE), Emily Nelson (IACE), and Monica McGill (IACE) for creating, modifying, and reviewing the podcasts and briefs!

Whether you’re an educator or a researcher, we hope you enjoy the podcasts!

Join Us at the 2024 ACM SIGCSE Technical Symposium

We’re always excited to attend the ACM SIGCSE Technical Symposium, and this year is no exception!

You can catch IACE team members (Laycee Thigpen, Joe Tise, Julie Smith, and Monica McGill) at the following events. (Pre-symposium events are invitation only.)

For all the rest, please stop by and say Hi! We’d love to hear about research you’re engaged in that supports learning for all students!

| Day/Time | Event | Authors/Presenters | Location |
| --- | --- | --- | --- |
| Tuesday, All day | Reimagining CS Pathways (Invitation only) | Bryan Twarek and Jake Karossel (CSTA), Julie Smith and Monica McGill (IACE) | Off-site |
| Wednesday, All day | Reimagining CS Pathways (Invitation only) | Bryan Twarek and Jake Karossel (CSTA), Julie Smith and Monica McGill (IACE) | Off-site |
| Wednesday, 1-5pm PST | Conducting High-quality Education Research in Computing Designed to Support CS for All (Invitation only) | Monica McGill (IACE), Jennifer Rosato (Northern Lights Collaborative), Leigh Ann DeLyser (CSforALL), Sarah Heckman (North Carolina State University), Bella Gransbury White (North Carolina State University) | Meeting Room E146 |
| Thursday, 1:45-3pm PT | Unlocking Excellence in Educational Research: Guidelines for High-Quality Research that Promotes Learning for All | Monica McGill (IACE), Sarah Heckman (North Carolina State University), Michael Liut (University of Toronto Mississauga), Ismaila Temitayo Sanusi (University of Eastern Finland), Claudia Szabo (The University of Adelaide) | Portland Ballroom 252 |
| Thursday, 3:45-5pm PT | The NSF Project Showcase: Building High-Quality K-12 CS Education Research Across an Outcome Framework of Equitable Capacity, Access, Participation, and Experience | Monica McGill (IACE) | Meeting Rooms E143-144 |
| Friday, 10am PT | The Landscape of Disability-Related K-12 Computing Education Research (poster) | Julie Smith (IACE), Monica McGill (IACE) | Exhibit Hall E |
| Friday, 10:45am PT | Piloting a Diagnostic Tool to Measure AP CS Principles Teachers’ Knowledge Against CSTA Teacher Standard 1 | Monica McGill (IACE), Joseph Tise (IACE), Adrienne Decker (University at Buffalo) | Meeting Room D136 |
| Saturday, 10am PT | Reimagining CS Courses for High School Students (poster) | Julie Smith (IACE), Bryan Twarek (CSTA), Monica McGill (IACE) | Exhibit Hall E |

To learn more about SIGCSE, click the link below.

https://sigcse2024.sigcse.org/

Key Levers for Advancing K-12 Computer Science Education

Computer science has become an essential skill for K-12 students. As the demand for computing jobs grows, there is a pressing need to advance K-12 CS education across the nation. To achieve this, there are several key levers that can advance change, including policy changes, teacher training and development, increased access to technology and resources, and partnerships between educational institutions, non-profits, and industry leaders. By leveraging these, we can equip students with the skills they need to thrive in an increasingly digital world and drive innovation and progress.

Under funding and direction from the CME Group Foundation, we took a look at K-12 computer science education in Chicago and Illinois, in context with efforts across the United States. As a result of this work, we are pleased to announce our most recent publication on this work, Key Levers for Advancing K-12 CS Education in Chicago, in Illinois and in the United States.

In particular, the Foundation funded this study to understand:

  • How the landscape of K-12 CS education in Chicago changed from 2013 to 2022, with a focus on public schools, out-of-school-time (OST) programs, and research for evidence of progress.
  • The current strengths and opportunities of the K-12 CS education landscape in Chicago, in Illinois, and nationally.
  • How the support from the Foundation since it first started funding K-12 CS education in Chicago in 2015 has influenced the CS education landscape.

This qualitative study, conducted by Laycee Thigpen, Annamaria Lu, Monica McGill (all from the Institute for Advancing Computing Education), and Eva Giglio (CME Group Foundation), involved conducting 49 interviews (57 people in total). The interviewees represented a wide variety of organizations and voices.

Key findings for Chicago Public Schools (CPS) include the need to:

  • Support consistency and fidelity across schools
  • Continue to address the teacher shortage and to support the need for teacher CS professional development
  • Support research within CPS to inform decision-making to improve equitable outcomes for all students
  • Support workforce pathways for high school students
  • Support expanded K-8 CS, including integration into other subject areas
  • Support the design of scaffolded, standards-based curriculum

Specific to out-of-school-time programs, we found a need to support the creation, implementation, and maintenance of ways to search for CS learning opportunities, and for program providers to engage in partnerships with schools.

The report also details more findings for Illinois, some of which are similar and others that differ to meet the unique needs of rural communities.

We look forward to hearing your thoughts on the report!

Learning more about Teaching AI

By Joe Tise, PhD, Senior Education Researcher, IACE

Driving into the heart of Washington, D.C. is a unique experience. Mixed with thousands of business people, sight-seers, and the occasional politician shuffling to and fro is a sense of optimism for what could be. Every significant social, policy, or economic movement with national (and often international) influence has gone through our nation’s capital.

As I arrived at the Teaching Inclusive AI in Computer Science event on the White House grounds, co-hosted by the National Economic Council and the U.S. National Science Foundation (NSF) and organized by CSTA, I wondered how the CS education landscape would look 10 years from now, and how the presenters and attendees would prove pivotal in shaping its form. It was clear by the end of the event that everyone there shared two core characteristics: a deep passion for CS education and an unwavering optimism for the future.

The event kicked off with speeches from several members of the Biden-Harris Administration (e.g., Chirag Parikh [Deputy Assistant to the President and Executive Secretary, National Space Council], Ami Fields-Meyer [Senior Policy Advisor, Office of the Vice President], and Seeyew Mo [Assistant National Cyber Director, Office of the National Cyber Director]). Each emphasized the importance of a CS- and AI-literate citizenry and discussed how the administration plans to support CS and AI education. One of the highlighted efforts was an executive order signed by President Biden targeting safe and trustworthy AI development.

To make the policy discussion more concrete, we next heard from a panel of four CS teachers from across the country who represented both middle and high school level CS. They discussed how they have seen CS, and particularly AI, influence many subjects in school beyond standalone CS courses. One teacher pointed out that their school district adopted a total ban on generative AI tools in an attempt to prevent academic misconduct. The teachers agreed that while the district’s motivation may be noble, the ban would likely disadvantage the students in the long run because they would not have the opportunity to learn how generative AI works—and more importantly, learn about its limitations. The panel discussion ended by acknowledging the continued struggle to recruit and retain CS teachers at both middle and high school levels.

Finally, the plethora of work remaining requires funding. To this point, Margaret Martonosi (NSF Chief Operating Officer) and Erwin Gianchandani (Assistant Director of the CISE directorate) discussed how NSF as a whole, and the CISE directorate in particular, is prioritizing CS education research, referencing the recently released Dear Colleague Letter: Advancing education for the future AI workforce (EducateAI).

Suffice it to say, I left the White House grounds even more inspired and hopeful for the future of CS education, and for our nation as a whole. We have the vision, we have the motivation, and the groundwork is laid. Now we need to act. You can read a summary readout of the event on the White House website here.

Constructivism and Sociocultural Theory

Behaviorism highlighted the influence of the environment, information processing theory essentially ignored it, and social-cognitive theory tried to strike a balance between the two by acknowledging its potential influence. Constructivist (also known as sociocultural) theorists take it a step further. 

According to constructivist theories (which can either focus more on individual or on societal construction of knowledge; Phillips, 1995), knowledge and learning are inherently dependent on the cultural context (i.e., environment) to which one belongs. That may sound like repackaged behaviorism, but “the environment” to a constructivist goes far beyond stimuli, rewards, and punishments. 

In constructivist theories of learning, “the environment” includes our family dynamics, friends, broad cultures and specific subcultures of groups with which we associate, and numerous other factors which all influence our learning. Although all constructivist theories may not agree on one single definition of learning, for our purposes a basic definition suffices: learning is development through internalization of the content and tools of thinking within a cultural context.

Constructivist theories posit that one’s culture provides the tools of thinking, which in turn influence how we learn—or “construct” knowledge. Perhaps the best-known constructivist theorist is Lev S. Vygotsky. Vygotsky authored many papers and two books, which were eventually published together posthumously as a single book titled Mind in Society.

In this collection of Vygotsky’s work, concepts such as internalization and the zone of proximal development (ZPD) are introduced (Vygotsky, 1978). Briefly, constructivist learning theories posit that something is learned when a person internalizes its meaning—internalization is an independent developmental achievement. 

Further, the ZPD encompasses tasks that a learner cannot yet complete independently but which they can complete with help (see Figure below). Constructivist theorists target the ZPD for optimal learning, and if successfully done, learners construct their own meaning of the new information, thereby internalizing it.

Image of the zone of proximal development

Vygotsky asserted that thoughts are words. That is, thoughts are inextricably tied to the language we use. In his view, one cannot “think” a thought without first having a word for the thought. For example, one cannot think about a sandwich if they have not first internalized the meaning of the word sandwich. In this way, Vygotsky (and by extension many constructivist theorists) viewed words as the “tools” for cognition and thus higher-order thinking.

Strengths

  • Constructivist theories are more attentive to learners’ past experiences and cultural contexts than the other major learning theories discussed in this blog series. Because of that, they can provide solid theoretical footing to many research projects focused on addressing disparities.
  • The zone of proximal development, internalization, and consideration of words as tools of thought are compelling concepts introduced by constructivist theories. 
  • The focus on culture and words as tools of thought in constructivist theories can help explain the variety of cognition patterns observed across cultures (e.g., different arithmetic strategies across cultures).

Limitations

  • Although constructivist theories prove strong in many aspects, they are not as applicable to other forms of learning. For example, constructivists are almost exclusively concerned with higher-order learning, and they largely cannot account for learning exhibited by animals (like behaviorist theories can). 
  • Constructivist theories of learning lie on a scale, from more radical to more conservative, regarding the influence of a person’s individual history. While this is not necessarily a bad thing, it does muddy the waters when one refers simply to “constructivism,” which in fact encompasses a very wide swath of perspectives on learning.

Potential Use Cases in Computing Education

  • Research: A researcher may attempt to identify first-year computer science students’ individual zones of proximal development related to coding, and see if teaching coding within this zone enhances students’ motivation, interest, and intention to pursue the content further compared to teaching content outside the zone.
  • Practice: Teachers need to ensure all students have (accurately) internalized the meanings of key vocabulary terms/concepts related to their content (e.g., what “objects” or “classes” really are) before they can rely on students to properly use the concept in higher-order, complex problem solving.

Influential theorists:

  • Lev Semyonovich Vygotsky (1896 – 1934)
  • John Dewey (1859 – 1952)
  • Jean Piaget (1896 – 1980)

Recommended seminal works:

Cobb, P. (1994). Where Is the Mind? Constructivist and Sociocultural Perspectives on Mathematical Development. Educational Researcher, 23(7), 13–20. https://doi.org/10.3102/0013189X023007013

Phillips, D. C. (1995). The Good, the Bad, and the Ugly: The Many Faces of Constructivism. Educational Researcher, 24(7), 5–12. https://doi.org/10.3102/0013189X024007005

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press. https://books.google.com/books?id=Irq913lEZ1QC 

References

Phillips, D. C. (1995). The Good, the Bad, and the Ugly: The Many Faces of Constructivism. Educational Researcher, 24(7), 5–12. https://doi.org/10.3102/0013189X024007005

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press. https://books.google.com/books?id=Irq913lEZ1QC  

Social Cognitive Theory

Presented by Joe Tise, PhD, Educational Psychology & Senior Education Researcher at CSEdResearch.org

In light of these two influential (albeit largely opposing) theories of learning, we see that both theories account for unique aspects of learning despite their limitations. Still, neither behaviorist nor information-processing theories account for one prominent form of learning, with which all people have experience—learning by observation. We may rightfully wonder then: is there a new theory that incorporates the strongest elements of each and can explain how humans learn through observing others? Enter: Social-cognitive theory (SCT; Bandura, 1986). 

SCT is largely attributed to the prominent psychologist Albert Bandura, but many other theorists have since produced high quality research that has supported and refined the theory, especially within educational contexts (e.g., Barry Zimmerman, Paul Pintrich, Dale Schunk). 

SCT posits that human behavior is one part of a triadic, reciprocal relationship with the environment (think behaviorism) and personal factors (where IPT is represented). The figure below represents this relationship visually.

[Figure: A triangle with behavior, personal factors, and environment at each corner, connected by bidirectional arrows.]

Whereas behaviorist theories of learning drew only a one-way connection between environment and behavior (i.e., environment determines behavior) and IPT essentially stayed within the cognitive system entirely, SCT asserts that the environment, behavior, and personal factors (e.g., ways of thinking, attitudes, emotions, metacognition) all influence and partially determine each other. 

Support for this assertion comes from decades of research kicked off by the seminal Bobo doll experiments at Stanford (Bandura et al., 1961, 1963). These groundbreaking experiments were some of the first to show empirically that humans do learn through observation.

This evidence directly contradicted a basic tenet of behaviorism—that organisms only learn after the environment acts upon them directly (e.g., through direct punishment or reward). These experiments further showed that a person's behavior (in this case, children's) is influenced not just by the environment (here, the presence of an aggressive-acting adult) but also by personal factors (here, the children's gender). Thus, learning in SCT is inherently tied to the context and to learners' personal factors.

From these basic tenets, we get the SCT definition of learning: a change or potential for change in behavior or cognition, situated within specific contexts.

Strengths 

  • Social-cognitive theory is deceptively simple: it involves just three overarching components, but each component encompasses countless influential factors.
  • SCT incorporates many of the strengths of both behaviorism and IPT and extends both theories in unique ways. It is the only theory of learning that sufficiently explains observational learning. It is also highly relevant to both research and practice. 
  • SCT explains many complex human phenomena, such as self-efficacy, self-regulated learning, stereotype threat, and the influence of role models. Others have adapted SCT to other realms; in career development, for example, it is known as social cognitive career theory (SCCT; Lent et al., 1994, 2002).

Limitations

  • SCT can be a bit more abstract than behaviorism or IPT and thus the implications for practice are sometimes less clear. This is especially true for researchers and practitioners who are unfamiliar with learning theories. 
  • Further, because of its deceptive complexity, a comprehensive test of SCT within a single study is more difficult compared to behaviorism and IPT. Controlling for the numerous environmental, personal, and behavioral factors that SCT might identify as relevant would be a heavy lift for even the most well-funded research study to accommodate.

Potential Use Cases in Computing Education

  • Research: A study could investigate how characteristics of a role model (e.g., race/ethnicity, gender, age) moderate the influence that role model has on underrepresented children’s intentions to pursue a particular STEM career path.
  • Practice: Teachers, mentors, and parents should highlight examples of when a student successfully overcame a challenge, and provide verbal encouragement and/or modeling when the student faces new challenges. These behaviors will support the student's self-efficacy and, ultimately, their persistence in the domain or on the task.

Influential theorists:

  • Albert Bandura (1925 – 2021)
  • Barry J. Zimmerman (1942 – present)
  • Paul R. Pintrich (1953 – 2003)

Recommended seminal works:

Bandura, A., Ross, D., & Ross, S. A. (1961). Transmission of aggression through imitation of aggressive models. Journal of Abnormal and Social Psychology, 63(3), 575–582.

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191–215. https://doi.org/10.1037/0033-295X.84.2.191

Bandura, A. (1986). Social foundations of thought & action: A social cognitive theory. Pearson Education.

Bandura, A. (1989). Human agency in social cognitive theory. American Psychologist, 44(9), 1175–1184. https://doi.org/10.1037/0003-066X.44.9.1175

References

Bandura, A., Ross, D., & Ross, S. A. (1961). Transmission of aggression through imitation of aggressive models. Journal of Abnormal and Social Psychology, 63(3), 575–582.

Bandura, A., Ross, D., & Ross, S. A. (1963). Imitation of film-mediated aggressive models. The Journal of Abnormal and Social Psychology, 66(1), 3–11. https://doi.org/10.1037/h0048687

Bandura, A. (1986). Social foundations of thought & action: A social cognitive theory. Pearson Education.

Lent, R. W., Brown, S. D., & Hackett, G. (1994). Toward a Unifying Social Cognitive Theory of Career and Academic Interest, Choice, and Performance. Journal of Vocational Behavior, 45(1), 79–122. https://doi.org/10.1006/jvbe.1994.1027

Lent, R. W., Brown, S. D., & Hackett, G. (2002). Social Cognitive Career Theory. In D. Brown (Ed.), Career Choice and Development (4th ed., pp. 255–311). Jossey-Bass.

Series