Category Archive: Research

Validity and Reliability in Qualitative Research

Post prepared and written by Joe Tise, PhD, Senior Education Researcher

In this series we have explored the many ways in which evidence of validity can be gathered and reliable data can be produced. To be sure, the bulk of this series focused on quantitative research, but any mixed-methods or qualitative researcher will tell you that quantitative research provides only one piece of the puzzle.

Qualitative research is needed to answer questions not suited to quantitative methods, and validity and reliability need to be considered in qualitative research too. Qualitative research includes numerous methodological approaches, such as individual and focus group interviews, naturalistic observations, artifact analysis, and even open-ended survey questions. Unlike quantitative research, which relies on instruments such as forms, surveys, tests, and institutional data, in qualitative research the researcher often is both the data collection mechanism and the analysis mechanism.

Researchers usually don’t run statistical analyses on qualitative data; instead, a researcher typically analyzes the data, extracts meaning from it, and answers a research question from that meaning. Though the goal is similar to quantitative research, some of the analysis methods can be viewed as more subjective.

So, how can we know that results obtained from a qualitative analysis reflect some truth, and not the researcher’s personal biases, experiences, or lenses?

Reliability and validity are equally important to consider in qualitative research. Ways to enhance validity in qualitative research include:

  • Use multiple analysts
  • Create/maintain audit trails
  • Conduct member checks
  • Include positionality statements
  • Solicit peer review of analytical approach
  • Triangulate findings via multiple data sources
  • Search for and discuss negative cases (i.e., those which refute a theme)

Building reliability can include one or more of the following:

  • Clearly define your codes and criteria for applying them
  • Use detailed transcriptions which include things like pauses, crosstalk, and non-word verbal expressions
  • Train coders on a common set of data
  • Ensure coders are consistent with each other before coding the rest of the data
  • Periodically reassess interrater agreement/reliability
  • Use high-quality recording devices

The most well-known approaches to reliability in qualitative education research are inter-rater reliability and consensus coding. I want to distinguish between two common measures of inter-rater reliability: percent agreement and Cohen’s Kappa.

Percent agreement refers to the percentage of coding instances in which two raters assign the same code to a common “piece” of data. Because it is a simple percentage, it is intuitive to understand. But it does not account for chance: in any deductive coding framework (i.e., when all possible codes are defined in advance), there is some probability that two coders will apply the same code by chance without actually “seeing” the same thing in the data.

By contrast, Cohen’s Kappa is designed to parse out the influence of chance agreement, and for this reason Cohen’s Kappa will always be smaller than the percent agreement for a given dataset. Many qualitative data analysis software packages (e.g., NVivo) will calculate both percent agreement and Cohen’s Kappa.
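To make the distinction concrete, here is a minimal Python sketch of both measures for two raters coding the same eight excerpts. The raters, codes, and data below are hypothetical, and real studies would typically rely on their analysis software's built-in calculations.

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Proportion of items to which both raters assigned the same code."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's Kappa: agreement corrected for chance.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is the agreement expected by chance from each rater's code frequencies.
    """
    n = len(r1)
    p_o = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    # Expected chance agreement: product of each rater's marginal proportions
    p_e = sum((c1[c] / n) * (c2[c] / n) for c in set(r1) | set(r2))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two raters to eight excerpts
rater1 = ["A", "A", "B", "B", "A", "C", "B", "A"]
rater2 = ["A", "B", "B", "B", "A", "C", "A", "A"]

print(percent_agreement(rater1, rater2))          # 0.75
print(round(cohens_kappa(rater1, rater2), 3))     # 0.579
```

Notice that Kappa comes out well below the raw 75% agreement once each rater's marginal code frequencies are used to estimate chance agreement.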

In consensus coding, multiple raters code the same data, discuss the codes that may apply, and decide together how to code the data. With consensus coding, the need for inter-rater agreement/reliability metrics is circumvented, because by definition, you will always have 100% agreement/reliability. The major downside of consensus coding is, of course, the time and effort needed to engage in it. With large sets of qualitative data, consensus coding may not be feasible.

For a deeper dive into these topics, there are many excellent textbooks that explore the nuances of qualitative validity and reliability. Below, you’ll find a selection of recommended resources, as well as others that provide detailed insights into strengthening qualitative research methods.

Resources

Corbin, J., & Strauss, A. (2015). Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory (4th ed.). Sage Publications.
Creswell, J. W., & Báez, J. C. (2021). 30 Essential Skills for the Qualitative Researcher (2nd ed.). Sage Publications.
Creswell, J. W., & Poth, C. N. (2018). Qualitative inquiry and research design: Choosing among five approaches. Sage Publications.
Saldaña, J. (2013). An introduction to codes and coding. In The coding manual for qualitative researchers (pp. 1–40). Sage Publications.

Reliability in Education Research: A Deeper Examination

Presented by Joe Tise, PhD, Senior Education Researcher at IACE

We can view reliability in education research as inherently linked to measurement error. For example, to be confident in claims we make about something we assess, the assessment must produce reliable data. A thermometer that says it’s 72 degrees Fahrenheit one minute, but 56 degrees the next would not instill confidence. Likewise, a measure of a student’s computational thinking skills that places the student in the top 1% of all students one day, but the next places them in the bottom 5% would not be very useful. Clearly, any measure that produces data with that much variation likely contains a lot of error in the measurement (assuming no true change in the measured variable occurred).

The Standards for Educational and Psychological Testing defines reliability as “the more general notion of consistency of the scores across instances of the testing procedure” (American Educational Research Association et al., 2014, p. 33). Note in this definition that reliability refers to the data, not the test itself. Tests that produce data that are not reliable contain more error than tests that produce highly reliable data.
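As a toy illustration of "consistency of scores across instances," test-retest reliability is often estimated as the Pearson correlation between two administrations of the same measure. The scores below are hypothetical:

```python
import math

def pearson_r(x, y):
    """Pearson correlation: a common test-retest reliability estimate."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for five students on two administrations of one test
time1 = [85, 70, 92, 60, 78]
time2 = [83, 72, 90, 58, 80]

print(round(pearson_r(time1, time2), 3))  # close to 1.0: consistent scores
```

A correlation near 1.0 suggests the test produced consistent (low-error) data across the two occasions; the erratic thermometer above would score far lower.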

In a never-ending quest for enhanced approaches to measuring latent (i.e., unobserved) constructs, psychometricians have developed three major theories of assessment over the last century: Classical Test Theory (CTT), Generalizability Theory (G Theory), and Item Response Theory (IRT). Each of these frameworks seeks to minimize error and thereby enhance data reliability. They differ, however, in their approach.

[Figure: Venn diagram showing the overlap between true ability and observed score]

CTT (Traub, 1997) conceptualizes error as a unitary latent construct: any part of an observed test score that is not the participant’s true score is considered error. CTT provides no way to further decompose error, which is why G Theory was developed.

G Theory (Brennan, 2001) provides researchers the ability to determine how much of an observed score is attributable to different sources of error, such as the occasion of measurement or the item format. To accomplish this, a researcher must first define two things: the universe of admissible observations and the universe of generalization.

IRT (Hambleton et al., 1991; Hambleton & Swaminathan, 1985) works from a slightly different perspective. CTT and G Theory consider the test as a whole, but as its name suggests, IRT focuses on individual items. From an IRT perspective, reliable data are produced by items that can effectively differentiate lower from higher performers. An item’s ability to differentiate participants by ability level directly determines the amount of information provided by that item, and item information gives the researcher a sense of the item’s quality (higher item information means higher item quality).
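The item-information idea can be sketched with the two-parameter logistic (2PL) IRT model, one common formulation. The item parameter values below are hypothetical:

```python
import math

def p_correct(theta, a, b):
    """2PL model: probability of a correct response given ability theta,
    item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information for a 2PL item: I(theta) = a^2 * P * (1 - P).
    Information peaks where theta equals the item difficulty b."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

# Two hypothetical items with equal difficulty but different discrimination
high_disc = dict(a=2.0, b=0.0)
low_disc = dict(a=0.5, b=0.0)

for theta in (-1.0, 0.0, 1.0):
    print(theta,
          round(item_information(theta, **high_disc), 3),
          round(item_information(theta, **low_disc), 3))
```

At every ability level the high-discrimination item yields more information, which is the 2PL's way of saying it more sharply separates lower from higher performers.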

This blog post is far too short to sufficiently describe these three theories of assessment in any detail, but I suggest that interested readers explore the references section for more detailed overviews of each theory. Even a basic understanding of these assessment theories can help researchers design better measures and evaluate the quality of existing measures. To the extent we can reduce error in measurement, we can simultaneously enhance reliability in education research.

References

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for Educational and Psychological Testing. American Educational Research Association.

Brennan, R. L. (2001). Generalizability Theory. Springer-Verlag Berlin Heidelberg. https://doi.org/10.1007/978-1-4757-3456-0

Hambleton, R. K., & Swaminathan, H. (1985). Item Response Theory: Principles and Applications. Springer Science+Business Media.

Hambleton, R. K., Swaminathan, H., & Rogers, H. J. (1991). Fundamentals of Item Response Theory. Sage Publications, Inc.

Traub, R. E. (1997). Classical Test Theory in historical perspective. Educational Measurement: Issues and Practice, 8–14.

Podcasts! Considering K-5 Computing Education Practices

We’re super excited to announce our long-awaited series on K-5 computing education practices!

Our podcasts provide insights from discussions among teachers as they consider meaningful research and how they could adopt new practices into their classrooms.

For educators, these podcasts are meant to provide you with information on various research studies that may be suitable for your classrooms.

For researchers, the podcasts are meant to incite reflection and further inquiry into how teachers interpret research in the context of their classrooms. As we continue our work closing the gap between researchers and practitioners, these discussions can give researchers perspectives that they may not already have.

Special thanks to the Association for Computing Machinery (ACM) SIGCSE for funding to support this work through a Special Projects grant! We also thank our additional sponsors, Amazon Future Engineer and Siegel Family Endowment, who support our outreach efforts at IACE.

And special thanks to Emily Thomforde for tirelessly leading the discussion groups every week for many years. Shout out to Jordan Williamson (IACE), Emily Nelson (IACE), and Monica McGill (IACE) for creating, modifying, and reviewing the podcasts and briefs!

Either way, we hope you enjoy the podcasts!

Join Us at the 2024 ACM SIGCSE Technical Symposium

We’re always excited to attend the ACM SIGCSE Technical Symposium, and this year is no exception!

You can catch IACE team members (Laycee Thigpen, Joe Tise, Julie Smith, and Monica McGill) at the following events. (Pre-symposium events are invitation only.)

For all the rest, please stop by and say Hi! We’d love to hear about research you’re engaged in that supports learning for all students!

  • Tuesday, all day: Reimagining CS Pathways (invitation only). Bryan Twarek and Jake Karossel (CSTA), Julie Smith and Monica McGill (IACE). Off-site.
  • Wednesday, all day: Reimagining CS Pathways (invitation only). Bryan Twarek and Jake Karossel (CSTA), Julie Smith and Monica McGill (IACE). Off-site.
  • Wednesday, 1–5pm PT: Conducting High-quality Education Research in Computing Designed to Support CS for All (invitation only). Monica McGill (IACE), Jennifer Rosato (Northern Lights Collaborative), Leigh Ann DeLyser (CSforALL), Sarah Heckman (North Carolina State University), Bella Gransbury White (North Carolina State University). Meeting Room E146.
  • Thursday, 1:45–3pm PT: Unlocking Excellence in Educational Research: Guidelines for High-Quality Research that Promotes Learning for All. Monica McGill (IACE), Sarah Heckman (North Carolina State University), Michael Liut (University of Toronto Mississauga), Ismaila Temitayo Sanusi (University of Eastern Finland), Claudia Szabo (The University of Adelaide). Portland Ballroom 252.
  • Thursday, 3:45–5pm PT: The NSF Project Showcase: Building High-Quality K-12 CS Education Research Across an Outcome Framework of Equitable Capacity, Access, Participation, and Experience. Monica McGill (IACE). Meeting Rooms E143–144.
  • Friday, 10am PT: The Landscape of Disability-Related K-12 Computing Education Research (poster). Julie Smith (IACE), Monica McGill (IACE). Exhibit Hall E.
  • Friday, 10:45am PT: Piloting a Diagnostic Tool to Measure AP CS Principles Teachers’ Knowledge Against CSTA Teacher Standard 1. Monica McGill (IACE), Joseph Tise (IACE), Adrienne Decker (University at Buffalo). Meeting Room D136.
  • Saturday, 10am PT: Reimagining CS Courses for High School Students (poster). Julie Smith (IACE), Bryan Twarek (CSTA), Monica McGill (IACE). Exhibit Hall E.

Key Levers for Advancing K-12 Computer Science Education in Chicago, in Illinois, and in the United States

Computer science has become an essential skill for K-12 students. As the demand for computing jobs grows, there is a pressing need to advance K-12 CS education across the nation. To achieve this, there are several key levers that can advance change, including policy changes, teacher training and development, increased access to technology and resources, and partnerships between educational institutions, non-profits, and industry leaders. By leveraging these, we can equip students with the skills they need to thrive in an increasingly digital world and drive innovation and progress.

Under funding and direction from the CME Group Foundation, we took a look at K-12 computer science education in Chicago and Illinois, in context with efforts across the United States. As a result of this work, we are pleased to announce our most recent publication, Key Levers for Advancing K-12 CS Education in Chicago, in Illinois and in the United States.

In particular, the Foundation funded this study to understand:

  • How the landscape of K-12 CS education in Chicago has changed from 2013 to 2022, with a focus on public schools, out-of-school-time (OST) programs, and research for evidence of progress.
  • The current strengths and opportunities of the K-12 CS education landscape in Chicago, in Illinois, and nationally.
  • How the support from the Foundation since it first started funding K-12 CS education in Chicago in 2015 has influenced the CS education landscape.

This qualitative study, conducted by Laycee Thigpen, Annamaria Lu, Monica McGill (all from the Institute for Advancing Computing Education), and Eva Giglio (CME Group Foundation), involved conducting 49 interviews (57 people in total). The interviewees represented a wide variety of organizations and voices.

Key findings for Chicago Public Schools (CPS) include the need to:

  • Support consistency and fidelity across schools
  • Continue to address the teacher shortage and to support the need for teacher CS professional development
  • Support research within CPS to inform decision-making to improve equitable outcomes for all students
  • Support workforce pathways for high school students
  • Support expanded K-8 CS, including integration into other subject areas
  • Support the design of scaffolded, standards-based curriculum

Specific to out-of-school-time programs, we found a need to support the creation, implementation, and maintenance of ways to search for CS learning opportunities, and for program providers to engage in partnerships with schools.

The report also details more findings for Illinois, some of which are similar and others of which differ to meet the unique needs of rural communities.

We look forward to hearing your thoughts on the report!

“Teaching Inclusive AI in Computer Science” Event

By Joe Tise, PhD, Senior Education Researcher, CSEdResearch.org

Driving into the heart of Washington, D.C. is a unique experience. Mixed in with the thousands of business people, sight-seers, and the occasional politician shuffling to and fro is a sense of optimism for what could be. Every significant social, policy, or economic movement that had national—and often international—influence went through our nation’s capital.

As I arrived at the Teaching Inclusive AI in Computer Science event co-hosted by the National Economic Council and U.S. National Science Foundation (NSF) (and organized by CSTA) on the White House grounds, I wondered how the CS education landscape would look 10 years from now, and how the presenters and attendees would prove pivotal in shaping its form. It was clear by the end of the event that everyone there shared two core characteristics: a deep passion for CS education and an unwavering optimism for the future.

The event kicked off with speeches from several members of the Biden-Harris Administration (e.g., Chirag Parikh [Deputy Assistant to the President and Executive Secretary, National Space Council], Ami Fields-Meyer [Senior Policy Advisor, Office of the Vice President], and Seeyew Mo [Assistant National Cyber Director, Office of the National Cyber Director]). Each emphasized the importance of a CS- and AI-literate citizenry and discussed how the Biden-Harris administration plans to support CS and AI education. One of the highlighted efforts was an executive order signed by President Biden targeting safe and trustworthy AI development.

To make the policy discussion more concrete, we next heard from a panel of four CS teachers from across the country who represented both middle and high school level CS. They discussed how they have seen CS, and particularly AI, influence many subjects in school beyond standalone CS courses. One teacher pointed out that their school district adopted a total ban on generative AI tools in an attempt to prevent academic misconduct. The teachers agreed that while the district’s motivation may be noble, the ban would likely disadvantage the students in the long run because they would not have the opportunity to learn how generative AI works—and more importantly, learn about its limitations. The panel discussion ended by acknowledging the continued struggle to recruit and retain CS teachers at both middle and high school levels.

Finally, the plethora of work remaining requires funding. To this point, Margaret Martonosi (NSF Chief Operating Officer) and Erwin Gianchandani (Assistant Director of the CISE directorate) discussed how NSF as a whole, and the CISE directorate in particular, is prioritizing CS education research, with reference to the recently released Dear Colleague letter, Advancing Education for the Future AI Workforce (EducateAI).

Suffice it to say, I left the White House grounds even more inspired and hopeful for the future of CS education—and our nation as a whole. We have the vision, we have the motivation, and the groundwork is laid. Now we need to act. A summary readout of the event is available on the White House website.

Constructivism/Sociocultural

Behaviorism highlighted the influence of the environment, information processing theory essentially ignored it, and social-cognitive theory tried to strike a balance between the two by acknowledging its potential influence. Constructivist (also known as sociocultural) theorists take it a step further. 

According to constructivist theories (which can either focus more on individual or on societal construction of knowledge; Phillips, 1995), knowledge and learning are inherently dependent on the cultural context (i.e., environment) to which one belongs. That may sound like repackaged behaviorism, but “the environment” to a constructivist goes far beyond stimuli, rewards, and punishments. 

In constructivist theories of learning, “the environment” includes our family dynamics, friends, broad cultures and specific subcultures of groups with which we associate, and numerous other factors which all influence our learning. Although all constructivist theories may not agree on one single definition of learning, for our purposes a basic definition suffices: learning is development through internalization of the content and tools of thinking within a cultural context.

Constructivist theories posit that one’s culture provides the tools of thinking, which in turn influence how we learn—or “construct” knowledge. Perhaps the best-known constructivist theorist is Lev S. Vygotsky. Vygotsky authored many papers and two books, which were eventually published together posthumously as a single volume titled Mind in Society.

In this collection of Vygotsky’s work, concepts such as internalization and the zone of proximal development (ZPD) are introduced (Vygotsky, 1978). Briefly, constructivist learning theories posit that something is learned when a person internalizes its meaning—internalization is an independent developmental achievement. 

Further, the ZPD encompasses tasks that a learner cannot yet complete independently but which they can complete with help (see Figure below). Constructivist theorists target the ZPD for optimal learning, and if successfully done, learners construct their own meaning of the new information, thereby internalizing it.

[Figure: The zone of proximal development]

Vygotsky asserted that thoughts are words. That is, thoughts are inextricably tied to the language we use. In his view, one cannot “think” a thought without first having a word for it. For example, one cannot think about a sandwich if they have not first internalized the meaning of the word sandwich. In this way, Vygotsky (and by extension many constructivist theorists) viewed words as the “tools” for cognition and thus higher-order thinking.

Strengths

  • Constructivist theories are more attentive to learners’ past experiences and cultural contexts than the other major learning theories discussed in this blog series. Because of that, they can provide solid theoretical footing to many diversity, equity, and inclusion focused research projects. 
  • The zone of proximal development, internalization, and consideration of words as tools of thought are compelling concepts introduced by constructivist theories. 
  • The focus on culture and words as tools of thought in constructivist theories can help explain the variety of cognition patterns observed across cultures (e.g., different arithmetic strategies across cultures).

Limitations

  • Although constructivist theories prove strong in many aspects, they are not as applicable to other forms of learning. For example, constructivists are almost exclusively concerned with higher-order learning, and they largely cannot account for learning exhibited by animals (like behaviorist theories can). 
  • Constructivist theories of learning lie on a scale, from more radical to more conservative, regarding the influence of a person’s individual history. While this is not necessarily a bad thing, it does muddy the waters when one refers simply to “constructivism,” which in fact encompasses a very wide swath of perspectives on learning.

Potential Use Cases in Computing Education

  • Research: A researcher may attempt to identify first-year computer science students’ individual zones of proximal development related to coding, and see if teaching coding within this zone enhances students’ motivation, interest, and intention to pursue the content further compared to teaching content outside the zone.
  • Practice: Teachers need to ensure all students have (accurately) internalized the meanings of key vocabulary terms/concepts related to their content (e.g., what “objects” or “classes” really are) before they can rely on students to properly use the concept in higher-order, complex problem solving.

Influential theorists:

  • Lev Semyonovich Vygotsky (1896 – 1934)
  • John Dewey (1859 – 1952)
  • Jean Piaget (1896 – 1980)

Recommended seminal works:

Cobb, P. (1994). Where Is the Mind? Constructivist and Sociocultural Perspectives on Mathematical Development. Educational Researcher, 23(7), 13–20. https://doi.org/10.3102/0013189X023007013

Phillips, D. C. (1995). The Good, the Bad, and the Ugly: The Many Faces of Constructivism. Educational Researcher, 24(7), 5–12. https://doi.org/10.3102/0013189X024007005

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press. https://books.google.com/books?id=Irq913lEZ1QC 

References

Phillips, D. C. (1995). The Good, the Bad, and the Ugly: The Many Faces of Constructivism. Educational Researcher, 24(7), 5–12. https://doi.org/10.3102/0013189X024007005

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press. https://books.google.com/books?id=Irq913lEZ1QC  


Social Cognitive Theory

Presented by Joe Tise, PhD, Educational Psychology & Senior Education Researcher at CSEdResearch.org

In light of these two influential (albeit largely opposing) theories of learning, we see that both theories account for unique aspects of learning despite their limitations. Still, neither behaviorist nor information-processing theories account for one prominent form of learning, with which all people have experience—learning by observation. We may rightfully wonder then: is there a new theory that incorporates the strongest elements of each and can explain how humans learn through observing others? Enter: Social-cognitive theory (SCT; Bandura, 1986). 

SCT is largely attributed to the prominent psychologist Albert Bandura, but many other theorists have since produced high quality research that has supported and refined the theory, especially within educational contexts (e.g., Barry Zimmerman, Paul Pintrich, Dale Schunk). 

SCT posits that human behavior is one part of a triadic, reciprocal relationship with the environment (think behaviorism) and personal factors (IPT is represented here). The figure below represents this relationship visually.

[Figure: Triangle with behavior, personal factors, and environment at each corner, with arrows between them]

Whereas behaviorist theories of learning drew only a one-way connection between environment and behavior (i.e., environment determines behavior) and IPT essentially stayed within the cognitive system entirely, SCT asserts that the environment, behavior, and personal factors (e.g., ways of thinking, attitudes, emotions, metacognition) all influence and partially determine each other. 

Support for this assertion comes from decades of research kicked off by the seminal bobo doll experiments at Stanford (Bandura et al., 1961, 1963). These groundbreaking experiments were some of the first to show empirically that humans do learn through observation. 

This evidence directly contradicted a basic tenet of behaviorism—that organisms only learn after the environment acts upon them directly (e.g., through direct punishment or reward). These experiments further showed that one’s behavior (children in this case) is influenced not just by the environment (in these experiments, the presence of an aggressive-acting adult), but also by personal factors (in these experiments, the children’s gender). Thus, learning in SCT is inherently tied to the context and learners’ personal factors. 

From these basic tenets, we get the SCT definition of learning: a change or potential for change in behavior or cognition, situated within specific contexts.

Strengths 

  • Social-cognitive theory is deceptively simple—it involves just three overarching components but each component represents countless influential factors. 
  • SCT incorporates many of the strengths of both behaviorism and IPT and extends both theories in unique ways. It is the only theory of learning that sufficiently explains observational learning. It is also highly relevant to both research and practice. 
  • SCT explains many complex human phenomena, such as self-efficacy, self-regulated learning, stereotype threat, and the influence of role models. Others have adapted SCT to other realms, such as career development, where it is referred to as social cognitive career theory (SCCT; Lent et al., 1994, 2002).

Limitations

  • SCT can be a bit more abstract than behaviorism or IPT and thus the implications for practice are sometimes less clear. This is especially true for researchers and practitioners who are unfamiliar with learning theories. 
  • Further, because of its deceptive complexity, a comprehensive test of SCT within a single study is more difficult compared to behaviorism and IPT. Controlling for the numerous environmental, personal, and behavioral factors that SCT might identify as relevant would be a heavy lift for even the most well-funded research study to accommodate.

Potential Use Cases in Computing Education

  • Research: A study could investigate how characteristics of a role model (e.g., race/ethnicity, gender, age) moderate the influence that role model has on underrepresented children’s intentions to pursue a particular STEM career path.
  • Practice: Teachers, mentors, and parents should highlight examples of when a student successfully overcame a challenge, and provide verbal encouragement and/or modeling when the student faces new challenges. These behaviors will support the student’s self-efficacy, and ultimately their persistence in the domain/on the task.

Influential theorists:

  • Albert Bandura (1925 – 2021)
  • Barry J. Zimmerman (1942 – present)
  • Paul R. Pintrich (1953 – 2003)

Recommended seminal works:

Bandura, A., Ross, D., & Ross, S. A. (1961). Transmission of aggression through imitation of aggressive models. Journal of Abnormal and Social Psychology, 63(3), 575–582.

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191–215. https://doi.org/10.1037/0033-295X.84.2.191

Bandura, A. (1986). Social foundations of thought & action: A social cognitive theory. Pearson Education.

Bandura, A. (1989). Human agency in social cognitive theory. American Psychologist, 44(9), 1175–1184. https://doi.org/10.1037/0003-066X.44.9.1175

References

Bandura, A., Ross, D., & Ross, S. A. (1961). Transmission of aggression through imitation of aggressive models. Journal of Abnormal and Social Psychology, 63(3), 575–582.

Bandura, A., Ross, D., & Ross, S. A. (1963). Imitation of film-mediated aggressive models. The Journal of Abnormal and Social Psychology, 66(1), 3–11. https://doi.org/10.1037/h0048687

Bandura, A. (1986). Social foundations of thought & action: A social cognitive theory. Pearson Education.

Lent, R. W., Brown, S. D., & Hackett, G. (1994). Toward a Unifying Social Cognitive Theory of Career and Academic Interest, Choice, and Performance. Journal of Vocational Behavior, 45(1), 79–122. https://doi.org/10.1006/jvbe.1994.1027

Lent, R. W., Brown, S. D., & Hackett, G. (2002). Social Cognitive Career Theory. In D. Brown (Ed.), Career Choice and Development (4th ed., pp. 255–311). Jossey-Bass.


Information Processing Theory

Presented by Joe Tise, PhD, Educational Psychology & Senior Education Researcher at CSEdResearch.org

The stark limitations of behaviorist theories of learning gave rise (in part) to cognitive theories of learning, the most prominent of which is information processing theory (IPT) (Atkinson & Shiffrin, 1968). As you will see, IPT is analogous in many ways to a computer system. IPT posits three primary “stores” of memory and three primary cognitive “processes.” The three memory stores include the:

  • Sensory register (like a motion detector or thermostat)
  • Working memory (like RAM)
  • Long-term memory (like a hard drive)

The three processes include:

  • Attention (like selecting which folder or drive to work in)
  • Encoding (like writing to a disk)
  • Retrieval (like reading a disk)

When a person encounters information (broadly construed), it exists first in the sensory register which is informed by the five physical senses as shown in the following figure.

[Figure: Information-processing model showing the sensory register, working memory, and long-term memory]

Information in the sensory register persists only as long as the senses actively perceive the information (e.g., the shapes of words on a page). Once the senses stop perceiving the information, the sensory register is cleared.

So how does one learn anything, then? The first primary cognitive process must be invoked—attention. Information is transferred from the sensory register to working memory when we direct attention toward the information—and only at this point do we become conscious of it. This is analogous to how a computer transfers information from a physical sensor (sensory register) to its RAM (working memory) for manipulation.

You have likely heard that working memory (WM) is limited to about 7 ± 2 pieces of information (Miller, 1956), which illustrates one relatively strict limitation of our cognitive system. Working memory is in many ways a “bottleneck” for human learning and cognitive functioning: without rehearsal or other cognitive manipulation, information persists in WM for only about 20-30 seconds. As with a computer’s RAM, it is limited in capacity and is periodically cleared.

If we want information to persist longer than that, we must apply the second primary cognitive process, encoding, so that the information moves from WM into long-term memory (LTM), much like writing to a hard drive. LTM capacity is theoretically unlimited, and information within LTM can persist indefinitely.

Finally, if we wish to use information in LTM, we must invoke the third primary cognitive process: retrieval. Retrieval brings information out of LTM and back into WM so that it is once again conscious to us and can be manipulated or articulated via speech, writing, actions, or other means. Drawing the information from LTM into WM is akin to reading information from a hard drive.

Only now can one understand the IPT definition of learning: IPT views human learning as the transfer (i.e., encoding) of information from working memory into long-term memory.
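The computer analogy running through this section can be sketched as a toy Python model. This is purely illustrative: the names (`WM_CAPACITY`, `attend`, `encode`, `retrieve`) and data structures are my own simplifications for this post, not constructs from the IPT literature.

```python
from collections import deque

WM_CAPACITY = 7  # Miller's (1956) "magical number seven"

# Stimuli currently hitting the senses (the sensory register)
sensory_register = ["cat", "dog", "7", "blue", "lamp", "rain", "key"]

working_memory = deque(maxlen=WM_CAPACITY)  # oldest item displaced when full
long_term_memory = set()                    # theoretically unlimited store

def attend(stimuli, relevant):
    """Attention: only attended-to stimuli enter working memory."""
    for item in stimuli:
        if item in relevant:
            working_memory.append(item)

def encode():
    """Encoding: copy working-memory contents into long-term memory."""
    long_term_memory.update(working_memory)

def retrieve(cue):
    """Retrieval: bring an item from LTM back into working memory."""
    if cue in long_term_memory:
        working_memory.append(cue)
        return cue
    return None

attend(sensory_register, relevant={"cat", "dog", "key"})
encode()
working_memory.clear()     # unrehearsed WM contents decay after ~20-30 s
print(retrieve("key"))     # prints "key": attended and encoded, so learnable
print(retrieve("lamp"))    # prints None: never attended, so never learned
```

Note how the model reproduces the two failure modes discussed above: unattended information never reaches working memory, and unencoded information is lost when working memory clears.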

Strengths

IPT provides a succinct framework for understanding how the human brain processes information. While behaviorists disregard the cognitive domain entirely, IPT attempts to explain it directly. Tenets of IPT are ripe for empirical investigation (e.g., the capacity and duration of working memory have been studied countless times).

IPT is also directly applicable to fields well beyond learning; its tenets are leveraged in domains such as user experience research, driving safety courses, and brain-health assessments for sports injuries and dementia screening.

Limitations

While IPT provides explanations for many of the cognitive phenomena we encounter during learning and daily life, some limitations remain. For example, IPT faces a purported “homunculus” problem. Models of working memory (yes, there are sub-theories of IPT that further specify the working memory component) detail a central executive component, which controls the other two components of working memory (see Baddeley, 2003 for more detail).

But this raises the question—what controls the central executive? Herein lies the problem. Such models of working memory appear to rely on a homunculus—a small imaginary “being” inside our brains that controls the central executive, which in turn controls the other components of working memory. 

The scope of this blog post is far too broad to properly detail models of working memory, so for our purposes just know that critics of IPT cite the homunculus problem as, at the very least, a point where further theoretical refinement is needed.

Potential Use Cases in Computing Education

  • Research: How does a student’s working memory capacity relate to their coding skill/accuracy?
  • Practice: Direct students’ attention to the salient features of the content, provide and model the use of learning strategies (to promote encoding), and use frequent low-stakes practice quizzes/questions to exercise students’ retrieval process.

Influential theorists:

  • Richard C. Atkinson (1929 – present)
  • Richard M. Shiffrin (1942 – present)
  • Alan Baddeley (1934 – present) 

Recommended seminal works:

Atkinson, R. C., & Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. In K. W. Spence & J. T. Spence (Eds.), The psychology of learning and motivation: Advances in research and theory, Vol II (pp. 89–195). Academic Press.

Baddeley, A. D., & Hitch, G. (1974). Working memory. In G. H. Bower (Ed.), Psychology of Learning and Motivation (Vol. 8, pp. 47–89). Academic Press. https://doi.org/10.1016/S0079-7421(08)60452-1

Baddeley, A. (1992). Working Memory. Science, 255(5044), 556–559. https://doi.org/10.1126/science.1736359 

Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97. https://psychclassics.yorku.ca/Miller/

Shiffrin, R. M., & Atkinson, R. C. (1969). Storage and retrieval processes in long-term memory. Psychological Review, 76(2), 179–193.

References

Atkinson, R. C., & Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. In K. W. Spence & J. T. Spence (Eds.), The psychology of learning and motivation: Advances in research and theory, Vol II (pp. 89–195). Academic Press.

Baddeley, A. (2003). Working memory: Looking back and looking forward. Nature Reviews Neuroscience, 4, 829–839. https://doi.org/10.1038/nrn1201

Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97. https://psychclassics.yorku.ca/Miller/

Series

Introduction to Behaviorism

Presented by Joe Tise, PhD, Educational Psychology & Senior Education Researcher at CSEdResearch.org

At least surface-level familiarity with Pavlov’s experiments and the principles of classical and operant conditioning has become almost ubiquitous among the general public. What many may not know, however, is that classical and operant conditioning are the two primary behaviorist theories of learning. To a behaviorist, only observable behavior is worthy of (or even amenable to) scientific study. From this philosophy stems the behaviorist definition of learning: a relatively permanent change in behavior that is caused by experience. Behaviorist theories of learning exclude any attempt to examine cognition or cognitive processes because they are not directly observable.

The theory of classical conditioning was born from Pavlov’s experiments with dogs. Pavlov discovered that his dogs came to associate food with the sound of a bell, such that upon ringing the bell, the dogs would salivate. Thus, a change in behavior (salivation) was caused by experience (presentation of food shortly after the bell). Note that classical conditioning applies only to involuntary (reflexive) behavior (e.g., fear, physiological responses). Operant conditioning (think B.F. Skinner) extends the theory to voluntary behavior. In a series of classic experiments, Skinner trained rats to perform novel behaviors in exchange for food (reinforcement) and to suppress behaviors that were followed by an electric shock (punishment) (Skinner, 2019).

Operant conditioning still relies on the mechanism of association, but it accounts for novel (i.e., not innate) behavior. Indeed, operant conditioning is at the heart of nearly all animal training (e.g., dogs, show animals) and has real-world applications for classroom management. For example, a teacher may reward students with a small toy if they participate in class and may punish students (e.g., by issuing a demerit) for acting out. If this reinforcement and/or punishment is successful and the student’s behavior changes, a behaviorist would say the behavior change is evidence of learning. Although purely behaviorist research studies are less common today than they were 60-70 years ago, elements of behaviorism remain prevalent in some fields and sub-disciplines, including game-based learning (e.g., through badges and scoring) (Coşkun, 2019; Hulsbosch et al., 2023; Leeder, 2022).
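The core operant mechanism, reinforcement strengthening a voluntary behavior and punishment weakening it, can be sketched in a few lines of Python. This is a toy model under my own assumptions: the behavior names, the 0.2 update amount, and the probability-like “strength” values are arbitrary illustrations, not Skinner’s formalism.

```python
# Toy operant-conditioning sketch: each behavior has a "strength" in [0, 1]
# (think of it as the tendency to emit that behavior). Reinforcement raises
# it; punishment lowers it. All numbers here are illustrative only.
behavior_strength = {"press_lever": 0.1, "groom": 0.1}

def reinforce(behavior, amount=0.2):
    """A desirable consequence follows the behavior, strengthening it."""
    behavior_strength[behavior] = min(1.0, behavior_strength[behavior] + amount)

def punish(behavior, amount=0.2):
    """An aversive consequence follows the behavior, suppressing it."""
    behavior_strength[behavior] = max(0.0, behavior_strength[behavior] - amount)

# Each lever press is followed by food, as in Skinner's rat experiments.
for _ in range(4):
    reinforce("press_lever")

# Grooming is followed by an aversive stimulus.
punish("groom")

print(behavior_strength)  # press_lever strengthened, groom suppressed
```

In behaviorist terms, the change in these strengths *is* the learning; the model deliberately contains no representation of what the animal “thinks,” mirroring behaviorism’s exclusion of cognition.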

Strengths

There are several strengths of behaviorist theories of learning.

  • First, research has shown that behaviorist conceptions of learning generalize not only across cultures but across a wide variety of animal species. That is, learning by association and reinforcement/punishment is not uniquely human, and therefore behaviorist theories of learning are by far the most generalizable.
  • And because behaviorists study only what can be observed directly (i.e., behavior), behaviorist theories are arguably the best positioned to achieve replicability—a known problem across psychology (Open Science Collaboration, 2015).
  • Behaviorist principles are directly applicable to the classroom via classroom management techniques. Any experienced K-12 teacher will tell you that classroom management is a top priority, and there is ample opportunity to apply behaviorism throughout the instructional process.
  • Behaviorism arose as a direct counter to eugenic philosophies, and therefore was one of the first DEI-minded approaches to psychological/educational research. To this effect, John Watson (1930) famously said: “Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one at random and train him to become any type of specialist I might select – doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief…”

Limitations

Noteworthy limitations to behaviorism also exist. For example:

  • Behaviorist theories cannot account for cognitive processing—and explicitly exclude study of cognition. Cognitive/educational research, and even simple experience, tells us that human learning is much more complex than involuntary associations and reinforcement/punishment schedules.
  • Observational learning (i.e., learning by watching someone else) is a prime example of these shortcomings: an observer’s behavior can change without any direct experience of reinforcement or punishment, and behaviorism cannot explain this.
  • Finally, experienced students and teachers understand many tasks require complex problem solving, learning strategies, and metacognition to complete. Behaviorism falls short of even conceptualizing these constructs, let alone explaining them.

Potential Use Cases in Computing Education

  • Research: An intervention based on classical conditioning, designed to reduce negative physiological responses (anxiety) to computers/computer science. These negative physiological responses would also influence students’ self-efficacy, so a link could be made to social-cognitive theory as well.
  • Practice: A teacher could begin each class with a pleasant story, song, comment, snack, or even scent to elicit a positive emotional response from their students. After repeated exposure (i.e., conditioning), the students should come to associate positive feelings with the classroom/subject/teacher.

Influential theorists

  • John B. Watson (1878 – 1958)
  • B.F. Skinner (1904 – 1990)
  • Edward L. Thorndike (1874 – 1949)

Recommended seminal works

  • Watson, J. B. (1913). Psychology as the behaviorist views it. Psychological Review, 20(2), 158–177. https://doi.org/10.1037/h0074428
  • Skinner, B. F. (1965). Science and human behavior. Simon and Schuster.
  • Thorndike, E. L. (1898). Animal intelligence: An experimental study of the associative processes in animals. The Psychological Review: Monograph Supplements, 2(4), i–109. https://doi.org/10.1037/h0092987

References

Coşkun, K. (2019). Conditioning Tendency Among Preschool and Primary School Children: Cross-Sectional Research. Interchange, 50(4), 517–536. https://doi.org/10.1007/s10780-019-09373-1

Hulsbosch, A., Beckers, T., De Meyer, H., Danckaerts, M., Van Liefferinge, D., Tripp, G., & Van Der Oord, S. (2023). Instrumental learning and behavioral persistence in children with attention‐deficit/hyperactivity‐disorder: Does reinforcement frequency matter? Journal of Child Psychology and Psychiatry, 64(11), 1–10. https://doi.org/10.1111/jcpp.13805

Leeder, T. M. (2022). Behaviorism, Skinner, and Operant Conditioning: Considerations for Sport Coaching Practice. Strategies, 35(3), 27–32. https://doi.org/10.1080/08924562.2022.2052776

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716

Skinner, B. F. (2019). The Behavior of Organisms: An Experimental Analysis. B. F. Skinner Foundation. https://books.google.com/books?id=S9WNCwAAQBAJ

Watson, J. B. (1930). Behaviorism (Revised edition). University of Chicago Press.

Series