Learning Isn’t Observable. So How Do We Measure It?

Written By: Julie Smith

Note: This post is the second in an occasional series about learning analytics, based on the Handbook of Learning Analytics.

We measure learning constantly – think of grades on spelling quizzes, SAT scores, and the bar exam. But it’s worth remembering that learning itself cannot be observed in the same way that the growth of a plant can: when it comes to learning, we have to make decisions about how to measure something we can’t see. Gray & Bergner (2022) outline the choices that must be made when educational researchers operationalize a learning construct – such as a sense of self-efficacy, the ability to work productively in a group, or subject matter knowledge.

First, the decisions: Gray & Bergner present a very helpful distinction between measurements designed to understand a construct and those designed to improve a construct. They aren’t the same: we might understand something (for example, that students who spend more time on discussion boards earn higher grades) without being able to improve it (if we direct students to spend more time on discussion boards, they may spend less time reviewing for an exam and get a lower grade).

Next, Gray & Bergner review the strengths and weaknesses of three kinds of data that can be used for educational measurement. Validated, reliable survey instruments exist, and they are easy to administer at scale, but they may suffer from the biases that come with self-reporting. Trace data (such as keystroke logs from students using an online learning platform) can be gathered unobtrusively, but it can be difficult to draw conclusions from. Text data can be a rich source of insight into a student’s thought processes, but training either machine learning models or humans to assess and code such data is tricky.
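To make the trace-data problem concrete, here is a minimal sketch of one reduction an analyst might apply to keystroke logs. Everything in it is hypothetical – the event format, the student ID, and the 30-minute session cutoff are invented for illustration – and the point is that the resulting “session count” depends entirely on those analyst choices.

```python
from datetime import datetime, timedelta

# Hypothetical trace data: (student_id, timestamp) keystroke events
# exported from a learning platform. The format is illustrative, not
# any particular platform's schema.
events = [
    ("s1", datetime(2024, 3, 1, 9, 0, 0)),
    ("s1", datetime(2024, 3, 1, 9, 0, 40)),
    ("s1", datetime(2024, 3, 1, 11, 30, 0)),  # long gap: same session or a new one?
]

# A common reduction: split the event stream into "sessions" wherever
# the gap between keystrokes exceeds a threshold. The 30-minute cutoff
# is an arbitrary analyst choice, and the measure shifts with it.
SESSION_GAP = timedelta(minutes=30)

sessions = 1
for (_, prev), (_, cur) in zip(events, events[1:]):
    if cur - prev > SESSION_GAP:
        sessions += 1

print(sessions)  # 2 with a 30-minute cutoff; 1 with a 3-hour cutoff
```

The same raw keystrokes yield different “time on task” or “session” measures under different cutoffs, which is one reason conclusions drawn from trace data need careful justification.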

Other pitfalls exist as well. One challenge with many forms of data is that they don’t capture change over time. The authors point out that previous research shows that “cycles between positive and negative emotions can have a positive impact on the learning process compared to maintaining a consistent emotion” (p. 24), which is precisely the kind of insight that can be lost in a data snapshot. Similarly, information can be lost when data is cleaned: grouping students by their final letter grade, for example, both overemphasizes the difference between a student who earned a high B and one who earned a low A, and underemphasizes the difference between a student who failed with a 0 and one who failed with a 69.
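The binning problem is easy to demonstrate. The sketch below uses pandas with invented scores and an assumed grading scale (90–100 = A, 80–89 = B, 70–79 = C, below 70 = F, matching the post’s failing-at-69 example); neither the scores nor the scale come from the chapter.

```python
import pandas as pd

# Invented final scores for five students (0-100 scale).
scores = pd.Series([91, 89, 74, 69, 0],
                   index=["Ana", "Ben", "Cho", "Dev", "Eli"])

# Bin into letter grades. The cut points assume a scale where
# anything below 70 is an F, as in the post's example.
letters = pd.cut(scores, bins=[-1, 69, 79, 89, 100],
                 labels=["F", "C", "B", "A"])

print(pd.DataFrame({"score": scores, "letter": letters}))
# Ana (91) and Ben (89) land in different bins despite a 2-point gap,
# while Dev (69) and Eli (0) share a bin despite a 69-point gap.
```

Any analysis run on the letter column alone treats Dev and Eli as identical – exactly the information loss the authors warn about.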

Despite these challenges, Gray & Bergner aren’t discouraged about the potential for learning analytics to help us understand and improve learning outcomes – in fact, their careful outline of the challenges facing each form of data collection is a good step toward thoughtful, responsible data collection and use.
