Assessment plans typically contain a strategy for measuring the intervention or program. These plans define what factors need to be measured and among which populations, as well as how those measurements will be made. Within any given plan, time and resources need to be carefully considered, including the impact of the assessment on the participants.

Although such plans can be very detailed, we offer some high-level steps here, along with a case study showing how to identify the factors to be measured and then search for an existing instrument that may meet your needs for evaluating those factors. By using an existing instrument (as is or with modifications), your team can save the time and resources that would be needed to create a brand new instrument.

Common Steps for Measurement

The major steps for measurement include:

  1. Refer back to your project’s original goals and pre-defined outcomes, then identify:
    a. The factors that you want to measure and
    b. Who will be measured.
  2. Determine the most appropriate design and data collection measures, such as whether the study will be qualitative or quantitative and whether you will use a randomized controlled trial, a one-group pre-test/post-test design, or another study design.
  3. Search the literature to find studies similar to yours (both in terms of goals and population). Extract the constructs (or measurement scales) described and determine which (if any) are relevant for your population (e.g., “motivation to succeed” or “classroom engagement”). If your project has a broad range of goals, it may be necessary to extract different constructs from different resources. In some cases, constructs may not yet have been defined in articles about CS education, but may have been explored in other STEM fields like math or engineering and can be adapted for CS education.
  4. Search on these terms and constructs to find evaluation instruments (including interview guides or templates for ethnographic research for qualitative studies) that may already exist. If an instrument is found, determine whether it needs any additions or modifications to better meet your needs, that is, to match your goals. If no instrument is found, you may need to develop your own.
  5. Define your measurement protocol, including the data collection techniques and the type of analysis that will be conducted on the data.
  6. Collect and analyze your data according to your protocol.
  7. Gather evidence of the reliability and validity of the instrument as used in your study (a minimal analysis sketch follows this list).
  8. Compare the data against your goals and pre-defined outcomes to determine how well you met your goals and what areas could be further studied and/or improved.
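
How steps 6 and 7 play out depends on your design, but a minimal sketch in Python illustrates one common combination for a quantitative pre-test/post-test study: checking internal-consistency reliability with Cronbach's alpha and comparing pre/post scale means with a paired t-test. The data below are randomly generated stand-ins, and the 4-item Likert scale and sample size are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical Likert responses (1-5) to a 4-item scale from 30 students,
# before and after the intervention (items correlated by construction).
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(30, 1)).astype(float)
pre = np.clip(base + rng.integers(-1, 2, size=(30, 4)), 1, 5)
post = np.clip(pre + rng.integers(0, 2, size=(30, 4)), 1, 5)

print(f"Cronbach's alpha (pre): {cronbach_alpha(pre):.2f}")
t, p = stats.ttest_rel(post.mean(axis=1), pre.mean(axis=1))
print(f"Paired t-test on scale means: t={t:.2f}, p={p:.3f}")
```

An alpha around 0.7 or higher is the conventional (if debated) threshold for acceptable internal consistency; the t-test then speaks to whether scale means changed between administrations.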

Align your measurement goals with your project’s well-defined goals. Once these are aligned, it becomes easier to find the instruments that include the constructs you want to measure.

Case Study: A CS-themed AISL Proposal

In a proposal to NSF’s Advancing Informal STEM Learning (AISL) program (NSF 17-573), Maggie, a Principal Investigator (PI), proposed a project to pilot test a new approach to bolstering student interest in computing among high school students by infusing CS content into a large, urban high school’s STEM club. The purpose of the modification was to expose students to real-world CS content by inviting local industry professionals who specialize in CS to share projects that the students could work on to solve specific problems.

The specific goals of the project were devised to ensure that they were quantitatively measurable and could be tied back to specific non-cognitive constructs. The two primary goals were:

  1. To increase high school student confidence in computer science – especially among students underrepresented in the field.
  2. To increase high school student interest in computer science and their desire to use it in future academic and career pursuits – especially among students underrepresented in the field.

In order to demonstrate an impact of participation in the revised STEM club activities, it would be necessary to document student CS interest and CS self-efficacy among several groups (a sketch of one possible group comparison follows the list):

  • STEM club students participating in the CS-specific STEM club activities
  • STEM club students not participating in the CS-specific STEM club activities
  • A comparison group of non-STEM club students at the school (possibly including a sub-group that has taken some CS coursework)
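
One plausible way to compare scores across these three groups, assuming a quantitative design, is a one-way ANOVA on post-survey scale scores. The sketch below uses made-up scores and is illustrative only; it is not the analysis plan from the proposal.

```python
from scipy import stats

# Hypothetical mean CS-interest scores (1-5 scale) for each group.
cs_club    = [4.2, 3.8, 4.5, 4.0, 3.9, 4.4]  # STEM club, CS-specific activities
club_only  = [3.1, 3.5, 2.9, 3.3, 3.6, 3.0]  # STEM club, no CS-specific activities
comparison = [2.8, 3.2, 3.0, 2.7, 3.1, 2.9]  # non-STEM-club students

f, p = stats.f_oneway(cs_club, club_only, comparison)
print(f"One-way ANOVA across groups: F={f:.2f}, p={p:.4f}")
```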

A background literature search identified two discrete constructs – CS interest and CS self-efficacy – both of which are non-cognitive. Knowing this, the PI visited the evaluation instrument page on csedresearch.org and selected:

  • A “Focus Area” of “Computing”,
  • “Demographic”, then “9th-12th”,
  • “Student Engagement”, then under this category selected “Self-Efficacy” and “Interest (Computing)”, and
  • “Quantitative/Qualitative”, then under this category selected “Quantitative”.

This search revealed a number of potentially relevant instruments (self-efficacy in particular proved to be a commonly examined construct). Maggie was able to find one instrument, the Computer Science Interest Survey, which covered both of the relevant constructs and had also been used with a population similar to the one being examined in this study (high school students).

In order to keep the instrument intact and measure both constructs in the same way they were measured in the original study, Maggie used all of the questions from the survey. However, the only background questions asked on the original survey were about prior course-taking and gender. She therefore added questions about race/ethnicity and whether or not the student had a disability, in order to fully address the research questions related to broadening participation in computing.
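
Since the added demographic items exist to support exactly this kind of analysis, a short pandas sketch shows how scored responses might be disaggregated by the new race/ethnicity item. The column names and data here are hypothetical, not drawn from the actual survey.

```python
import pandas as pd

# Hypothetical scored responses; column names are assumptions for illustration.
df = pd.DataFrame({
    "interest_score":      [4.1, 3.2, 4.6, 2.9, 3.8, 4.0],
    "self_efficacy_score": [3.9, 3.0, 4.4, 3.1, 3.5, 4.2],
    "race_ethnicity":      ["Black", "White", "Latina/o", "White", "Black", "Latina/o"],
})

# Disaggregate both constructs by the added demographic item.
print(df.groupby("race_ethnicity")[["interest_score", "self_efficacy_score"]].mean())
```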

Cite this page

To cite this page, please use:

Xavier, Jeffrey and McGill, Monica M. 2019. Choosing an Evaluation Instrument. csedresearch.org. Retrieved from https://csedresearch.org/choosing-an-evaluation-instrument