To determine how well students in your program have accomplished the program’s learning outcomes, you will need to measure their learning. A variety of techniques can be used to capture the student learning that occurs as a result of your program curriculum. To help you select a measure, this brief overview describes the major categories of measures, a few things to keep in mind when selecting an assessment measure, and the properties of good measurement techniques.

Categories of Measures

Generally, assessment measures can be grouped into two categories: direct and indirect.

  • Direct Measures evaluate student work products in light of learning outcomes for the program. They provide direct evidence of student learning. A few examples of direct measures include:
    • Systematic evaluation of student work (e.g., rubric to evaluate: papers, creative work, portfolios, presentations)
    • Subsection or item scores that provide evidence about the learning outcome from a published, standardized test (e.g., GRE Subject Test, ETS Major Field Test)
    • Item scores that provide evidence about the learning outcome from a locally developed test
  • Indirect Measures evaluate student perceptions of their learning and the educational environment that supports learning. These are often used in combination with direct measures, contributing to the interpretation of their results. A few examples of indirect measures include:
    • Responses to selected items that provide evidence about the learning outcome on published surveys (e.g., NSSE, CIRP)
    • Responses to items that provide evidence about the learning outcome from locally developed surveys and interviews
    • Responses to items that provide evidence about the learning outcome on alumni surveys


Selecting a Measure

Here are a few things to keep in mind when selecting a measure for program assessment. 

  • The measure you select should capture student learning that occurs as a result of the program curriculum.
    • Measure at points in the curriculum after students have had opportunities to learn and practice the outcome. For example, using a rubric to measure oral presentation skills in the junior or senior year will give you a better indication of how well your program helps students achieve this outcome than if you were to measure in the freshman year.
  • Be sure that the outcome in question can be measured by the tool you choose.
    • Some measures of learning are not appropriate for program-level assessment because they will not tell you how well students achieved the outcome. For example, course grades are determined by a number of assignments and activities, making it impossible to tease out how well students accomplished a particular learning outcome. Other examples of measures that do not provide information about accomplishment of a particular learning outcome include overall test scores and satisfaction surveys.
  • Many measurement tools can be used for multiple levels of assessment; it’s really how you use them that makes the difference.
    • For example, a rubric can be used to grade an individual student’s performance or to assess program learning outcomes. To assess the program’s student learning outcomes, make sure that the rubric (or other measurement tool) will give you scores that tell you about the program learning outcome, and aggregate those scores across students.
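As a minimal sketch of what aggregating across students might look like, the snippet below computes two common program-level summaries from rubric scores: the mean score and the percentage of students at or above a target level. The scores, the 1–4 scale, and the benchmark of 3 are invented for illustration, not drawn from this overview.

```python
# Hypothetical rubric scores for one program learning outcome.
# Each entry is one student's score on a 1-4 oral-presentation rubric.
scores = [3, 4, 2, 3, 4, 3, 2, 4, 3, 3]

# Aggregate across students rather than reporting individual grades.
mean_score = sum(scores) / len(scores)

# Share of students scoring at or above a target level
# (here, 3 out of 4 -- an assumed benchmark).
target = 3
pct_at_or_above = sum(s >= target for s in scores) / len(scores) * 100

print(f"Mean rubric score: {mean_score:.1f}")
print(f"Students at/above target: {pct_at_or_above:.0f}%")
```

Reporting aggregates like these, rather than individual grades, is what turns a classroom tool into a program-level measure.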

Properties of Good Measurement Techniques 

A good assessment technique is:

  • Reliable: The measure provides consistent results, regardless of who uses it or when it is used.
  • Valid: It measures the learning outcome it is intended to measure.
  • Actionable: The results point reviewers toward challenges that need to be addressed and how to address them.
  • Feasible and manageable: The measure is efficient and cost effective in time and money.
  • Interesting and meaningful: People care about the results and are willing to act on them.
  • Triangulated: Multiple lines of evidence point to the same conclusion.

For more examples of both direct and indirect measures, including potential strengths and weaknesses of each, please see: 

Allen, M. J. (2008, July). Strategies for direct and indirect assessment of student learning. Paper presented at the SACS-COC Summer Institute.