To determine how well students in your program have accomplished the program’s learning outcomes, you will need to collect and evaluate evidence of their learning. A variety of techniques can be used to capture the student learning that results from your program curriculum. This brief overview describes the major categories of evidence, a few things to keep in mind when selecting evidence, and the properties of good evaluation techniques.
Categories of evidence
Generally, assessment evidence can be grouped into two categories: direct and indirect.
- Direct evidence is work that demonstrates students' level of knowledge or skill with respect to an outcome. Examples of direct measures include:
  - Systematic evaluation of student work (e.g., a rubric to evaluate papers, creative work, portfolios, presentations, etc.)
  - Subsection or item scores that provide evidence about the learning outcome from a published, standardized test (e.g., GRE Subject Test, ETS Major Field Test)
  - Item scores that provide evidence about the learning outcome from a locally developed test
- Indirect evidence provides insight into students' and others' perceptions of their learning and of the educational environment that supports learning. Indirect measures are often used in combination with direct measures, helping to interpret their results. Examples of indirect measures include:
  - Responses to selected items that provide evidence about the learning outcome on published surveys (e.g., NSSE, CIRP)
  - Responses to items that provide evidence about the learning outcome from locally developed surveys and interviews
  - Responses to items that provide evidence about the learning outcome on alumni surveys
Note: For information and guidance on developing and conducting student surveys, please consult the resources available from LMU's Office of Institutional Research.
Selecting evidence
Here are a few things to keep in mind when selecting evidence for program assessment.
- The evidence you select should capture student learning that occurs as a result of the program curriculum.
  - Gather evidence at points in the curriculum after students have had opportunities to learn and practice the outcome. For example, using a rubric to evaluate oral presentation skills in the junior or senior year will give you a better indication of how well your program helps students achieve this outcome than if you were to do so in the freshman year.
- Be sure that the outcome in question can be evaluated by the tool you choose.
  - Some tools are not appropriate for program-level assessment because they will not tell you how well students achieved the outcome. For example, course grades are determined by a number of assignments and activities, making it impossible to tease out how well students accomplished a particular learning outcome. Other examples of tools and evidence that do not provide information about accomplishment of a particular learning outcome include overall test scores and satisfaction surveys.
- Many tools can be used at multiple levels of assessment; what matters is how you use them.
  - For example, a rubric can be used to grade an individual student’s performance or to assess program learning outcomes. To assess the program’s student learning outcomes, make sure that the rubric (or other tool) will give you scores that tell you about the program learning outcome, then aggregate those scores across students (a brief sketch of this aggregation step follows below).
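If rubric scores are recorded electronically (for example, in a spreadsheet export), the aggregation step can be as simple as the sketch below. The criterion names, the 1–4 scale, and the benchmark of 3 are hypothetical assumptions used only for illustration; the point is that program-level assessment looks at the distribution of scores across students for each criterion rather than at individual grades.

```python
from statistics import mean

# Hypothetical rubric scores (1-4 scale) for one program learning outcome,
# collected from senior capstone papers. Criterion names and the benchmark
# of 3 ("meets expectations") are illustrative assumptions only.
scores = {
    "Student A": {"thesis": 4, "evidence": 3, "organization": 2},
    "Student B": {"thesis": 3, "evidence": 3, "organization": 3},
    "Student C": {"thesis": 2, "evidence": 4, "organization": 3},
}

BENCHMARK = 3  # minimum score counted as "meets expectations"

for criterion in ["thesis", "evidence", "organization"]:
    ratings = [s[criterion] for s in scores.values()]
    pct_meeting = 100 * sum(r >= BENCHMARK for r in ratings) / len(ratings)
    # Report the aggregate, not individual grades: the program-level question
    # is how well students as a group achieve the outcome.
    print(f"{criterion}: mean = {mean(ratings):.1f}, "
          f"{pct_meeting:.0f}% of students at or above benchmark")
```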
Properties of good assessment tools and techniques
| A good tool or technique is: | Because: |
|---|---|
| Reliable | The tool provides consistent results, regardless of who uses it or when it is used. |
| Valid | It fairly and comprehensively represents the learning outcome it is intended to evaluate. |
| Actionable | The results point reviewers toward challenges that need to be addressed and how to address them. |
| Feasible and manageable | The tool is efficient and cost-effective in time and money. |
| Interesting and meaningful | People care about the results and are willing to act on them. |
| Converging | Multiple lines of evidence point to the same conclusion. |
For more examples of both direct and indirect evidence and tools for evaluating them, including potential strengths and weaknesses of each, please see:
Allen, M. J. (2008, July). Strategies for direct and indirect assessment of student learning. Paper presented at the SACS-COC Summer Institute.