Using rubrics for program assessment

Many programs use student work, such as papers, portfolios or performances, as evidence of whether students have developed the knowledge outlined by their learning outcomes. A rubric is a useful tool for assessing student learning as it is reflected in this work.

In the context of program assessment, after you have scored a representative sample of student work for an outcome, aggregate the scores for each dimension of the outcome across all students. The objective is to see how many students reach each level of performance on each dimension.

If you are considering using student work as part of your program’s assessment plan, the following steps can serve as a guide for how to go about using rubrics to accomplish this.

1) Determine where in the curriculum the outcome is addressed, and at what level you want to assess it.

  • A curriculum or outcome map would be very helpful in accomplishing this task.
  • To determine if improvement in student learning has occurred across the program, consider looking for courses where the outcome is introduced and for courses later in the program that emphasize the outcome.
  • To determine if students have achieved the outcome by the time they are about to complete the program, consider looking at senior-level courses.

2) Identify student work within the courses you have selected (e.g., papers, performances, or other projects) that demonstrates the outcome.

  • If you want to assess improvement over the course of the program, look for similar products in lower- and upper-division courses.
  • If you want to determine if students have achieved the outcome by the end of the program, look for products produced towards the end of the program.

3) Develop the rubric.

4) Collect the student work you have identified.

  • The recommended sample size is 30 pieces of student work from the identified assignment, to ensure you capture the full range of learning among your students.
  • For smaller programs, this may mean collecting student work across multiple years; if you have fewer than 30 students across three or four years, then just collect work from all of your students.
  • For larger programs, a random sample of 30 students' work should suffice.
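For larger programs, the random sample in the last bullet can be drawn with a short script rather than by hand. The sketch below is one illustration, assuming a hypothetical roster of student IDs; it draws 30 pieces of work when the roster is large enough, and otherwise keeps everyone, as described above:

```python
import random

# Hypothetical roster of student IDs for the identified assignment
roster = [f"S{i:03d}" for i in range(1, 121)]  # e.g., a program with 120 students

random.seed(42)  # fix the seed so the same sample can be re-drawn later
sample = roster if len(roster) <= 30 else random.sample(roster, 30)

print(len(sample))  # 30 pieces of student work to collect
```

Fixing the random seed lets the program document exactly which students were sampled, which helps if the assessment is revisited in a later year.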

5) Apply the rubric.

  • Have two or more faculty raters first apply the rubric to 4-5 samples of student work and resolve any discrepancies in their scoring to ensure they are applying the rubric consistently (a process called norming).
  • After norming, have the faculty raters apply the rubric to each piece of student work.
  • Remember to consider each dimension of the rubric separately, being careful not to let a student's performance on one rubric element bias your impression of the whole work.
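One simple way to check how the norming round went is to compute the raters' exact-agreement rate on the sample papers before moving on to full scoring. The snippet below is a minimal sketch with hypothetical scores from two raters on five sample papers:

```python
# Hypothetical norming scores from two raters on five sample papers (4-point scale)
rater_a = [3, 4, 2, 3, 4]
rater_b = [3, 3, 2, 3, 4]

# Proportion of papers where the two raters gave the same score
exact_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

print(exact_agreement)  # 0.8 -> discuss the one discrepancy before full scoring
```

There is no universal threshold, but a low agreement rate is a signal to revisit the rubric language and norm again before scoring the full sample.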

6) Aggregate rubric scores across students for each outcome or skill on the rubric using frequencies or mean scores.

  • If you had two or more individuals independently apply the rubric, you will first need to average their scores for each student on each dimension of the rubric. Please contact the Office of Assessment at assessment@lmu.edu if you need help with this process.
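The two parts of this step, averaging the raters' scores for each student and then aggregating across students, can be sketched as follows. The student IDs, dimension names, and scores here are hypothetical placeholders:

```python
from statistics import mean
from collections import Counter

# Hypothetical data: scores[student][dimension] = [rater 1 score, rater 2 score]
scores = {
    "S001": {"thesis": [3, 4], "evidence": [2, 3]},
    "S002": {"thesis": [4, 4], "evidence": [3, 3]},
    "S003": {"thesis": [2, 3], "evidence": [2, 2]},
}

# 1) Average the raters' scores for each student on each dimension
averaged = {
    student: {dim: mean(raters) for dim, raters in dims.items()}
    for student, dims in scores.items()
}

# 2) Aggregate across students: mean score and frequency count per dimension
for dim in ["thesis", "evidence"]:
    vals = [averaged[s][dim] for s in averaged]
    print(dim, round(mean(vals), 2), Counter(vals))
```

Frequencies (the `Counter`) show how many students land at each performance level, while the mean gives a single summary number per dimension; reporting both is often more informative than either alone.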

7) Present data in a way that is user-friendly for your program’s faculty and then discuss what the results mean for your program.

  • A user-friendly presentation of rubric data can mean putting it into a table, a graph, or a paragraph—whatever makes the most sense for you and your discipline.
  • Have a criterion or standard of success in mind when you start the discussion of results. For example, you might say that the average score must be above a 3 on a 4-point scale, or you might say that 75% of your students must fall in the ‘superior’ range of your rubric.
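The two example criteria above (an average above 3 on a 4-point scale, or 75% of students in the 'superior' range) can both be checked directly once scores are aggregated. This sketch uses hypothetical per-student averaged scores for a single dimension and assumes 'superior' means a score of 4:

```python
# Hypothetical per-student averaged scores for one rubric dimension (4-point scale)
dim_scores = [3.5, 4.0, 2.5, 3.0, 3.5, 4.0, 3.0, 2.0]

mean_score = sum(dim_scores) / len(dim_scores)
share_superior = sum(s >= 4.0 for s in dim_scores) / len(dim_scores)  # 'superior' = 4

meets_mean_criterion = mean_score > 3.0      # criterion 1: average above 3
meets_share_criterion = share_superior >= 0.75  # criterion 2: 75% superior

print(round(mean_score, 4), meets_mean_criterion, meets_share_criterion)
```

Note that the two criteria can disagree, as they do in this hypothetical data: the mean clears the bar while the share of superior students does not, which is exactly the kind of result worth discussing as a faculty group.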

8) Look for potential changes to your program once the faculty as a group has interpreted the results and identified any areas of concern.

Next: Analyzing rubric results using Excel
Return to Rubrics home page