Assessment of Student Learning

Learning outcomes and degree proficiencies, such as those suggested in the DQP or resulting from Tuning, are obviously tied to processes of assessment, but “assessment” needs to be clarified as a term. Two types of assessment are implied. First, the term describes the procedures by which faculty use assignments to evaluate students’ proficiency with the knowledge and abilities they learn in a course or program. Second, “assessment” can refer to the procedures by which program faculty evaluate the degree to which their curricula, pedagogies, and assignments are producing the level of student learning indicated in their outcomes. The former relates to intentional program design and has been discussed in relation to the coherent alignment of program outcomes, curricula, courses, and assignments. The latter refers to the evaluation of program effectiveness, although, increasingly, institutions are coupling the two so that assignments given in courses are “rolled up” to provide a picture of program and institutional effectiveness.

Program effectiveness is a matter of quality assurance, with quality defined as a level of learning determined by program faculty. The need to define quality, in fact, is what motivated the introduction of Tuning, originally a European process, to the United States in 2009. With multiple national campaigns launched to increase the number of citizens holding degrees or credentials, Lumina Foundation sought a means of assuring that these awards are meaningful and represent real learning. Tuning offers a strategy through which disciplinary experts define the breadth and depth of learning that constitute a degree in that discipline. Assessment, then, offers a means for program faculty to evaluate the extent to which students are achieving the learning those faculty designate for their programs.

Tuning has focused attention on assessing program effectiveness, which can take many forms depending on the questions faculty and staff have about their students’ performance. Ultimately, however, the assessment of program effectiveness is driven by identifying patterns of strength and weakness in students’ demonstrated proficiencies. Approaching program assessment this way emphasizes where curricula consistently yield the desired student learning and, conversely, where they yield less-than-expected learning. When program faculty and staff identify such patterns of performance, they can take action to address elements of the program that may need revision. Assessing program effectiveness can also help program faculty and staff identify where curricula, courses, pedagogies, and assignments may be contributing to or hindering student learning. Where problems emerge, strategies for program revision can be developed and implemented, with ongoing, regular assessment indicating whether interventions and innovations are having their desired effect. When done in a collaborative environment, assessment of program effectiveness can generate meaningful innovations within a program.

Many institutions are employing rubrics to assess student learning, reframing capstone experiences, redeveloping portfolio approaches, and developing targeted assignments for students to demonstrate their learning. For instance, the assistant vice provost for undergraduate education at Indiana University reports that all graduating students are asked to reflect on assignments collected over their four years at the institution, responding to the following questions:

  1. What was the key take-away?
  2. How would you have improved your work on this assignment?
  3. Through this assignment, what have you learned about how you learn and work?
  4. What new interests or values have you acquired as a result of this learning experience?
  5. How does this learning fit into your life’s goals (professional and personal)?

Such an approach allows students to reflect on their educational journey and tie together diverse learning experiences across their time with the institution. Learning outcomes enable students to work toward learning goals, track their progress, and evaluate their own success. By way of example, one faculty member in a history department that had engaged in work with Tuning and the DQP found that students’ awareness of learning changed dramatically as a result of his work with learning outcomes. When using rubrics aligned to learning outcomes to assess student work, he reported that students visited his office to ask not why they received the grades they did, but how they could improve their knowledge or skills in the learning outcomes identified by the rubric. That change in students’ approach evinces a degree of self-awareness in which students understand their own weaknesses. Those students’ subsequent action, seeking guidance in improving their learning, suggests that improved self-awareness can result in increased self-direction as well.

In addition, institutions that take a reflective portfolio approach and those engaged with assignment design have experienced a renewed focus on the importance of feedback to students. Faculty have begun to examine where opportunities for students to react to and reflect on feedback are built into the curriculum and which strategies can be employed to increase practice time given curricular constraints. In some instances, this has involved moving final submission deadlines so that assignments can be returned to students in time for them to respond to feedback. In others, it has led to connecting assignments across multiple courses so that students continue to build upon and work on assignments beyond the span of a single semester. In still others, it has involved intentionally teaching students to give themselves feedback, not only through peer and group processes but also through self-reflection. One example is a rubric given to students with two columns, one for the faculty member to provide ratings and feedback and the other for the student to complete and submit with the assignment. The intention is that, while the student and faculty views of the work may differ at first, over time the two will converge and students will learn valuable skills in self-reflection and in critiquing their work against criteria for performance.

Most of the institutions and departments that have used the DQP and Tuning have yet to assess the extent to which students are acquiring the expected proficiencies. Those that have begun to do so are using a variety of approaches, all of which focus on ensuring that every student meets the stated level of proficiency and on looking across the entire curriculum rather than at isolated moments at the end. As Peter Ewell (2013) outlines in his paper on the assessment implications of the DQP, the DQP necessarily implies embedded assessment of student learning over time in the form of well-crafted assignments. Ewell’s overview is worth reading for a grounded understanding of assessment and the DQP, as is Appendix C of the DQP on assignments and assessments. The NILOA assignment library of DQP-aligned assignments grew out of this focus and from feedback from the field asking for examples of assignments in use. The online searchable repository of these assignments is available at assignmentlibrary.org. Appendix F of the Roadmap includes additional information on signature assignments employed by several DQP participating institutions. A signature assignment is a task, problem, case, or project that can be tailored or contextualized for different disciplines or course contexts and that provides information on students’ integration and application of learning at distinct levels within the curriculum. Still further information on assessment may be found at the NILOA website: http://learningoutcomesassessment.org/