Essays on Teaching Excellence

Toward the Best in the Academy

A publication of The Professional & Organizational Development Network in Higher Education.

Vol. 18, No. 3, 2006-2007

Incorporating Course-Level Evidence of Student Learning into Program Assessment

Nancy Simpson, Texas A&M University and

Laurel Willingham-McLain, Duquesne University

As a faculty member in a post-secondary educational setting, you have likely encountered the term “student-learning assessment.” With the need for a college-educated workforce increasing, and the cost of education escalating, higher education’s stakeholders are asking for evidence that dollars are well spent and that graduates are prepared to be productive, ethical, problem-solving citizens. Such evidence may be produced through program assessment of student learning and can inform decisions regarding program development or modification. We argue here for the benefits of integrating course-level assessment conducted by faculty members into overall program assessment. We begin by describing assessment principles, and then present a rationale and practical steps for conducting course-embedded program assessment.

Assessment Principles

Assessment of student learning is a process of defining expected learning outcomes, identifying or creating relevant learning experiences, collecting and interpreting evidence of learning, and using this evidence to make decisions intended to improve student learning (Bresciani, 2006; Suskie, 2004; Walvoord, 2004). The following principles characterize sound assessment:

Learning-focused: Clearly articulated learning outcomes describe what we want students to know and be able to do when they complete their academic program.

Meaningful: Assessment is connected to institutional mission and goals, and to values of the discipline. It is integrated into daily teaching and learning.

Transparent: Assessment purposes, processes, and findings are communicated to all involved. There is no hidden agenda. 

Faculty-owned: Faculty design and conduct assessment processes, and place confidence in results.

Systematic: Faculty focus on a few specific outcomes at a time in a cyclical approach so that, over time, a holistic picture of the amount and/or quality of learning emerges.

Useful: Evidence of learning that is gathered and interpreted is directly applicable to highlighting strengths and improving courses and curricula.

Program assessment differs from course-level assessment primarily in when the picture of student learning is taken and in how much influence any individual faculty member has on that learning. When we assess learning in our courses, we want to know what students know and can do with that knowledge by the time they complete the course. By contrast, when we assess student learning at the program level, we typically want to know the cumulative effect of all the courses and activities students have experienced by the time they complete their degree. If program assessment is an add-on, after-the-fact process that makes no use of course-embedded evidence, no single faculty member teaching in the program has a stake in that cumulative view. This apparent disconnect between course-level learning and program assessment can give faculty the sense that program assessment is a burden, an exercise to be completed and then quickly forgotten until the “next time.”

Using course-embedded evidence in program assessment helps to reconnect program assessment with course-level teaching and learning of faculty and students. There are several advantages to this approach:

  • Results from program assessment that draw on this kind of evidence are more clearly connected to daily student learning and provide feedback that can directly improve that learning.
  • Often, needed evidence of student learning already exists.  Why do more work by creating assignments and tests outside of class when it is possible to use assessments already incorporated into courses?
  • Students are more motivated to do their best work within courses where they have a rapport with the professor and a context for their learning.  Course-embedded assessment thus provides a more accurate representation of what students know and can do.
  • Communication among faculty that is required to accomplish program assessment often sheds light on the learning goals and teaching methods of each course, allowing colleagues to examine and learn from each other’s practices. This communication also supports the creation of a curriculum that is an integrated whole, rather than a disjointed series of individual courses.

Program-Level Assessment Steps

1. Reach consensus on learning outcomes. Program-level assessment begins with faculty agreeing on a few overall learning outcomes for their graduates; they identify the major concepts and skills students will need to remember and apply in new contexts. This process is strengthened by seeking input from former students, their employers, postgraduate educational institutions, and other outside audiences. Several excellent resources provide guidance for articulating learning outcomes. See, for example, Fink (2003), Richlin (2006), Suskie (2004), and Walvoord (2004).
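
For example (a hypothetical outcome, not drawn from any particular program): “Graduates will be able to frame a disciplinary question, design a sound investigation of it, and communicate the results in writing to a professional audience.” Note that the statement names observable abilities rather than topics covered.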

2. Identify necessary learning experiences. With the learning outcomes in mind, examine the courses and other experiences that make up the curriculum. To be systematic, it is useful to map desired outcomes onto courses so that each outcome is introduced and reinforced at appropriate times and in an appropriate sequence. This process gives departmental faculty an opportunity to verify that their assumptions about prerequisite knowledge are reasonable and that learning goals that are not course-specific, such as problem-solving ability or communication skills, are in fact being addressed throughout the program’s curriculum. Sample matrices and charts that provide a framework for this part of the program assessment process may be found in Diamond (1998), Maki (2004), and Walvoord (2004).
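
A curriculum map need not be elaborate. The following sketch is purely hypothetical (the course numbers and outcomes are invented for illustration; I = introduced, R = reinforced, M = mastery expected):

Outcome                      101   205   310   490 (capstone)
Disciplinary knowledge        I     R     R     M
Written communication         I     R           M
Problem solving                     I     R     M

Reading across a row shows where each outcome is developed over time; reading down a column shows what a single course contributes to the program.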

3. Determine what will count as evidence of learning; collect and interpret the data. For each major learning outcome, choose multiple sources of evidence that faculty in your field find convincing. Direct methods examine actual student performance to determine the extent to which students have met the learning goals (e.g., written assignments, performances, presentations, quality of field work, tests). Indirect methods examine perspectives on the learning process (e.g., student self-appraisals, satisfaction surveys, focus groups). Both can employ qualitative and/or quantitative approaches, and together they provide a fuller picture (Suskie, 2004, pp. 95-97).

If you are assessing an existing program, begin by determining the kinds of evidence already available through course work. Walvoord (2004) provides a chart for identifying course-level assessment usable for program assessment (p. 125). Often, samples of student work already being produced for a particular course can be reexamined from a program perspective. In other cases, faculty can add a question to an exam or create an assignment that works well to assess learning in the course and can later be used for program-level assessment. Examples of course-embedded evidence of learning include the following:

a. Capstone Courses – Many programs have a culminating course in which students create a learning portfolio or complete a complex project. These synthesis projects require students to demonstrate what they have learned throughout the major. In the absence of a capstone course, synthesizing assignments can be incorporated into upper-level courses.

b. Selected Writing Samples – Faculty select writing samples from across courses to look for evidence of program-wide goals (e.g., critical thinking, professional communication, proper citation, disciplinary writing and research skills). Using an agreed-upon scoring guide (a hypothetical sketch follows this list), a group of faculty evaluates the sample papers (Huba & Freed, 2000, p. 151; Suskie, 2004, p. 123). Clean, unmarked copies of the papers, with student and faculty names removed, are used.

c. Common Exam Items – Faculty agree on common test items or design tests that provide course- and program-level evidence of learning simultaneously (e.g., mastery of specific content and general evidence of critical thinking). Instructors grade the items for the course, and two to three faculty members evaluate selected items for the program.

d. Reflective Writing or Discussion – Reflection questions require students to examine their knowledge, academic skill development, personal learning goals and success, or their learning styles.  For example, a learning community at Duquesne University asks students to reflect on six topics by drawing connections across three courses.  Each instructor incorporates the reflective writing into the course grade, and at the same time the faculty as a whole examine the success of their learning community through these reflections.

e. Questionnaires/Guided Discussion – Students may be asked to complete a questionnaire relevant to program outcomes and articulate, for example, what experiences most promote their learning.  Administering this in class both promotes reflection on learning and provides feedback on the program.
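
A scoring guide of the kind mentioned in item (b) can be quite simple. The following sketch is purely illustrative; the dimensions and scale are invented here, and fuller models appear in Huba & Freed (2000) and Suskie (2004). Faculty might rate each paper from 1 (weak) to 4 (strong) on a few agreed-upon dimensions:

Thesis and argument            1  2  3  4
Use and citation of evidence   1  2  3  4
Organization and clarity       1  2  3  4
Disciplinary conventions       1  2  3  4

Averaging ratings across readers and papers then yields a program-level picture of strengths and gaps on each dimension.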

Once the evidence has been gathered and the resulting data analyzed, faculty meet to discuss specific action to take in response to the results. During this step, the value of a carefully created curriculum “map” (see step 2) becomes clear. Where assessment findings indicate a high degree of achievement, it is possible to trace back to the learning experiences that facilitated this achievement and to celebrate and learn from this success. Where findings indicate gaps or low achievement, faculty identify assignments to modify or supplement. The most important principle to remember is that assessment only works when faculty use the findings to continually enhance learning. 

Done well, assessment increases our confidence that we are putting resources into activities that result in valuable learning, and it allows us to communicate meaningfully and credibly with stakeholders. Course-embedded assessment, in particular, offers many advantages: it provides feedback to individual students and teachers, and it gives academic programs an efficient source of evidence for improving and celebrating the overall quality of their students’ learning.

References and Resources

Bresciani, M. J. (2006). Outcomes-based academic and co-curricular program review: A compilation of institutional good practices. Sterling, VA: Stylus Publishing.  

Diamond, R. M.  (1998). Designing and assessing courses and curricula: A practical guide.  San Francisco, CA: Jossey-Bass. 

Fink, L. D.  (2003). Creating significant learning experiences: An integrated approach to designing college courses.  San Francisco, CA: Jossey-Bass. 

Huba, M. E., & Freed, J. E.  (2000). Learner-centered assessment on college campuses: Shifting the focus from teaching to learning.  Boston, MA: Allyn and Bacon.

Maki, P. L. (2004). Assessing for learning: Building a sustainable commitment across the institution. Sterling, VA: Stylus Publishing.

Richlin, L. (2006). Blueprint for learning: Constructing college courses to facilitate, assess, and document learning. Sterling, VA: Stylus Publishing.

Suskie, L.  (2004). Assessing student learning: A common sense guide.  Bolton, MA: Anker. 

Walvoord, B. E.  (2004). Assessment clear and simple.  San Francisco, CA: Jossey-Bass.

 

Nancy Simpson (Ph.D., Texas A&M University) is Director of the Center for Teaching Excellence at Texas A&M University. Laurel Willingham-McLain (Ph.D., Indiana University) is Director of the Center for Teaching at Duquesne University and coordinator of academic learning outcomes assessment.


Essays on Teaching Excellence Editor:

Elizabeth O’Connor Chandler, Director
Center for Teaching & Learning
University of Chicago
echandle@uchicago.edu

Subscriptions: Member, $100, or nonmember $120 annually (campus-wide reproduction rights); Individual, $15 annually.  Teaching Excellence is published eight times annually.  To order, send check or P.O. to POD Network at the address below, or contact us for further information.

The POD Network facilitates the exchange of information and ideas, the development of professional skills, the exploration and debate of educational issues, and the sharing of expertise and resources.  For further information, contact:

POD Network in Higher Education
P. O. Box 3318
Nederland, CO 80466 U. S. A.
303.258.9521 (voice)
303.258.7377 (fax)
podnetwork@podweb.org

The posting and use of this publication on an institution's WWW server is covered by an End User License Agreement (EULA). By the terms of this agreement, the essay must be located on a secure server that limits access to your institution's students, faculty, or staff. Use beyond your institutional boundaries, including the linking of this page to any site other than this one, requires the express permission of the POD Network.