Assessing Student Learning Outcomes

Academic Program Assessment and Student Learning Outcomes

The University of Pittsburgh is committed to a culture of continuous improvement in our academic programs. We do this through the systematic assessment of all of our academic programs by:

  • Clearly stating the specific and most important student learning outcomes for the program,
  • Designing an assessment process to gain insight into all students’ achievement of those learning outcomes,
  • Measuring the learning outcomes annually,
  • Analyzing the measurements of student learning outcomes, and
  • Using the results of those assessments to make program improvements.

The continuous process of assessing student learning outcomes aligns with the Middle States Commission on Higher Education’s Standards for Accreditation, specifically Standard V - Educational Effectiveness Assessment, which states that the “assessment of student learning and achievement demonstrates that the institution’s students have accomplished educational goals consistent with their program of study, degree level, the institution’s mission, and appropriate expectations for institutions of higher education.”

Assessing student learning outcomes for program assessment and continuous improvement differs from assessing individual students’ performance. Rather than examining course grades or program completion, program-level assessment draws on data sources that cut across individual courses, such as capstone projects, and that provide detailed, useful information about students’ achievement of expected learning outcomes. The goal is to gauge the educational effectiveness of the program, not of individual students.

This continuous process provides faculty, staff, and students with a clear understanding of the program’s mission, the expected learning goals of the program, how the curriculum supports those learning goals, and how all students are performing against those goals. The process allows programs to make data-informed decisions about revising the curriculum, assignments, student support, and other aspects of the academic program where needed.

At the University of Pittsburgh, each academic program’s process for assessing student learning outcomes is recorded on a Student Learning Outcomes Matrix document.

Student Learning Outcome Goals

Each school’s and campus’s goals for student learning outcomes should be consistent with the University’s goals for all our graduates—that our students be able to:

  • Think critically and analytically.
  • Gather and evaluate information effectively and appropriately.
  • Understand and be able to apply basic scientific and quantitative reasoning.
  • Communicate clearly and effectively.
  • Use information technology appropriate to their discipline.
  • Exhibit mastery of their discipline.
  • Understand and appreciate diverse cultures (both locally and internationally).
  • Work effectively with others.
  • Have a sense of self, responsibility to others, and connectedness to the University.

Assessment Process

For each program and for their general education curriculum, schools and regional campuses should document the following components of the assessment process on a Student Learning Outcomes Matrix: the program’s mission and goals; three to five student learning outcomes; the methods of assessment; the standards of comparison; the interpretation of results; and the use of results for an action plan.

The following people and units are responsible for specific parts of the assessment process:

  • Program faculty are responsible for the development and administration of the assessment processes of individual programs in accordance with the appropriate programmatic or departmental governance structure.
  • Department chairs are responsible for coordinating the assessment process for departmentally based programs. Deans, directors, and campus presidents are responsible for coordinating the assessment process for school- and campus-based programs.
  • Schools and regional campuses are responsible for developing internal procedures for documenting program assessment.
  • Deans, directors, and campus presidents are to report annually to the provost on their school’s or campus’s assessment activities and relevant results as part of their planning process.

Note: Programs may request permission to substitute a professional accreditation process as the assessment protocol by showing how that professional accreditation process maps onto the institutional framework for assessment.

Guidelines for Completing the Student Learning Outcomes Assessment Process in the Student Learning Outcomes Matrix

1. Program or School Mission Statement and Goals 

The program’s mission states the broad goals that are of value to the program and what the program aims to provide its students. These goals should be consistent with the goals of the school, campus, and University. The program’s mission statement provides the framework for assessing other components of the program.

2. Student Learning Outcomes

Student learning outcomes answer the question “What will students know and be able to do when they complete the program?” and should be clear, concise, and specific to the discipline. Program-level learning outcomes should span individual courses.

Consider using action verbs such as “identify,” “analyze,” and “explain” when defining learning outcomes so that what will be assessed is clear.

There are typically three to five learning outcomes for a major and two to three for a minor, ARCO, certificate, or microcredential.

Strong examples:

  • Students will demonstrate a solid working knowledge of the basic principles and vocabulary of micro- and macroeconomics.
  • Students will be able to effectively communicate mathematical research knowledge and teach mathematics at a university level.
  • Students will be able to demonstrate mastery of clinical performance skills to provide safe, competent care.

Common Oversights:

  • Student learning outcomes are not specific to the program and are more reflective of general education outcomes.  
    • Outcomes such as “think critically and analytically,” “speak and write clearly,” and “gather and evaluate data” are all important student outcomes, but they are outcomes that might apply to the goals of the general education curriculum or the institution’s goals for its students. Programs should determine their own specific student learning outcomes that are germane to the discipline. For example, a history program may have a learning outcome that states, “Students will acquire knowledge about historical processes in a specific region of the world.” A biological sciences program might have an outcome that states, “Students should be able to work collaboratively in a laboratory setting,” while a physics program may state, “Students should be able to design and set up an experiment, collect and analyze data, interpret results, and connect them to related areas in physics.” Similarly, if one of the most important skills you want your students to learn from your program is how to communicate effectively, that outcome should be framed in a way that reflects the discipline itself. The professions, the sciences, and the humanities all have their own styles and methods for transmitting knowledge that are appropriate to the discipline.
       
  • Learning outcomes outline the requirements for graduation and not what students should know or be able to do after they complete the program.
    • Student learning outcomes are the specific skills, knowledge, and abilities that students can demonstrate upon completion of an academic program. Learning outcomes should reflect the product rather than the process. Instead of stating that students will write a dissertation as a learning goal, programs might instead state that students who complete the program will be able to successfully engage in original and creative research that has an impact on the discipline. Such a learning outcome might be assessed by tabulating the number of peer-reviewed publications graduates of the program have at specific intervals (by graduation, three years out, five years out, etc.), or it might be assessed every five years by a team of faculty who, with set criteria in mind, review a sample of dissertations.
       
  • Courses are used as a method of assessment with no external validation, and course grades or GPAs are used as a standard for assessment.
    • It is not sufficient to state a grade or overall GPA as a measurement of a student learning outcome. A grade is a multifaceted measure of many different activities that occur in a course, all collapsed into a single letter or number; a letter grade by itself therefore does not give enough information about the learning that was measured or the criteria that were used to measure it. Similarly, a GPA is a measurement of the many different courses that make up a student’s curriculum and cannot be mapped back to a specific learning outcome.

3. Assessment Methods

In the Assessment Methods section of the Student Learning Outcomes Matrix, you are asked to answer the following five questions.

1. What data source will be used to measure this learning outcome? 

Data sources should be strategically selected to best assess the specific learning outcomes. The program can use (or adapt) data sources already in place, such as papers, projects, student surveys, capstone or milestone projects, licensure and professional exams, exit interviews, and job/graduate school placement data, rather than creating entirely new measures. Keep in mind, however, that to assess student learning at the program level, it might be best to assess artifacts from multiple courses to measure a single learning goal.

When assessing artifacts from multiple courses or large numbers of students, consider randomly selecting a subset of artifacts to assess so that the process is not overly time-consuming. Additionally, be sure that the same assessment methods can be used for all data sources used to measure a single learning goal.
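
If artifact lists are kept electronically, the random draw itself is easy to script. Below is a minimal sketch in Python; the artifact names, list size, and sample size are illustrative assumptions, not part of any prescribed process.

```python
import random

# Hypothetical identifiers for collected artifacts (e.g., capstone
# papers exported from the learning management system).
artifacts = [f"capstone_{i:03d}.pdf" for i in range(1, 121)]

SAMPLE_SIZE = 20  # illustrative; set per the program's assessment plan

# Seed the generator so the draw is reproducible and reviewers can
# reconstruct exactly which artifacts were selected.
random.seed(2024)
sample = random.sample(artifacts, SAMPLE_SIZE)

for name in sorted(sample):
    print(name)
```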

Finally, it is important not only to measure students’ perception of how much they have learned in each program, but also to identify measurements that will show if students can demonstrate the learning goals the program has set.

2. How will the data source be assessed?

For some data sources, such as job placements, student surveys, or number of academic papers published, assessment may be based on descriptive statistics, such as the percentage of students who were co-authors on two or more academic papers. However, most data sources—such as papers and projects—will require that the program faculty create a rubric to identify and assess the extent to which each student achieved the learning goal through their performance.
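
Tabulating a descriptive statistic of this kind is straightforward once the records are assembled. Below is a minimal Python sketch; the student IDs, publication counts, and threshold are hypothetical and used only for illustration.

```python
# Hypothetical records: peer-reviewed papers co-authored by each
# student, keyed by an anonymized student ID.
papers_per_student = {"s01": 0, "s02": 2, "s03": 1, "s04": 3, "s05": 2}

THRESHOLD = 2  # outcome: co-author on two or more academic papers

# Count how many students meet the threshold and compute the share.
meeting = sum(1 for n in papers_per_student.values() if n >= THRESHOLD)
share = meeting / len(papers_per_student)

print(f"{share:.0%} of students co-authored {THRESHOLD} or more papers")
# -> 60% of students co-authored 2 or more papers
```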

3. What group/subgroup of students will be assessed?

Once an appropriate method of assessment is identified, the faculty should decide who will be assessed, i.e., what sample of students. In some cases, particularly when reviewing student papers, theses, or dissertations, it makes sense to use only a small sample of students, such as 10%. In other cases, for example when placement records of graduate students are used, 100% of the students might be assessed.

4. Who will do the assessment?

To assess academic artifacts, many programs choose to use a team of three faculty members who review a sample of student products to assess how well students are meeting the standard set by the faculty for specific learning outcomes.

This approach need not be time-consuming, as only a sample of papers, projects, or presentations would need to be reviewed every three to five years.

5. When will the assessment be conducted (Year 1, 2, or 3 of assessment cycle)?

Student learning outcomes need only be assessed every three to five years. To keep assessment manageable, programs should determine an assessment timetable that results in meaningful data without causing undue burden to the faculty.

A schedule that takes the entire plan into consideration can and should result in only a small time commitment on the part of the program’s faculty or staff each year. 

Examples:

  • A committee of three faculty members will review the course papers of a sample of 20 students from across the core courses every two years.
  • A three-member faculty committee will conduct exit interviews with a sample of [15] graduating seniors during their last semester. The assessment takes place every three years.
  • Faculty will evaluate a sample of 20 student assignments from the writing-intensive course using a standardized scoring rubric that will be developed by the Undergraduate Curriculum Committee. This exercise will be completed each year.

Common Oversights:

  • Assessment methods are not tailored to specific learning goals.
    • Assessment methods should be strategically selected to best assess the specific learning outcomes. Not all assessment methods are well suited to every learning outcome. For example, the ability to master the core concepts of a profession might be best assessed using a licensure or professional certification exam as opposed to a job placement record. A job placement record might be a good indirect assessment measure if the learning outcomes most important to your major are also important to employers. Publications and citations might be a good assessment of graduate students’ ability to conduct independent research. If an assessment plan employs the exact same data source or assessment methods for each learning outcome, revision of the assessment methods is likely needed.
       
  • Assessment methods do not contain information on who will be assessed, when they will be assessed and how often the assessment will take place.
    • Once an appropriate method of assessment is identified, the faculty should decide who will be assessed, i.e., what sample of students. In some cases, particularly when reviewing student papers, theses, or dissertations, it makes sense to use only a small sample of students, such as 10%. In other cases, for example when placement records of graduate students are used, 100% of the students might be assessed. When and how often assessment takes place are also important factors to consider. To keep assessment manageable, programs should determine an assessment timetable that results in meaningful data without causing undue burden to the faculty. Student learning outcomes need only be assessed every three to five years. A schedule that takes the entire plan into consideration can and should result in a reasonable time commitment on the part of the program’s faculty or staff each year.

4. Standards of Comparison

Standards are values set by individual programs that represent the expectation for a given measurable goal. Standards of comparison are determined by the program faculty to provide a benchmark for student achievement; they identify how well students should be able to do on the assessment as well as the percentage of students who should achieve the stated outcome. Standards link directly back to the specific learning outcome, not to a cumulative measure of student achievement such as course grades or GPAs.

Examples: 

  • 100% of evaluated drug log assignments score at or above 80%
     
  • 90% of papers demonstrate that the topic was researched, developed, and presented with a high degree of relevance in terms of policy implications and/or practice applications.
     
  • Within one year of graduation, more than 90% of alumni are employed in a teaching or teaching-related position or enrolled in graduate school; after three years, more than 85% of alumni indicate satisfaction with various elements of their total program and of required courses.
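
A standard phrased like the first example above can be checked mechanically once rubric scores are recorded. The sketch below is a minimal illustration in Python; the scores and both threshold values are hypothetical.

```python
from statistics import mean

# Hypothetical rubric scores (percent) for evaluated drug log assignments.
scores = [92, 85, 88, 79, 95, 81]

MIN_SCORE = 80        # standard: each assignment scores at or above 80%
REQUIRED_SHARE = 1.0  # standard: 100% of assignments must meet that bar

# Share of assignments meeting the minimum score, and whether the
# program's standard of comparison was met overall.
share = sum(s >= MIN_SCORE for s in scores) / len(scores)
met = share >= REQUIRED_SHARE

print(f"{share:.0%} of assignments at or above {MIN_SCORE}% "
      f"(mean {mean(scores):.1f}); standard met: {met}")
# -> 83% of assignments at or above 80% (mean 86.7); standard met: False
```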

Common Oversights:

  • Standards do not identify the percentage of students who should achieve the stated outcome.
    • To assess progress, faculty should determine what percentage of students should meet each learning outcome. Though it may be difficult to determine a percentage in the beginning, programs can adjust their standards as they receive the results of the assessment and get a clearer idea of how their students are achieving the goals set by the program. In some programs, it may be unrealistic to set a standard that 100% of students can meet, for example, having a publication in print by graduation. In other cases, it may be reasonable for 100% of students to meet a given standard, such as successfully applying the theories and methods of the field in a capstone paper.

5. Interpretation of Results

Results are analyzed using the criteria and standards set forth by the program faculty; the analysis answers the question, “What does the data show?”

Examples:

  • 75% of the class scored 80% or higher on the portions of the exam that addressed ethical issues in data science. The mean was 84.6% and the median was 86%. These figures are lower than in prior years.
     
  • Of the 32 students who graduated with a PhD from our department between 2018 and 2021, 16 (50%) have attained full-time, tenure-track positions. This figure surpasses that reported (49%) in the most recent study (2021-2022) of placement to tenure-track conducted by our professional association.
     
  • Eight exams—the totality of those presented since written comprehensive essays were instituted as part of our graduate program reform several years ago—were evaluated. 100% were assessed as demonstrating at least Proficient knowledge on our faculty-designed rubric; only one was judged to be Exceptional.
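
Summary figures like those in the first example can be computed directly from an exported score list. Below is a minimal Python sketch; the scores are hypothetical and serve only to illustrate the calculation.

```python
from statistics import mean, median

# Hypothetical scores (percent) on the exam portions addressing
# ethical issues in data science.
scores = [95, 88, 86, 84, 80, 78, 74, 92, 81, 89]

# Share of the class at or above the 80% mark, plus central tendency.
share = sum(s >= 80 for s in scores) / len(scores)

print(f"{share:.0%} scored 80% or higher; "
      f"mean {mean(scores):.1f}%, median {median(scores):.1f}%")
# -> 80% scored 80% or higher; mean 84.7%, median 85.0%
```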

6. Use of Results/Action Plan

The data resulting from the measurement of student learning outcomes is of little use unless the academic program has a strategic plan for turning those results into program improvements. The individuals and/or committees responsible for using the data are identified in response to the question, “Who reviewed the findings?” To answer “What changes were made after reviewing the results?”, the action plan should describe, with specific steps and a timetable, how the program will address any shortcomings, adjust the curriculum, increase expectations, and/or refine methods.

Examples of use of results/action plan: 

  • The data collected and analyzed by the doctoral committee suggests that Ph.D. students need more opportunities to publish and present research at professional conferences. In response, the doctoral committee plans to take the following actions: 
    • Convene a faculty meeting next year to discuss ways to encourage and mentor doctoral student authorship and presentations of research at conferences. 
    • Develop and distribute guidelines for doctoral advisors that include departmental expectations for doctoral student publications and presentations. 
    • Create more department-level research and writing groups that include doctoral students. 
    • Improve our distribution of information related to professional conferences, including posting upcoming CFPs on the departmental “doctoral student information” bulletin board. 
    • Request that the school’s annual faculty review process include a section noting co-authorship with doctoral students.
    • Make expectations for publishing and presenting clear at the annual doctoral orientation. 
       
  • Although we have attained consistently positive findings on this goal, we are not sure which specific factors, beyond the overall quality of the program, contribute to that achievement. The Program Director and the quality assurance committee will therefore explore this question, beginning with a focus group session with near-graduation students next spring. The information generated by this assessment will help us target, nurture, and capitalize on the contributing factors.
     
  • After reviewing last year’s data, we plan to develop an instrument whereby upper-level courses are standardized in their goals and outcomes for students’ reading proficiency. The learning outcomes for reading comprehension will be expressly integrated into all syllabi, as will a discussion of reading strategies and approaches.

Resources 

This web content is intended to be a resource for those faculty members and administrators who are responsible for the process of assessing student learning in their units.