Peer Review

Peer review of teaching is a familiar component of teaching evaluation for most educators, and a long-standing scholarly consensus establishes that it is an essential component to include in summative reviews for personnel decisions such as annual reviews, promotion and tenure (Linse, 2017; AERA, 2013; Weller, 2009; Chism, 1999; Arreola, 1995; Centra, 1993; Miller, 1987). 

More importantly, peer review of teaching is valuable not only for fostering collegiality and collaboration among educators (Fletcher, 2018) but also for improving student learning (Hutchings, 1996).

Previous MU campus and UM System task forces have specifically highlighted the importance of both summative and formative peer teaching reviews. The 2014 Teaching Scholarship Task Force Report observed:

The Task Force strongly believes that peer evaluation of teaching is critical to both formative and summative processes. . . . Task Force members stressed that peer-review of teaching should be comprehensive (e.g., more than a single classroom visit). The development of a Review Guideline, Checklist, or Rubric is strongly endorsed by the Task Force to encourage less variation in evaluation reports (to the teaching faculty and supervisors). This is particularly important in summative use of such evaluations. 

Subsequently, the 2018 UM Intercampus Faculty Council task force recommended:

Peer review of teaching is composed of two activities: peer observation of in-class teaching performance and peer review of the written documents used in a course. Both forms of peer review should be included in a comprehensive system, where possible.

Given the importance of the peer review of teaching for both the educator’s professional development and the institution’s personnel decisions, TFELT designed and tested tools and a procedure for peer review informed both by relevant scholarship and by conversations with focus groups and workshop participants during the two years of TFELT’s work.

TFELT’s Peer Review System and Tools

The TFELT recommendations for peer review draw from a wide range of research scholarship on teaching evaluation in higher education. Fernandez and Yu (2007) identify several important elements of effective peer review of teaching:

  • The process should be well organized;
  • The process should be part of a broader faculty development program aimed at improving teaching;
  • The process should focus especially on aspects of course content and pedagogical design that students are not able to helpfully assess;
  • The process should include review of course materials as well as observation of teaching in the learning environment;
  • The process should involve conversation at the onset to share goals and priorities, and after the observation to discuss observer feedback and address areas not covered in the observation;
  • The process should involve standards for evaluation and a “systematic process and documentation” (p. 157) that minimize inconsistencies and idiosyncrasies between reviewers.

TFELT’s system of peer review of teaching is informed by Fernandez and Yu’s (2007) recommendations as well as other research-tested best practices in the literature. This system involves both a protocol for the peer review process and instruments for facilitating peer observations and dialogue.

The Peer Review Protocol

Following a model recommended by Fernandez and Yu (2007), among others, TFELT recommended a protocol for peer review of teaching designed to facilitate dialogue between educators for providing feedback and facilitating goal-setting, both with the aim of promoting continuing development in teaching.

This protocol consists of a three-step process:

  1. an Orientation conversation between the peer reviewer and the instructor being reviewed;
  2. Observation of teaching in the reviewee’s classroom or other learning environment (e.g., lab, studio, clinic, asynchronous online materials and interactions, etc.). Teaching observation also includes a review of the instructor’s course materials (e.g., syllabus, Canvas site, assessments, etc.);
  3. a Debrief conversation between the peer reviewer and the instructor being reviewed.

The process and substance of these steps are largely the same for formative and summative reviews, so that the formative assessment process prepares educators for the expectations of future summative evaluations. How each step is carried out is modified according to whether the review is being conducted as a formative or summative assessment of teaching.

Peer Review Tools

To provide effective structure, consistency, and support for peer reviews, TFELT designed a Formative Peer Review Checklist and a Summative Peer Review Rubric. It is widely accepted in educational research on teaching evaluation that formal observation protocols such as rubrics are excellent tools for assessing instruction and identifying areas for ongoing professional development. In 2013, the American Educational Research Association (AERA) published a report recommending the adoption of such protocols for peer review of teaching (see also Fernandez and Yu, 2007).

The TFELT peer review instruments were created using a combination of literature on evidence-based teaching practices, similar instruments from peer institutions, and priorities for Inclusive and Effective Teaching developed during workshops including faculty, administrators, and graduate and undergraduate students. Drafts of the instruments were reviewed by departments representing a wide range of academic disciplines in Fall 2020, then piloted and revised during the 2020-2021 academic year.

The TFELT peer review tools are forms used by the reviewer during the three steps of the peer review protocol. The criteria included in Part 1 of each instrument are aligned to the four Dimensions of Effective and Inclusive Teaching to ensure that faculty are evaluated on strategies that align with campus and evidence-based priorities. Within each Dimension category are sub-categories to provide the peer reviewer, the reviewee, and subsequent faculty reviewers with clarity and consistency of expectations during the review process:

  • Learning Climate
  • Communication
  • Receptiveness to Student Needs
  • Communication of Learning Objectives
  • Preparation and Presentation
  • Knowledge of Subject, Content, and Discipline-Specific Language
  • Contextual Relevance and Transferability
  • Appropriate Lesson Content or Level
  • Active Learning
Part 2 of each instrument involves review of course materials (e.g., syllabi, course Canvas site, assessments, and learning materials). Most items in Part 2 are also categorized based on the Dimensions of Inclusive and Effective Teaching. An additional section assesses how course materials communicate course policies and expectations in a manner similar to the Quality Course Review process used by Missouri Online (e.g., Canvas navigation support; explanation of grading system and policies; required course material and need for technology; weekly semester course plan).

In addition, specific items in both instruments, throughout all categories, include an “IDE” tag to indicate areas relevant to an assessment of teaching practices with regard to inclusion, diversity and equity.

Finally, space is provided for written observations and feedback from the reviewer.

Formative Peer Review: Observation for Improvement

Click here for a copy of the Formative Peer Review Checklist form.

Click here for a Formative Peer Review Workflow sheet that sums up and helps reviewers organize the process.

Because formative assessment is intended for an individual educator’s professional growth and development, rather than for institutional evaluation of merit, the three-step protocol and the Formative Peer Review Checklist are designed for use in a confidential dialogue between peers. This is a diagnostic process to identify and discuss potential strengths and challenge areas in teaching, set goals for future improvement, and otherwise share perceptions and suggestions regarding Inclusive and Effective Teaching. 

Departments may report the fact of the reviewed educator’s participation in formative peer review as part of summative review processes. Otherwise, unless a reviewee intentionally elects to share the substance of formative observations and feedback as part of their Teaching Self-Reflection for annual reviews or dossier materials, the contents and discussion of any formative peer reviews may not be used as part of a summative teaching evaluation process in any way. 

The three-step protocol for formative peer review reflects the intended purpose of such a confidential, collegial dialogue:

1. The Orientation conversation should place emphasis on familiarizing the peer reviewer with the course and its goals, familiarizing the reviewee with the review instrument, and identifying areas of teaching for which observation and feedback should be especially prioritized.

2. During the Observation, the peer reviewer examines course materials and observes teaching practice using the Formative Peer Review Checklist, checking one of three possible responses:

  • Discussed
  • Observed
  • Not Relevant

“Observed” indicates that the reviewer was able to observe and draw conclusions about a teaching area while reviewing materials or during the lesson observation. No determination of the quality of the practice is noted on the items, although reviewers may provide the reviewee with feedback that addresses observed strengths and challenge areas in teaching practice.

“Discussed” indicates that the peer reviewer and reviewee engaged in specific conversation regarding that particular item. This may occur for a number of reasons: the item may involve a priority area of feedback that is discussed in advance and/or after the observation, or the item may be something that was not observed during the particular lesson attended by the reviewer. In such cases, conversation between the parties can illuminate and clarify important aspects of teaching that simply were not present at that time.

A single class lesson is highly unlikely to address every possible item on the instrument. Some might not be relevant for that particular class session. Some (e.g., instructor feedback, availability for office hours, etc.) cannot be adequately observed during the classroom observation itself. Under such circumstances, discussion between the parties is especially important.

The Not Relevant item is intended to identify areas that might not be a fair or applicable expectation for teaching practice in a given lesson, course, or discipline. These “not relevant” observations should be rare – the TFELT instrument was designed expressly to accommodate the widest range of possible disciplines and course modalities. They are, however, possible and appropriate (e.g., addressing student peer interactions or using wait time when asking questions during a self-paced online course).

3. In the Debrief conversation, clarifying gaps in the observation and answering questions posed by both parties is important. Explaining written feedback and additional observations regarding teaching strengths and challenge areas is typically covered during this conversation. This Debrief should also include an opportunity to brainstorm future goals for teaching improvement, as well as to identify first steps for engaging available resources to help reach those goals. The reviewee should be provided with a copy of the Formative Peer Review Checklist during this Debrief conversation.

Summative Peer Review: Observation for Evaluation

Click here for a copy of the Summative Peer Review Rubric form.

Click here for a Summative Peer Review Workflow sheet that sums up and helps reviewers organize the process.

Given the significance of the summative peer review as part of the process of institutional teaching evaluation, the three-step protocol and the Summative Peer Review Rubric follow the same overall process as formative review. This provides educators with transparency and familiarity of expectations so that participant stress is mitigated and the focus can be placed appropriately on the dialogue between reviewer and reviewee.

1. The Orientation conversation should involve essentially the same goals and points of conversation as in the formative review process.

2. During the Observation, the peer reviewer examines course materials and observes teaching practice using the Summative Peer Review Rubric. Just as in the formative review instrument, the criteria included in the rubric are aligned to the four Dimensions of Effective and Inclusive Teaching. The sub-categories and individual items are also the same as the formative instrument. The only formal difference on the rubric is that, rather than simply indicating whether a particular teaching element was observed, the observer will indicate one of four possible responses:

    • Not Met
    • Developing
    • Proficient
    • Not Observed

“Not Met” indicates that the educator provided no evidence of the presence of the criterion in the item, engaged in a teaching practice likely to work against the goal of that criterion, or both.

For example: the reviewer observes the instructor engaged in lecture for the entire class period, without ever asking students questions or having them complete any activities or other tasks. The reviewer would mark the rubric item “Instructor designs, monitors, and adjusts active learning exercises to ensure everyone is included and on-task” as Not Met.

“Developing” indicates that the educator is on the right track toward successfully implementing the criterion in the item, but room for improved performance is apparent. In other words, they have provided some evidence that they are aware of and practicing the criterion, but the practice may be incomplete or ineffective.

For example: the reviewer observes the instructor engaged in lecture, occasionally asking questions for students to answer. During these moments few hands are raised. Rather than pausing for 5 or more seconds before providing a means for enabling student involvement more broadly, the instructor calls on the same small number of students to answer all questions. The reviewer would mark the rubric item “Instructor appropriately utilizes wait time when asking or prompting for questions and seeks responses from a diversity of students” as Developing.

“Proficient” indicates that the educator meets the criterion in the item successfully, demonstrating the use of an inclusive and effective teaching practice. While the written feedback space provides reviewers an opportunity to comment on the quality of that practice, the rubric does not include the option for an “Excellent” rating. This is due to TFELT’s recommendation to establish “Inclusive and Effective Teaching,” rather than “excellent teaching,” as the primary benchmark for evaluating teaching.

For example: the reviewer observes the instructor’s lesson organized as a series of 10-to-15 minute segments of lecture, each followed by a structured discussion or small group student activity of 5-to-15 minutes that engages the material just presented. The instructor consistently checks near the end of each activity whether students have had adequate time to complete the task, and adjusts the length of subsequent activities somewhat if necessary. The reviewer would mark the rubric item “Learning material and activities are chunked into sections to help students ‘digest’ the material more easily and accommodate a diversity of working speeds” as Proficient.

“Not Observed” indicates criteria areas that might be part of the reviewee’s teaching but were not present during the particular lesson observation. Reasons for this response are similar to those described for the “Discussed” response in the Formative Peer Review Checklist above. These items should be discussed as part of the post-observation conversation, and may be amended by the reviewer after that conversation.

For example: the reviewer observes the instructor administering a quiz or collecting the products of a small group activity. The reviewer cannot directly evaluate the item “Assessments (formative and summative) give students feedback on their achievements of the learning objectives,” because they are unable to observe how the instructor actually provides feedback on such assessments to students. In this case, the reviewer would first mark the rubric item as Not Observed. The reviewer would then ask the instructor about this assessment during the Debrief conversation. Based on that conversation, the reviewer can then change the “Not Observed” to another response.

3. The Debrief conversation should involve essentially the same conversation objectives as in the formative review process. Discussing future goals for teaching development and improvement is not required (although it might be included, depending on mutual preferences or contextual circumstances).

There is one significant difference from the formative assessment process at the end, however: following this Debrief, the reviewer will provide a copy of their evaluation to the reviewee’s supervisor as well as to the reviewee.

A separate letter written by the reviewer is not part of TFELT’s recommended process. Reviewers who value narrative feedback may provide the same prose in the instrument’s open text fields.

Resources for Using Peer Review of Teaching

The MU Teaching for Learning Center has created two mini-courses on Canvas, one for each type of peer review, to help educators on campus learn how to approach the peer review of teaching process. These mini-courses provide additional explanation and description of such matters as:

  • Expectations for both reviewers and reviewees
  • Reading and using the Formative Peer Review Checklist and the Summative Peer Review Rubric
  • Examples for distinguishing among levels of proficiency for each Dimension of Inclusive and Effective Teaching
  • Goals and suggested practices for Orientation and Debrief conversations
  • Suggestions for writing the summary components of the instruments for the peer review

These mini-courses are part of a series developed to help members of the campus community better understand and use TFELT’s recommendations for teaching evaluation. 

Click here to access these Canvas mini-courses:

Additional resources in this area may be found on the “Resources” page of this website.

References:

American Educational Research Association. (2014). Rethinking faculty evaluation: AERA report and recommendations on evaluating education research, scholarship, and teaching in postsecondary education. AERA. https://www.aera.net/Portals/38/docs/Education_Research_and_Research_Policy/RethinkingFacultyEval_R4.pdf

Arreola, R. A. (2007). Developing a comprehensive faculty evaluation system (3rd ed.). Anker.

Centra, J. A. (1993). Reflective faculty evaluation: Enhancing teaching and determining faculty effectiveness. Jossey-Bass.

Chism, N. (1999). Peer review of teaching: A sourcebook. Anker.

Fernandez, C. E., & Yu, J. (2007). Peer review of teaching. Journal of Chiropractic Education, 21(2), 154-161.

Fletcher, J. A. (2018). Peer observation of teaching: A practical tool in higher education. Journal of Faculty Development, 32(1), 51-64.

Hutchings, P. (1996). Making teaching community property: A menu for peer collaboration and peer review. American Association for Higher Education.

Linse, A. R. (2017). Interpreting and using student ratings data: Guidance for faculty serving as administrators and on evaluation committees. Studies in Educational Evaluation, 54, 94-106.

Miller, R. I. (1987). Evaluating faculty for promotion and tenure. Jossey-Bass.

Weller, S. (2009). What does “peer” mean in teaching observation for the professional development of higher education lecturers? International Journal of Teaching and Learning in Higher Education, 21(1), 25-35.