Student Feedback

Student feedback on teaching has long been a mainstay of teaching evaluation at colleges and universities. The research literature has frequently touted its value as a means to identify student concerns, accurately measure relevant variables of teaching quality, and improve teaching as a result (Marsh, 2007; Spooren et al., 2013). Just as frequently, however, it has identified the limitations and challenges of student feedback as a source of useful evidence of actual teaching effectiveness, since results can be influenced by variables irrelevant to teaching (Goos & Salomons, 2017; MacNell et al., 2015).

In any case, student learning is at the core of our teaching mission at MU, and students have much of value to tell us about their experiences and perceptions as learners in the classroom. For this reason, TFELT examined both the available scholarship on the use of student feedback and current campus practices for its collection, reporting, and use. The TFELT report subsequently made a number of recommendations aimed at maximizing the benefits and mitigating the challenges of using student feedback in the evaluation of teaching.

Formative Student Feedback: Options for Collection and Use

Ample research literature supports TFELT’s recommendation that students be given opportunities to provide instructors with formative feedback before the end of the semester; doing so yields a number of important benefits.

First and foremost, collecting formative student feedback during the midterm period, or perhaps even earlier, can give instructors helpful information about what is and is not working for students, in a form the instructor can respond to immediately (Diamond, 2004). This use of formative feedback has been shown consistently to help instructors improve their skills and develop their perspectives regarding effective teaching for learning (Gunn, 2014; Weimer, 2016), particularly when they are motivated to improve (Yao & Grady, 2005).

In addition, collecting formative student feedback and discussing the results with students demonstrates that the instructor values student feedback as a resource for making the course better (Caulfield, 2007). Opportunities to provide formative feedback on teaching give students greater motivation to learn (Redmond, 1982), a stronger sense of ownership in the course and improved experience in providing effective summative feedback (Crews & Curtis, 2011), and improved academic performance (Wickramasinghe & Timpson, 2006). These benefits, in turn, have been shown to be positively related to improved response rates in end-of-semester summative student feedback surveys (Ravenscroft & Enyeart, 2009; Crews & Curtis, 2011; Chapman & Joines, 2017), as well as to improved student feedback results for instructors.

For these reasons, the TFELT report recommended the adoption of frequent formative student feedback collection by MU educators. While many methods of collecting formative student feedback are available (and simple to implement), TFELT highlights four in particular for consideration.

First, the MoCAT (Missouri Cares About Teaching) mid-semester course student feedback survey is supported by the MU Assessment Resource Center and encouraged by the MU Teaching for Learning Center. The MoCAT survey can be deployed at any point during the semester, and it comes in different forms for different types of courses. For example, instructors of online/hybrid courses can administer a form with questions about technology. In addition, there are forms customized for art/music/theater, labs, seminars, standard lectures, and writing intensive courses. Instructors also have the option to add customized questions of particular interest to them. Student responses are anonymous, aggregated, and provided only to the instructor who launches the survey. Efforts are currently underway to revise the MoCAT survey so that it aligns with the new end-of-semester summative student feedback survey launching in Fall 2023 (see below).

Another recommended option for collecting formative student feedback is small group analysis, also called small group feedback sessions or small group instructional diagnosis (Redmond, 1982). This is a method employed mid-semester to gather qualitative data from students. Although there are variations in the approach, it generally involves someone other than the course instructor interviewing students about their learning experience. The interview data are then aggregated and provided to the instructor, who can modify the remainder of the course as desired or needed. Research on this method demonstrates a positive relationship with improved student perceptions of the learning environment, better understanding of assessments and course preparation, greater motivation to excel, and positive changes in learning behaviors (Hurney et al., 2014). The Vanderbilt University Center for Teaching (2023) provides a description of how this approach may be used.

An approach to formative teaching feedback that may be implemented at the end of a course is the Student Assessment of their Learning Gains (SALG) instrument. This survey asks students to assess and report on their own learning. While the SALG should not be used as reliable evidence of actual student learning outcomes, it is useful for instructors who want to review this informal end-of-course feedback when considering a redesign of the course for future semesters based on student perceptions. Although instructors can customize the survey to fit their own learning environments, the SALG focuses on five overarching questions: which aspects of the course helped student learning, and self-reported gains in understanding course content, developing relevant skills, changing attitudes, and integrating information.

Finally, the MU Teaching for Learning Center is developing a program for student consulting to provide formative feedback to instructors. The Students as Partners approach (Cook-Sather, Bovill, & Felten, 2014), also known as Student Pedagogical Teams (Hayward et al., 2018), employs undergraduate students as “course consultants” to attend classes in which they are not enrolled in order to share their perspectives with instructors. These pedagogical partnerships between students and faculty have the potential to strengthen student learning and growth by fostering democratic and inclusive dialogue and dynamics (Cook-Sather, Bahti, & Ntem, 2019). In addition, this method of formative feedback has been shown to provide an effective basis for faculty self-reflection on teaching that can “evaluate the efficacy of teaching strategies and challenge existing preconceptions about how students view instruction” (Hayward et al., 2018, p. 44).

Beyond these four possibilities lie myriad opportunities for collecting formative student feedback on teaching during a course. One simple but effective method for collecting open-ended formative feedback is often called “Start-Stop-Continue” (Danley, 2019): students are asked to tell the instructor one thing the instructor should start doing in the course, one thing the instructor should stop doing, and one thing that is working well and should continue. Brief explanations of the rationale for each suggestion can help the instructor process the feedback and discuss it with the class shortly after collection.

The University of Washington’s Teaching@UW center (2023) suggests using short, frequent classroom assessment techniques (CATs) such as minute papers, directed paraphrasing, and student-generated test questions as mechanisms to “monitor how well students are meeting learning objectives and processing course content.” Stanford University’s Teaching Commons (n.d.) recommends using Canvas, Qualtrics, or apps like Poll Everywhere to administer surveys for formative feedback.

Whatever the method used for collecting formative feedback on teaching from students, it is important for educators to examine and respond to the feedback as quickly as possible, both to identify and implement changes that can improve the course in progress and to open a productive, transparent dialogue with students regarding the value of actionable feedback.

Summative Student Feedback: The End-of-Semester Survey

Because of the importance of students’ experiences in and perceptions of the learning environments we provide for them, student feedback on teaching continues to be one of the three key sources of data used in the evaluation of Inclusive and Effective Teaching. However, the available scholarship in this area agrees that the use of summative student feedback for assessing teaching has inherent limitations and challenges that must be addressed in order to use it effectively.

TFELT’s three primary recommendations for improving the use of summative student feedback included:

  1. Development of a new student feedback survey instrument that aligns with the four identified Dimensions of [Inclusive and Effective Teaching] and better implements best practices for maximizing the usefulness of student feedback and mitigating the potential effects of implicit biases.
  2. Development of new, more specific guidelines for the collection, reporting and evaluation of student feedback data, including quantitative student ratings and qualitative open-response student comments, as well as recommendations for improving current resources for collecting and reporting this data.
  3. Development of student awareness and training resources for improving student feedback response rates and the quality of feedback, in the form of a series of student-produced short videos to be disseminated through MU courses, internet platforms and social media.

New Student Feedback Survey for Fall 2023

First, TFELT’s report included a recommendation for developing a new survey instrument to replace the previous Student Evaluation of Teaching (SET) instrument. The new survey was developed, pilot-tested, and revised by an interdisciplinary Design Team appointed by the Provost in Spring 2021; the resulting instrument will launch at the end of the Fall 2023 semester.

Click here to access a sample copy of the new student feedback survey.

The new survey instrument was intended to be, according to the Working Group’s supplemental report, “one designed through methodological best practices for collecting psychometric data, tested for reliability and validity, and provid[ing] faculty and administrators with data that align well with MU’s new definition and dimensions for [Inclusive and Effective Teaching].” Survey items were also developed and revised to mitigate the usual limitations of such surveys. Specifically, the survey’s questions focus on collecting student perceptions of learning opportunities and experiences during the course, rather than asking students to evaluate the effectiveness of course design, curricular materials, and pedagogical choices, which they are less well qualified to assess (Linse, 2017; Abrami, 2001; Arreola, 2007; Kreitzer & Sweet-Cushman, 2021). The item design also followed data-informed recommendations in the research literature for mitigating the potential effects of implicit bias (Linse, 2017).
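
For readers unfamiliar with this kind of psychometric testing, the sketch below illustrates one standard internal-consistency check, Cronbach’s alpha, applied to a handful of items assumed to measure a single construct. It is a minimal sketch using hypothetical data, not the Design Team’s actual analysis, instrument items, or thresholds.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Estimate internal consistency for one construct.

    item_scores: 2-D array of shape (n_students, n_items) holding each
    student's rating on each survey item belonging to the construct.
    """
    k = item_scores.shape[1]                         # number of items
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of item sums
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: six students rating the three items of one construct
# on a 5-point scale.
responses = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
    [3, 4, 3],
])

print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
# Values above roughly 0.7 are conventionally read as acceptable reliability.
```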

The Design Team’s testing identified five valid constructs measuring distinct areas of student perceptions and carefully aligned these constructs with the four Dimensions, as illustrated below.

Following each semester, instructors will receive a summary data report for each surveyed course that presents means and response distributions for these five constructs.

Click here to access a sample copy of the new student feedback course data report.

In addition to the new quantitative survey items, the TFELT student feedback instrument includes two revised prompts for open-ended student comments. The previous instrument used the following prompts:

  1. What aspects of the teaching or content of this course were especially good?
  2. What changes could be made to improve the teaching or the content of this course?

In the new instrument, the open-response prompts are expressly framed to focus student feedback on specific course elements and activities connected to their learning experience:

  1. What are one to three specific things about the class that supported your learning?
  2. What are one to three specific things about the class that could be improved to better support your learning?

This kind of focused prompt is designed to increase the likelihood of receiving “specific comments” with concrete, actionable feedback. At the same time, it is designed to reduce the likelihood of “general comments” that offer overly vague feedback, of “contextual comments” that make observations irrelevant to actual teaching and learning elements (e.g., time of day, required versus elective course, etc.), and of comments tied closely to the instructor’s personal identity presentation, which are most vulnerable to implicit bias (Alhija & Fresko, 2009; Wallace, Lewis, & Allen, 2019).

New Procedures for Reporting Student Feedback Data

Accompanying the new student feedback survey instrument are new forms and processes (currently in development) for reporting student feedback data in dossiers for promotion and tenure.

First, educators on campus now have an explicit means for regularly contextualizing their student feedback data before it is used for summative review. In a dedicated section of the Self-Reflection form now required as part of the annual review process, instructors are encouraged to discuss and put into context the student feedback received in these construct areas and how it relates to the associated, broader dimensions of the TFELT model.

The Provost’s Office will also develop a new form for reporting student feedback data from the new instrument, replacing the “teaching effectiveness” table previously required in promotion and tenure dossiers. This form will present results in the five construct areas, including not only mean scores but also standard deviations and the distribution of responses across the scale, as recommended by a large body of research (Boysen et al., 2014; Abrami, 2001; Franklin, 2001; Theall & Franklin, 2001; Linse, 2017). The new forms for reporting student feedback data will, ultimately, also provide instructors with the opportunity to track progress in results for each construct in the same course over time.
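
As a concrete illustration of what such a report computes, the sketch below derives the mean, standard deviation, and response distribution for one construct from raw 5-point responses. The construct data here are hypothetical, and the actual MU report format may differ.

```python
from collections import Counter
from statistics import mean, stdev

# Hypothetical raw responses (5-point scale) for one construct in one course.
responses = [5, 4, 4, 5, 3, 4, 5, 2, 4, 5, 3, 4]

construct_mean = mean(responses)   # the familiar mean score
construct_sd = stdev(responses)    # spread around that mean

# Distribution of responses across the scale: how often each score was chosen.
counts = Counter(responses)
n = len(responses)

print(f"mean = {construct_mean:.2f}, sd = {construct_sd:.2f}")
for score in range(1, 6):
    print(f"  rated {score}: {counts.get(score, 0) / n:.0%}")
```

Reporting the full distribution alongside the mean shows, for example, whether a middling mean reflects uniformly moderate ratings or a split between very high and very low ones.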

In addition, the new student feedback data reports will include two significant changes from prior reports required for promotion and tenure:

  1. As the new survey instrument does not include such an item, student feedback data reports will not include a global “teaching effectiveness” mean. Researchers in this area are in near-consensus that such a score is an oversimplified metric that provides little to no substantive information and is particularly vulnerable to respondent implicit bias (Kreitzer & Sweet-Cushman, 2021; Fischer & Hänze, 2019; Boysen et al., 2014; Smith & Hawkins, 2011; Beran & Rokosh, 2009).
  2. Student feedback data reports will no longer include comparative data from other faculty teaching the same or similar courses. The literature is equally consistent in the conclusion that the complexities of course and instructor contexts make such comparisons less useful for providing relevant information and more vulnerable to bias effects (Kreitzer & Sweet-Cushman, 2021; Franklin, 2001; Linse, 2017). Instead, TFELT recommends replacing comparative means with baseline assumptions regarding effective results to be applied regardless of academic unit. As described by Linse (2017), “Faculty with most of their ratings distributed across scores of 3.5 – 5 on a 5-point scale . . . are doing well, even if they have a few stray scores in the lower ratings” (p. 96). (One way to apply this baseline is sketched after the rationale below.)

The rationale for these changes is stated most clearly by Kreitzer and Sweet-Cushman (2021): “Student evaluations are not designed to be used as a comparative metric across faculty (Franklin, 2001)” (p. 6, emphasis added).
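
To make the contrast with comparative means concrete, the sketch below shows one way Linse’s distribution-based baseline might be applied: checking what share of an instructor’s scores fall in the 3.5–5 band rather than comparing a mean against other faculty. The 75% cutoff for “most” is an illustrative assumption; Linse (2017) does not prescribe an exact proportion.

```python
def meets_baseline(scores, band_floor=3.5, most=0.75):
    """Return True if "most" scores fall in the band_floor-to-5 range.

    scores: ratings on a 5-point scale (construct or item means may be
    fractional). The 0.75 threshold for "most" is a hypothetical choice.
    """
    scores = list(scores)
    in_band = sum(1 for s in scores if band_floor <= s <= 5)
    return in_band / len(scores) >= most

# Hypothetical construct means for one course: a few stray low scores
# do not change the overall judgment.
scores = [4.8, 4.5, 4.2, 3.9, 4.6, 5.0, 2.5, 4.1, 3.7, 4.4]
print(meets_baseline(scores))  # True: 9 of 10 scores lie in the 3.5-5 band
```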

Finally, with regard to written student comments that are discriminatory, harassing, or otherwise abusive to the instructor, the Provost’s Office has adopted TFELT’s recommendation to protect faculty from such comments by permitting faculty to work in consultation with department chairs to redact those comments from the data set provided for promotion or tenure. A recent call letter for dossiers provides:

Examples of discriminatory comments that may be removed include:

  • Comments that reference race, gender, ethnicity, etc.
  • Comments that reference instructor appearance (e.g., clothing, hair, physical features, etc.)
  • Comments using profane, offensive, hostile, or otherwise inappropriate language
  • Comments that are otherwise racist, sexist, homophobic, etc. 

Through these new tools and processes, educators have an opportunity to use summative student feedback data to construct a narrative of their teaching that can identify and discuss relevant connections to other sources of data (e.g., peer review, self-generated evidence of student learning outcomes, etc.). This narrative can provide a richer, multi-faceted account of strengths and challenge areas in teaching that can form the basis for setting and pursuing actionable goals for continuing development in teaching for student learning.

Resources for Using Student Feedback Data

The MU Teaching for Learning Center has created a mini-course on Canvas to help educators on campus collect, interpret, and report student feedback data more effectively. This mini-course is one in a series developed to help members of the campus community better understand and use TFELT’s recommendations for teaching evaluation.

Click here to access these Canvas mini-courses.

In addition, the Teaching for Learning Center has released two videos, written and produced by students, aimed at promoting student awareness of the importance of completing student feedback surveys and providing guidelines for offering useful, actionable feedback to instructors. These videos will be made available to educators on campus and distributed to students via campus student government and social media beginning in Fall 2023.

Additional resources in this area may be found on the “Resources” page of this website.

References

Abrami, P. C. (2001). Improving judgements about teaching effectiveness using teacher rating forms. New Directions for Institutional Research, 109, 59–87.

Alhija, F. N., & Fresko, B. (2009). Student evaluation of instruction: What can be learned from students’ written comments? Studies in Educational Evaluation, 35, 37–44. https://doi.org/10.1016/j.stueduc.2009.01.002

Arreola, R. A. (2007). Developing a comprehensive faculty evaluation system (3rd ed.). Anker.

Beran, T. N., & Rokosh, J.L. (2009). The consequential validity of student ratings: What do instructors really think? Alberta Journal of Educational Research, 55(4), 497–511.

Boysen, G. A., Kelly, T. J., Raesly, H. N., & Casner, R. W. (2014). The (mis)interpretation of teaching evaluations by college faculty and administrators. Assessment & Evaluation in Higher Education, 39(6), 641–656.

Caulfield, J. (2007). What motivates students to provide feedback to teachers about teaching and learning? An expectancy theory perspective. International Journal for the Scholarship of Teaching and Learning, 1(1), Art. 7. https://doi.org/10.20429/ijsotl.2007.010107

Center for Teaching. (2023). Mid-semester feedback through small group analysis. Vanderbilt University. https://cft.vanderbilt.edu/services/individual/small-group-analysis/

Chapman, D. D., & Joines, J. A. (2017). Strategies for increasing response rates for online end-of-course evaluations. International Journal of Teaching and Learning in Higher Education, 29(1), 47–60.

Cook-Sather, A., Bahti, M., & Ntem, A. (2019). Pedagogical partnerships: A how-to guide for faculty, students, and academic developers in higher education (Online ed.). Elon University Center for Engaged Learning. https://www.centerforengagedlearning.org/books/pedagogical-partnerships/

Cook-Sather, A., Bovill, C., & Felten, P. (2014). Engaging students as partners in learning and teaching: A guide for faculty. Jossey-Bass.

Crews, T. B., & Curtis, D. F. (2011). Online course evaluations: Faculty perspective and strategies for improved response rates. Assessment & Evaluation in Higher Education, 36(7), 865–878.

Danley, A. (2019). Using “start, stop, and continue” to gather student feedback to improve instruction. In A. deNoyelles, A. Albrecht, S. Bauer, & S. Wyatt (Eds.), Teaching Online Pedagogical Repository. Orlando, FL: University of Central Florida Center for Distributed Learning. https://topr.online.ucf.edu/using-start-stop-and-continue-to-gather-student-feedback-to-improve-instruction/

Fischer, E., & Hänze, M. (2019). Bias hypotheses under scrutiny: Investigating the validity of student assessment of university teaching by means of external observer ratings. Assessment & Evaluation in Higher Education, 44(5), 772–786.

Franklin, J. (2001). Interpreting the numbers: Using a narrative to help others read student evaluations of your teaching accurately. New Directions for Teaching and Learning, 87, 59–87.

Goos, M., & Salomons, A. (2017). Measuring teaching quality in higher education: Assessing selection bias in course evaluations. Research in Higher Education, 58, 341–364. https://doi.org/10.1007/s11162-016-9429-8

Gunn, E. (2014). Using clickers to collect formative feedback on teaching: A tool for faculty development. International Journal for the Scholarship of Teaching & Learning, 8(1), Art. 11. https://doi.org/10.20429/ijsotl.2014.080111

Hurney, C., Harris, N., Bates Prins, S., & Kruck, S. E. (2014). The impact of a learner-centered, mid-semester course evaluation on students. Journal of Faculty Development, 28(3), 55–62.

Kreitzer, R. J., & Sweet-Cushman, J. (2021). Evaluating student evaluations of teaching: A review of measurement and equity bias in SETs and recommendations for ethical reform. Journal of Academic Ethics. Advance online publication. https://doi.org/10.1007/s10805-021-09400-w

Linse, A. R. (2017). Interpreting and using student ratings data: Guidance for faculty serving as administrators and on evaluation committees. Studies in Educational Evaluation, 54, 94–106.

MacNell, L., Driscoll, A., & Hunt, A. (2015). What’s in a name: Exposing gender bias in student ratings of teaching. Innovative Higher Education, 40(4), 291–303.

Marsh, H. W. (2007). Students’ evaluations of university teaching: A multidimensional perspective. In R. P. Perry & J. C. Smart (Eds.), The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 319–384). Springer.

Ravenscroft, M., & Enyeart, C. (2009). Online student course evaluations: Strategies for increasing student participation rates. Washington, DC: Custom Research Brief, Education Advisory Board. http://tcuespot.wikispaces.com/file/view/Online+Student+Course+Evaluations+-+Strategies+for+Increasing+Student+Participation+Rates.pdf

Redmond, V. M. (1982). A process of midterm evaluation incorporating small group discussion of a course and its effect on student motivation. Seattle, WA: University of Washington. (ERIC Document Reproduction Service No. ED 217 953)

Smith, B. P., & Hawkins, B. (2011). Examining student evaluations of black college faculty: Does race matter? The Journal of Negro Education, 80(2), 149–162.

Spooren, P., Brockx, B., & Mortelmans, D. (2013). On the validity of student evaluation of teaching: The state of the art. Review of Educational Research, 83(4), 598–642.

Teaching@UW. (2023). Gathering student feedback. University of Washington. https://teaching.washington.edu/reflect-and-iterate/gathering-student-feedback/

Teaching Commons (n.d.). Gathering student feedback. Stanford University. https://teachingcommons.stanford.edu/teaching-guides/foundations-course-design/improving-teaching-effectiveness/gathering-student

Theall, M., & Franklin, J. (2001). Looking for bias in all the wrong places: A search for truth or a witch hunt in student ratings of instruction? New Directions for Institutional Research, 109, 45–56.

Weimer, M. (2016, June 15). Benefits of talking with students about mid-course evaluations. The Teaching Professor Blog. https://www.facultyfocus.com/articles/faculty-development/benefits-talking-students-mid-course-evaluations/

Wickramasinghe, S. R., & Timpson, W. M. (2006). Mid-semester student feedback enhances student learning. Education for Chemical Engineers, 1(1), 123–133. https://doi.org/10.1205/ece06012

Yao, Y., & Grady, M. L. (2005). How do faculty make formative use of student evaluation feedback? A multiple case study. Journal of Personnel Evaluation in Education, 18, 107–126. https://doi.org/10.1007/s11092-006-9000-9